Smoothing ADMM for Sparse-Penalized Quantile Regression with Non-Convex Penalties
dc.contributor | Aalto-yliopisto | fi |
dc.contributor | Aalto University | en |
dc.contributor.author | Mirzaeifard, Reza | en_US |
dc.contributor.author | Venkategowda, Naveen K.D. | en_US |
dc.contributor.author | Gogineni, Vinay Chakravarthi | en_US |
dc.contributor.author | Werner, Stefan | en_US |
dc.contributor.department | Department of Information and Communications Engineering | en |
dc.contributor.groupauthor | Risto Wichman Group | en |
dc.contributor.organization | Linköping University | en_US |
dc.contributor.organization | University of Southern Denmark | en_US |
dc.contributor.organization | Norwegian University of Science and Technology | en_US |
dc.date.accessioned | 2024-01-31T08:24:48Z | |
dc.date.available | 2024-01-31T08:24:48Z | |
dc.date.issued | 2024 | en_US |
dc.description | Funding Information: This work was supported by the Research Council of Norway. Publisher Copyright: Authors | |
dc.description.abstract | This paper investigates quantile regression in the presence of non-convex and non-smooth sparse penalties, such as the minimax concave penalty (MCP) and smoothly clipped absolute deviation (SCAD). The non-smooth and non-convex nature of these problems often leads to convergence difficulties for many algorithms. While iterative techniques such as coordinate descent and local linear approximation can facilitate convergence, the process is often slow. This sluggish pace is primarily due to the need to run these approximation techniques until full convergence at each step, a requirement we term a secondary convergence iteration. To accelerate convergence, we employ the alternating direction method of multipliers (ADMM) and introduce a novel single-loop smoothing ADMM algorithm with an increasing penalty parameter, named SIAD, specifically tailored for sparse-penalized quantile regression. We first delve into the convergence properties of the proposed SIAD algorithm and establish the necessary conditions for convergence. Theoretically, we confirm a convergence rate of o(k^{-1/4}) for the sub-gradient bound of the augmented Lagrangian, where k denotes the number of iterations. Subsequently, we provide numerical results to showcase the effectiveness of the SIAD algorithm. Our findings highlight that the SIAD method outperforms existing approaches, providing a faster and more stable solution for sparse-penalized quantile regression. | en
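As an illustration of the splitting the abstract describes, below is a minimal single-loop ADMM sketch for quantile regression with an MCP penalty. This is not the authors' SIAD algorithm: it uses the exact proximal operator of the pinball loss rather than the paper's smoothing, the beta-update is a single linearized proximal step, and all function names, parameter values, and the geometric penalty-growth schedule are illustrative assumptions.

```python
import numpy as np

def prox_pinball(v, tau, t):
    # Closed-form proximal operator of t * rho_tau, where
    # rho_tau(z) = z * (tau - 1{z < 0}) is the pinball (check) loss.
    return v - np.clip(v, -t * (1.0 - tau), t * tau)

def prox_mcp(v, lam, gamma, rho):
    # Proximal operator of the MCP penalty with parameter 1/rho
    # (requires gamma * rho > 1 so the scalar subproblem is convex).
    a = np.abs(v)
    shrunk = np.sign(v) * (a - lam / rho) / (1.0 - 1.0 / (gamma * rho))
    return np.where(a <= lam / rho, 0.0, np.where(a <= gamma * lam, shrunk, v))

def single_loop_admm_qr(X, y, tau=0.5, lam=25.0, gamma=3.0,
                        rho0=1.0, rho_growth=1.005, rho_max=50.0, iters=3000):
    # Illustrative single-loop ADMM for
    #   min_beta  sum_i rho_tau(y_i - x_i' beta) + sum_j MCP(beta_j)
    # using the splitting X beta + z = y.  The beta-update takes one
    # linearized proximal step (no inner approximation loop); the penalty
    # parameter rho grows geometrically, mimicking the increasing-penalty
    # idea in the abstract.  Schedules and defaults are assumptions.
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the beta-block
    beta, z, u, rho = np.zeros(p), y.copy(), np.zeros(n), rho0
    for _ in range(iters):
        r = X @ beta + z - y + u / rho     # scaled primal residual
        beta = prox_mcp(beta - (X.T @ r) / L, lam, gamma, rho * L)
        z = prox_pinball(y - X @ beta - u / rho, tau, 1.0 / rho)
        u = u + rho * (X @ beta + z - y)   # dual ascent step
        rho = min(rho * rho_growth, rho_max)
    return beta
```

On synthetic data with a sparse coefficient vector, the single loop drives the small coefficients to (near) zero through the MCP proximal step while the dual variable stays bounded inside the pinball subgradient range, which is what keeps the increasing-penalty iteration stable.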
dc.description.version | Peer reviewed | en |
dc.format.extent | 16 | |
dc.format.mimetype | application/pdf | en_US |
dc.identifier.citation | Mirzaeifard, R, Venkategowda, N K D, Gogineni, V C & Werner, S 2024, 'Smoothing ADMM for Sparse-Penalized Quantile Regression with Non-Convex Penalties', IEEE Open Journal of Signal Processing, vol. 5, pp. 213-228. https://doi.org/10.1109/OJSP.2023.3344395 | en
dc.identifier.doi | 10.1109/OJSP.2023.3344395 | en_US |
dc.identifier.issn | 2644-1322 | |
dc.identifier.other | PURE UUID: d73a65c0-8975-4065-bc05-a0f820bed400 | en_US |
dc.identifier.other | PURE ITEMURL: https://research.aalto.fi/en/publications/d73a65c0-8975-4065-bc05-a0f820bed400 | en_US |
dc.identifier.other | PURE LINK: http://www.scopus.com/inward/record.url?scp=85181827669&partnerID=8YFLogxK | |
dc.identifier.other | PURE FILEURL: https://research.aalto.fi/files/135307728/Smoothing_ADMM_for_Sparse-Penalized_Quantile_Regression_with_Non-Convex_Penalties.pdf | en_US |
dc.identifier.uri | https://aaltodoc.aalto.fi/handle/123456789/126597 | |
dc.identifier.urn | URN:NBN:fi:aalto-202401312264 | |
dc.language.iso | en | en |
dc.publisher | IEEE | |
dc.relation.ispartofseries | IEEE Open Journal of Signal Processing | en
dc.relation.ispartofseries | Volume 5, pp. 213-228 | en |
dc.rights | openAccess | en |
dc.subject.keyword | ADMM | en_US |
dc.subject.keyword | Convergence | en_US |
dc.subject.keyword | Convex functions | en_US |
dc.subject.keyword | non-smooth and non-convex penalties | en_US |
dc.subject.keyword | Optimization | en_US |
dc.subject.keyword | Prediction algorithms | en_US |
dc.subject.keyword | Quantile regression | en_US |
dc.subject.keyword | Signal processing | en_US |
dc.subject.keyword | Signal processing algorithms | en_US |
dc.subject.keyword | Smoothing methods | en_US |
dc.subject.keyword | sparse learning | en_US |
dc.title | Smoothing ADMM for Sparse-Penalized Quantile Regression with Non-Convex Penalties | en |
dc.type | A1 Original article in a scientific journal | en |
dc.type.version | publishedVersion |