Smoothing ADMM for Sparse-Penalized Quantile Regression with Non-Convex Penalties

dc.contributor: Aalto-yliopisto [fi]
dc.contributor: Aalto University [en]
dc.contributor.author: Mirzaeifard, Reza [en_US]
dc.contributor.author: Venkategowda, Naveen K.D. [en_US]
dc.contributor.author: Gogineni, Vinay Chakravarthi [en_US]
dc.contributor.author: Werner, Stefan [en_US]
dc.contributor.department: Department of Information and Communications Engineering [en]
dc.contributor.groupauthor: Risto Wichman Group [en]
dc.contributor.organization: Linköping University [en_US]
dc.contributor.organization: University of Southern Denmark [en_US]
dc.contributor.organization: Norwegian University of Science and Technology [en_US]
dc.date.accessioned: 2024-01-31T08:24:48Z
dc.date.available: 2024-01-31T08:24:48Z
dc.date.issued: 2024 [en_US]
dc.description: Funding Information: This work was supported by the Research Council of Norway. Publisher Copyright: Authors
dc.description.abstract: This paper investigates quantile regression in the presence of non-convex and non-smooth sparse penalties, such as the minimax concave penalty (MCP) and the smoothly clipped absolute deviation (SCAD). The non-smooth and non-convex nature of these problems often leads to convergence difficulties for many algorithms. While iterative techniques such as coordinate descent and local linear approximation can facilitate convergence, the process is often slow. This sluggish pace is primarily due to the need to run these approximation techniques until full convergence at each step, a requirement we term a secondary convergence iteration. To accelerate the convergence speed, we employ the alternating direction method of multipliers (ADMM) and introduce a novel single-loop smoothing ADMM algorithm with an increasing penalty parameter, named SIAD, specifically tailored to sparse-penalized quantile regression. We first delve into the convergence properties of the proposed SIAD algorithm and establish the necessary conditions for convergence. Theoretically, we confirm a convergence rate of o(k^(-1/4)) for the sub-gradient bound of the augmented Lagrangian, where k denotes the number of iterations. Subsequently, we provide numerical results to showcase the effectiveness of the SIAD algorithm. Our findings highlight that the SIAD method outperforms existing approaches, providing a faster and more stable solution for sparse-penalized quantile regression. [en]
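The ingredients named in the abstract (the quantile loss, a non-convex MCP penalty, and a single-loop smoothing ADMM with an increasing penalty parameter) can be illustrated with a minimal sketch. This is NOT the authors' SIAD algorithm: it assumes a Nesterov-style smoothing of the pinball loss, a linearized (one gradient step) x-update, and a geometric schedule for the penalty parameter rho; all function names, parameter values, and the synthetic data are hypothetical.

```python
import numpy as np

def mcp_prox(v, lam, gamma, t):
    """Closed-form proximal operator of the MCP penalty with step size t
    (valid for gamma > t): set to zero, shrink, or keep, by magnitude."""
    out = np.zeros_like(v)
    a = np.abs(v)
    shrink = (a > lam * t) & (a <= gamma * lam)
    out[shrink] = np.sign(v[shrink]) * (a[shrink] - lam * t) / (1.0 - t / gamma)
    keep = a > gamma * lam
    out[keep] = v[keep]          # large coefficients are left unbiased
    return out

def smoothing_admm_qr(A, y, tau=0.5, lam=0.05, gamma=3.0, mu=0.1,
                      rho=1.0, rho_growth=1.005, iters=1000):
    """Single-loop smoothing-ADMM sketch for MCP-penalized quantile regression.

    The pinball loss is smoothed so its gradient is clip(r/mu, tau-1, tau);
    each ADMM iteration takes ONE gradient step on the smoothed loss (no
    secondary convergence iteration), applies the MCP prox, and grows rho."""
    n, p = A.shape
    x = np.zeros(p); z = np.zeros(p); u = np.zeros(p)
    sigma2 = np.linalg.norm(A, 2) ** 2            # squared spectral norm of A
    for _ in range(iters):
        r = y - A @ x
        grad = -A.T @ np.clip(r / mu, tau - 1.0, tau) / n + rho * (x - z + u)
        x = x - grad / (sigma2 / (n * mu) + rho)  # step 1/L, L = Lipschitz bound
        z = mcp_prox(x + u, lam, gamma, 1.0 / rho)
        u = u + x - z
        rho *= rho_growth                         # increasing penalty parameter
        u /= rho_growth                           # keep the unscaled dual rho*u
    return z

# Hypothetical synthetic example: sparse median regression (tau = 0.5).
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 10))
x_true = np.zeros(10); x_true[:3] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(200)
x_hat = smoothing_admm_qr(A, y)
```

The single loop is the point: the x-update is linearized to one gradient step instead of being solved to completion, which is the "secondary convergence iteration" the abstract says slows competing approaches.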
dc.description.version: Peer reviewed [en]
dc.format.extent: 16
dc.format.mimetype: application/pdf [en_US]
dc.identifier.citation: Mirzaeifard, R, Venkategowda, N K D, Gogineni, V C & Werner, S 2024, 'Smoothing ADMM for Sparse-Penalized Quantile Regression with Non-Convex Penalties', IEEE Open Journal of Signal Processing, vol. 5, pp. 213-228. https://doi.org/10.1109/OJSP.2023.3344395 [en]
dc.identifier.doi: 10.1109/OJSP.2023.3344395 [en_US]
dc.identifier.issn: 2644-1322
dc.identifier.other: PURE UUID: d73a65c0-8975-4065-bc05-a0f820bed400 [en_US]
dc.identifier.other: PURE ITEMURL: https://research.aalto.fi/en/publications/d73a65c0-8975-4065-bc05-a0f820bed400 [en_US]
dc.identifier.other: PURE LINK: http://www.scopus.com/inward/record.url?scp=85181827669&partnerID=8YFLogxK
dc.identifier.other: PURE FILEURL: https://research.aalto.fi/files/135307728/Smoothing_ADMM_for_Sparse-Penalized_Quantile_Regression_with_Non-Convex_Penalties.pdf [en_US]
dc.identifier.uri: https://aaltodoc.aalto.fi/handle/123456789/126597
dc.identifier.urn: URN:NBN:fi:aalto-202401312264
dc.language.iso: en [en]
dc.publisher: IEEE
dc.relation.ispartofseries: IEEE Open Journal of Signal Processing [en]
dc.relation.ispartofseries: Volume 5, pp. 213-228 [en]
dc.rights: openAccess [en]
dc.subject.keyword: ADMM [en_US]
dc.subject.keyword: Convergence [en_US]
dc.subject.keyword: Convex functions [en_US]
dc.subject.keyword: non-smooth and non-convex penalties [en_US]
dc.subject.keyword: Optimization [en_US]
dc.subject.keyword: Prediction algorithms [en_US]
dc.subject.keyword: Quantile regression [en_US]
dc.subject.keyword: Signal processing [en_US]
dc.subject.keyword: Signal processing algorithms [en_US]
dc.subject.keyword: Smoothing methods [en_US]
dc.subject.keyword: sparse learning [en_US]
dc.title: Smoothing ADMM for Sparse-Penalized Quantile Regression with Non-Convex Penalties [en]
dc.type: A1 Alkuperäisartikkeli tieteellisessä aikakauslehdessä (Original article in a scientific journal) [fi]
dc.type.version: publishedVersion
