Variable selection in convex quantile regression: L1-norm or L0-norm regularization?

A1 Original research article in a scientific journal
Date
2023-02-16
Language
en
Pages
338-355 (18 pages)
Series
European Journal of Operational Research, Volume 305, Issue 1
Abstract
The curse of dimensionality is a recognized challenge in nonparametric estimation. This paper develops a new L0-norm regularization approach to convex quantile and expectile regressions for subset variable selection. We show how to use mixed-integer programming to solve the proposed L0-norm regularization approach in practice and build a link to the commonly used L1-norm regularization approach. A Monte Carlo study is performed to compare the finite-sample performance of the proposed L0-penalized convex quantile and expectile regression approaches with that of the L1-norm regularization approaches. The proposed approach is further applied to benchmark the sustainable development performance of the OECD countries and to empirically analyze the accuracy of the dimensionality reduction in variable selection. The results from the simulation and application illustrate that the proposed L0-norm regularization approach can more effectively address the curse of dimensionality than the L1-norm regularization approach in multidimensional spaces.
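The L0-norm regularization described in the abstract turns convex quantile regression into a mixed-integer program. The following is a minimal sketch of how such a cardinality-constrained convex quantile regression could be set up, assuming cvxpy with a mixed-integer solver is installed; the function name cqr_l0, the big-M constant M, the budget k, and the monotonicity restriction on the slopes are illustrative assumptions, not details taken from the paper.

import numpy as np
import cvxpy as cp

def cqr_l0(X, y, tau=0.5, k=2, M=100.0):
    # Sketch of cardinality-constrained (L0) convex quantile regression.
    # X: (n, d) inputs, y: (n,) outputs, tau: quantile,
    # k: maximum number of selected variables, M: big-M linking constant.
    n, d = X.shape
    alpha = cp.Variable(n)                    # observation-specific intercepts
    beta = cp.Variable((n, d), nonneg=True)   # observation-specific slopes (monotonicity assumed)
    ep = cp.Variable(n, nonneg=True)          # positive residuals
    em = cp.Variable(n, nonneg=True)          # negative residuals
    delta = cp.Variable(d, boolean=True)      # variable-selection indicators

    cons = [cp.sum(delta) <= k]               # L0 budget: at most k variables selected
    for i in range(n):
        # regression equation with asymmetric residual split
        cons.append(alpha[i] + X[i] @ beta[i] + ep[i] - em[i] == y[i])
        # big-M link: slopes on variable j can be nonzero only if delta_j = 1
        cons.append(beta[i] <= M * delta)
        # Afriat-type concavity constraints across observations
        for h in range(n):
            cons.append(alpha[i] + X[i] @ beta[i] <= alpha[h] + X[i] @ beta[h])

    # asymmetrically weighted absolute deviations (quantile loss)
    obj = cp.Minimize(tau * cp.sum(ep) + (1 - tau) * cp.sum(em))
    cp.Problem(obj, cons).solve()
    return beta.value, delta.value

By contrast, the L1-norm alternative discussed in the abstract would drop the binary indicators and instead add a penalty proportional to the sum of absolute slope coefficients to the objective, keeping the problem a linear program at the cost of shrinking, rather than exactly zeroing, the coefficients.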
Keywords
Variable selection, Convex quantile regression, Regularization, SDG evaluation
Citation
Dai, S 2023, 'Variable selection in convex quantile regression: L1-norm or L0-norm regularization?', European Journal of Operational Research, vol. 305, no. 1, pp. 338-355. https://doi.org/10.1016/j.ejor.2022.05.041