Sparsity information and regularization in the horseshoe and other shrinkage priors

dc.contributor: Aalto-yliopisto (fi)
dc.contributor: Aalto University (en)
dc.contributor.author: Piironen, Juho
dc.contributor.author: Vehtari, Aki
dc.contributor.department: Department of Computer Science
dc.contributor.groupauthor: Centre of Excellence in Computational Inference, COIN
dc.contributor.groupauthor: Professorship Vehtari Aki
dc.contributor.groupauthor: Helsinki Institute for Information Technology (HIIT)
dc.contributor.groupauthor: Probabilistic Machine Learning
dc.date.accessioned: 2018-02-09T09:53:40Z
dc.date.available: 2018-02-09T09:53:40Z
dc.date.issued: 2017
dc.description.abstract: The horseshoe prior has proven to be a noteworthy alternative for sparse Bayesian estimation, but has previously suffered from two problems. First, there has been no systematic way of specifying a prior for the global shrinkage hyperparameter based on the prior information about the degree of sparsity in the parameter vector. Second, the horseshoe prior has the undesired property that there is no possibility of specifying separately information about sparsity and the amount of regularization for the largest coefficients, which can be problematic with weakly identified parameters, such as the logistic regression coefficients in the case of data separation. This paper proposes solutions to both of these problems. We introduce a concept of effective number of nonzero parameters, show an intuitive way of formulating the prior for the global hyperparameter based on the sparsity assumptions, and argue that the previous default choices are dubious based on their tendency to favor solutions with more unshrunk parameters than we typically expect a priori. Moreover, we introduce a generalization to the horseshoe prior, called the regularized horseshoe, that allows us to specify a minimum level of regularization to the largest values. We show that the new prior can be considered as the continuous counterpart of the spike-and-slab prior with a finite slab width, whereas the original horseshoe resembles the spike-and-slab with an infinitely wide slab. Numerical experiments on synthetic and real world data illustrate the benefit of both of these theoretical advances.
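The regularized horseshoe described in the abstract can be sketched as a prior sampler. This is a minimal NumPy illustration, not the authors' reference code; it assumes the standard parameterization from the paper, with illustrative values for the global scale tau and the slab scale c:

```python
import numpy as np

def sample_regularized_horseshoe(n_draws, dim, tau=0.1, c=2.0, rng=None):
    """Draw coefficient vectors from the regularized horseshoe prior:

        lambda_j        ~ Half-Cauchy(0, 1)
        lambda_tilde_j^2 = c^2 * lambda_j^2 / (c^2 + tau^2 * lambda_j^2)
        beta_j          ~ Normal(0, tau * lambda_tilde_j)

    A finite slab scale c caps the prior scale of the largest coefficients
    near c; as c -> infinity the original horseshoe is recovered.
    """
    rng = np.random.default_rng(rng)
    # Local scales: absolute value of a standard Cauchy is half-Cauchy(0, 1).
    lam = np.abs(rng.standard_cauchy(size=(n_draws, dim)))
    lam_tilde2 = c**2 * lam**2 / (c**2 + tau**2 * lam**2)
    return rng.normal(0.0, tau * np.sqrt(lam_tilde2))

def tau0(p0, D, sigma, n):
    """Sparsity-based global-scale guess: tau0 = p0 / (D - p0) * sigma / sqrt(n),
    where p0 is the prior guess for the number of nonzero coefficients
    (an assumption the analyst supplies), D the total number of coefficients,
    sigma the noise scale, and n the number of observations.
    """
    return p0 / (D - p0) * sigma / np.sqrt(n)
```

For instance, expecting about 5 relevant coefficients out of 100 with unit noise and 400 observations gives `tau0(5, 100, 1.0, 400)`, a much smaller global scale than the earlier default choices the abstract argues against.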
dc.description.version: Peer reviewed
dc.format.mimetype: application/pdf
dc.identifier.citation: Piironen, J. & Vehtari, A. 2017, 'Sparsity information and regularization in the horseshoe and other shrinkage priors', Electronic Journal of Statistics, vol. 11, no. 2, pp. 5018-5051. https://doi.org/10.1214/17-EJS1337SI
dc.identifier.doi: 10.1214/17-EJS1337SI
dc.identifier.issn: 1935-7524
dc.identifier.other: PURE UUID: 14a8ef3d-47f8-4069-a75a-fbc6ad594f4c
dc.identifier.other: PURE ITEMURL: https://research.aalto.fi/en/publications/14a8ef3d-47f8-4069-a75a-fbc6ad594f4c
dc.identifier.other: PURE LINK: https://projecteuclid.org/euclid.ejs/1513306866#info
dc.identifier.other: PURE FILEURL: https://research.aalto.fi/files/16590023/euclid.ejs.1513306866.pdf
dc.identifier.uri: https://aaltodoc.aalto.fi/handle/123456789/29742
dc.identifier.urn: URN:NBN:fi:aalto-201802091238
dc.language.iso: en
dc.publisher: Institute of Mathematical Statistics
dc.relation.ispartofseries: Electronic Journal of Statistics
dc.relation.ispartofseries: Volume 11, issue 2, pp. 5018-5051
dc.rights: openAccess
dc.subject.keyword: Bayesian inference
dc.subject.keyword: sparse estimation
dc.subject.keyword: shrinkage priors
dc.subject.keyword: horseshoe prior
dc.title: Sparsity information and regularization in the horseshoe and other shrinkage priors
dc.type: A1 Original research article in a scientific journal (fi: A1 Alkuperäisartikkeli tieteellisessä aikakauslehdessä)
dc.type.version: publishedVersion