Optimally-weighted Estimators of the Maximum Mean Discrepancy for Likelihood-Free Inference

dc.contributor: Aalto-yliopisto [fi]
dc.contributor: Aalto University [en]
dc.contributor.author: Bharti, Ayush [en_US]
dc.contributor.author: Naslidnyk, Masha [en_US]
dc.contributor.author: Key, Oscar [en_US]
dc.contributor.author: Kaski, Samuel [en_US]
dc.contributor.author: Briol, François-Xavier [en_US]
dc.contributor.department: Department of Computer Science [en]
dc.contributor.editor: Krause, Andreas [en_US]
dc.contributor.editor: Brunskill, Emma [en_US]
dc.contributor.editor: Cho, Kyunghyun [en_US]
dc.contributor.editor: Engelhardt, Barbara [en_US]
dc.contributor.editor: Sabato, Sivan [en_US]
dc.contributor.editor: Scarlett, Jonathan [en_US]
dc.contributor.groupauthor: Computer Science Professors [en]
dc.contributor.groupauthor: Computer Science - Artificial Intelligence and Machine Learning (AIML) [en]
dc.contributor.groupauthor: Finnish Center for Artificial Intelligence, FCAI [en]
dc.contributor.groupauthor: Probabilistic Machine Learning [en]
dc.contributor.groupauthor: Helsinki Institute for Information Technology (HIIT) [en]
dc.contributor.groupauthor: Professorship Kaski Samuel [en]
dc.contributor.organization: University College London [en_US]
dc.date.accessioned: 2023-09-13T06:49:02Z
dc.date.available: 2023-09-13T06:49:02Z
dc.date.issued: 2023-07 [en_US]
dc.description.abstract: Likelihood-free inference methods typically make use of a distance between simulated and real data. A common example is the maximum mean discrepancy (MMD), which has previously been used for approximate Bayesian computation, minimum distance estimation, generalised Bayesian inference, and within the nonparametric learning framework. The MMD is commonly estimated at a root-m rate, where m is the number of simulated samples. This can lead to significant computational challenges since a large m is required to obtain an accurate estimate, which is crucial for parameter estimation. In this paper, we propose a novel estimator for the MMD with significantly improved sample complexity. The estimator is particularly well suited for computationally expensive smooth simulators with low- to mid-dimensional inputs. This claim is supported through both theoretical results and an extensive simulation study on benchmark simulators. [en]
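Background note (not part of the record's metadata): the quantities the abstract refers to can be sketched as follows. This is the standard definition of the squared MMD and its usual equally-weighted empirical estimator, whose root-m error rate is the one mentioned in the abstract; it is generic background, not the optimally-weighted estimator proposed in the paper. The symbols P (data distribution), Q_theta (simulator distribution), k (kernel), n, and m are standard notation assumed here, not taken from the record.

% Squared MMD between the data distribution P and the simulator
% distribution Q_theta, for a positive-definite kernel k:
\[
  \mathrm{MMD}^2(P, Q_\theta)
  = \mathbb{E}_{y,y'\sim P}[k(y,y')]
  - 2\,\mathbb{E}_{y\sim P,\,x\sim Q_\theta}[k(y,x)]
  + \mathbb{E}_{x,x'\sim Q_\theta}[k(x,x')].
\]
% Standard plug-in estimator from n observations y_1,...,y_n and
% m simulated samples x_1,...,x_m, each simulated point weighted 1/m.
% Its error in the simulator-dependent terms decays at the root-m rate,
% O(m^{-1/2}), which is why a large m is needed for an accurate estimate:
\[
  \widehat{\mathrm{MMD}}^2
  = \frac{1}{n^2}\sum_{i,j=1}^{n} k(y_i,y_j)
  - \frac{2}{nm}\sum_{i=1}^{n}\sum_{j=1}^{m} k(y_i,x_j)
  + \frac{1}{m^2}\sum_{i,j=1}^{m} k(x_i,x_j).
\]

As the title suggests, the paper's estimator replaces the uniform 1/m weights on the simulated samples with optimally chosen ones to improve sample complexity; the specific weighting scheme is given in the paper itself and is not reproduced here.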
dc.description.version: Peer reviewed [en]
dc.format.extent: 24
dc.format.extent: 2289-2312
dc.format.mimetype: application/pdf [en_US]
dc.identifier.citation: Bharti, A, Naslidnyk, M, Key, O, Kaski, S & Briol, F X 2023, Optimally-weighted Estimators of the Maximum Mean Discrepancy for Likelihood-Free Inference. in A Krause, E Brunskill, K Cho, B Engelhardt, S Sabato & J Scarlett (eds), Proceedings of the 40th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 202, JMLR, pp. 2289-2312, International Conference on Machine Learning, Honolulu, Hawaii, United States, 23/07/2023. <https://proceedings.mlr.press/v202/bharti23a.html> [en]
dc.identifier.issn: 2640-3498
dc.identifier.other: PURE UUID: c802c5ac-ca5f-4e23-98e1-1936c3929132 [en_US]
dc.identifier.other: PURE ITEMURL: https://research.aalto.fi/en/publications/c802c5ac-ca5f-4e23-98e1-1936c3929132 [en_US]
dc.identifier.other: PURE LINK: http://www.scopus.com/inward/record.url?scp=85174387250&partnerID=8YFLogxK [en_US]
dc.identifier.other: PURE LINK: https://proceedings.mlr.press/v202/bharti23a.html [en_US]
dc.identifier.other: PURE FILEURL: https://research.aalto.fi/files/120697402/SCI_Bharti_etal_ICML_2023.pdf [en_US]
dc.identifier.uri: https://aaltodoc.aalto.fi/handle/123456789/123508
dc.identifier.urn: URN:NBN:fi:aalto-202309135868
dc.language.iso: en [en]
dc.publisher: PMLR
dc.relation.ispartof: International Conference on Machine Learning [en]
dc.relation.ispartofseries: Proceedings of the 40th International Conference on Machine Learning [en]
dc.relation.ispartofseries: Proceedings of Machine Learning Research [en]
dc.relation.ispartofseries: Volume 202 [en]
dc.rights: openAccess [en]
dc.title: Optimally-weighted Estimators of the Maximum Mean Discrepancy for Likelihood-Free Inference [en]
dc.type: Conference article in proceedings [fi]
dc.type.version: publishedVersion