A Fixed-Point of View on Gradient Methods for Big Data

dc.contributor: Aalto-yliopisto (fi)
dc.contributor: Aalto University (en)
dc.contributor.author: Jung, Alexander (en_US)
dc.contributor.department: Department of Computer Science (en)
dc.contributor.groupauthor: Professorship Jung Alexander (en)
dc.date.accessioned: 2018-02-09T10:06:48Z
dc.date.available: 2018-02-09T10:06:48Z
dc.date.issued: 2017 (en_US)
dc.description.abstract: Interpreting gradient methods as fixed-point iterations, we provide a detailed analysis of those methods for minimizing convex objective functions. Due to their conceptual and algorithmic simplicity, gradient methods are widely used in machine learning for massive datasets (big data). In particular, stochastic gradient methods are considered the de facto standard for training deep neural networks. Studying gradient methods within the realm of fixed-point theory provides us with powerful tools to analyze their convergence properties. In particular, gradient methods using inexact or noisy gradients, such as stochastic gradient descent, can be studied conveniently using well-known results on inexact fixed-point iterations. Moreover, as we demonstrate in this paper, the fixed-point approach allows an elegant derivation of accelerations for basic gradient methods. In particular, we will show how gradient descent can be accelerated by a fixed-point preserving transformation of an operator associated with the objective function. (en)
dc.description.version: Peer reviewed (en)
dc.format.extent: 11
dc.format.extent: 1-11
dc.format.mimetype: application/pdf (en_US)
dc.identifier.citation: Jung, A 2017, 'A Fixed-Point of View on Gradient Methods for Big Data', Frontiers in Applied Mathematics and Statistics, vol. 3, pp. 1-11. https://doi.org/10.3389/fams.2017.00018 (en)
dc.identifier.doi: 10.3389/fams.2017.00018 (en_US)
dc.identifier.issn: 2297-4687
dc.identifier.other: PURE UUID: d9ff3a44-e7a8-4c26-858f-c3980b3198ea (en_US)
dc.identifier.other: PURE ITEMURL: https://research.aalto.fi/en/publications/d9ff3a44-e7a8-4c26-858f-c3980b3198ea (en_US)
dc.identifier.other: PURE LINK: https://www.frontiersin.org/article/10.3389/fams.2017.00018 (en_US)
dc.identifier.other: PURE FILEURL: https://research.aalto.fi/files/16832497/fams_03_00018.pdf (en_US)
dc.identifier.uri: https://aaltodoc.aalto.fi/handle/123456789/30001
dc.identifier.urn: URN:NBN:fi:aalto-201802091498
dc.language.iso: en (en)
dc.relation.ispartofseries: Frontiers in Applied Mathematics and Statistics (en)
dc.relation.ispartofseries: Volume 3 (en)
dc.rights: openAccess (en)
dc.subject.keyword: convex optimization (en_US)
dc.subject.keyword: fixed point theory (en_US)
dc.subject.keyword: big data (en_US)
dc.subject.keyword: machine learning (en_US)
dc.subject.keyword: contraction mapping (en_US)
dc.subject.keyword: gradient descent (en_US)
dc.subject.keyword: heavy balls (en_US)
dc.title: A Fixed-Point of View on Gradient Methods for Big Data (en)
dc.type: A1 Original article in a scientific journal (fi: A1 Alkuperäisartikkeli tieteellisessä aikakauslehdessä)
dc.type.version: publishedVersion
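
A minimal illustrative sketch (not taken from the paper) of the fixed-point view described in the abstract: gradient descent on a convex quadratic, written as repeated application of an operator T(x) = x - alpha * grad f(x), whose fixed points are exactly the minimizers of f. The objective, step-size rule, and iteration count below are assumptions made only for this illustration.

    import numpy as np

    # Gradient descent on f(x) = 0.5 * x^T Q x - b^T x, viewed as the
    # fixed-point iteration x_{k+1} = T(x_k) with T(x) = x - alpha * grad f(x).
    # Q, b, the step size, and the iteration count are illustrative assumptions.
    Q = np.array([[3.0, 1.0], [1.0, 2.0]])   # positive definite, so f is strongly convex
    b = np.array([1.0, -1.0])

    def grad_f(x):
        return Q @ x - b

    # Step size 1/L, with L the largest eigenvalue of Q (Lipschitz constant of grad f);
    # with this choice T is a contraction and the iterates converge to its unique fixed point.
    alpha = 1.0 / np.linalg.eigvalsh(Q).max()

    def T(x):
        # Fixed points of T are exactly the points where grad f vanishes, i.e. the minimizers of f.
        return x - alpha * grad_f(x)

    x = np.zeros(2)
    for _ in range(200):
        x = T(x)                              # fixed-point iteration

    x_star = np.linalg.solve(Q, b)            # closed-form minimizer for comparison
    print(x, x_star)                          # the iterate is numerically close to x_star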

Files