A Fixed-Point of View on Gradient Methods for Big Data
dc.contributor | Aalto-yliopisto | fi |
dc.contributor | Aalto University | en |
dc.contributor.author | Jung, Alexander | en_US |
dc.contributor.department | Department of Computer Science | en |
dc.contributor.groupauthor | Professorship Jung Alexander | en |
dc.date.accessioned | 2018-02-09T10:06:48Z | |
dc.date.available | 2018-02-09T10:06:48Z | |
dc.date.issued | 2017 | en_US |
dc.description.abstract | Interpreting gradient methods as fixed-point iterations, we provide a detailed analysis of these methods for minimizing convex objective functions. Due to their conceptual and algorithmic simplicity, gradient methods are widely used in machine learning for massive datasets (big data). In particular, stochastic gradient methods are considered the de facto standard for training deep neural networks. Studying gradient methods within the realm of fixed-point theory provides us with powerful tools to analyze their convergence properties. In particular, gradient methods using inexact or noisy gradients, such as stochastic gradient descent, can be studied conveniently using well-known results on inexact fixed-point iterations. Moreover, as we demonstrate in this paper, the fixed-point approach allows an elegant derivation of accelerations for basic gradient methods. In particular, we will show how gradient descent can be accelerated by a fixed-point preserving transformation of an operator associated with the objective function. | en |
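The abstract's core idea, gradient descent as the fixed-point iteration x_{k+1} = T(x_k) with T(x) = x − α∇f(x), can be sketched as follows. This is a minimal illustration of the general principle, not code from the paper; the 1-D objective f(x) = (x − 3)² and the step size α = 1/4 are illustrative choices.

```python
def grad(x):
    # f(x) = (x - 3)^2, so f'(x) = 2 * (x - 3); minimizer x* = 3.
    return 2.0 * (x - 3.0)

def T(x, alpha=0.25):
    # Gradient-step operator: T(x) = x - alpha * f'(x).
    # x* = 3 is a fixed point since f'(x*) = 0; here T(x) = 0.5*x + 1.5,
    # a contraction with factor 1/2, so the iteration converges linearly.
    return x - alpha * grad(x)

x = 0.0
for _ in range(50):
    x = T(x)

print(x)  # approaches the fixed point x* = 3
```

The fixed-point view makes the convergence argument one line: since T is a contraction with factor 1/2, Banach's fixed-point theorem gives |x_k − x*| ≤ (1/2)^k |x_0 − x*|.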
dc.description.version | Peer reviewed | en |
dc.format.extent | 11 | |
dc.format.extent | 1-11 | |
dc.format.mimetype | application/pdf | en_US |
dc.identifier.citation | Jung, A 2017, ' A Fixed-Point of View on Gradient Methods for Big Data ', Frontiers in Applied Mathematics and Statistics, vol. 3, pp. 1-11 . https://doi.org/10.3389/fams.2017.00018 | en |
dc.identifier.doi | 10.3389/fams.2017.00018 | en_US |
dc.identifier.issn | 2297-4687 | |
dc.identifier.other | PURE UUID: d9ff3a44-e7a8-4c26-858f-c3980b3198ea | en_US |
dc.identifier.other | PURE ITEMURL: https://research.aalto.fi/en/publications/d9ff3a44-e7a8-4c26-858f-c3980b3198ea | en_US |
dc.identifier.other | PURE LINK: https://www.frontiersin.org/article/10.3389/fams.2017.00018 | en_US |
dc.identifier.other | PURE FILEURL: https://research.aalto.fi/files/16832497/fams_03_00018.pdf | en_US |
dc.identifier.uri | https://aaltodoc.aalto.fi/handle/123456789/30001 | |
dc.identifier.urn | URN:NBN:fi:aalto-201802091498 | |
dc.language.iso | en | en |
dc.relation.ispartofseries | Frontiers in Applied Mathematics and Statistics | en |
dc.relation.ispartofseries | Volume 3 | en |
dc.rights | openAccess | en |
dc.subject.keyword | convex optimization | en_US |
dc.subject.keyword | fixed point theory | en_US |
dc.subject.keyword | big data | en_US |
dc.subject.keyword | machine learning | en_US |
dc.subject.keyword | contraction mapping | en_US |
dc.subject.keyword | gradient descent | en_US |
dc.subject.keyword | heavy balls | en_US |
dc.title | A Fixed-Point of View on Gradient Methods for Big Data | en |
dc.type | A1 Alkuperäisartikkeli tieteellisessä aikakauslehdessä | fi |
dc.type.version | publishedVersion |