
A Fixed-Point of View on Gradient Methods for Big Data


dc.contributor Aalto-yliopisto fi
dc.contributor Aalto University en
dc.contributor.author Jung, Alexander
dc.date.accessioned 2018-02-09T10:06:48Z
dc.date.available 2018-02-09T10:06:48Z
dc.date.issued 2017
dc.identifier.citation Jung, A. (2017). 'A Fixed-Point of View on Gradient Methods for Big Data'. Frontiers in Applied Mathematics and Statistics, vol. 3, pp. 1-11. DOI: 10.3389/fams.2017.00018 en
dc.identifier.issn 2297-4687
dc.identifier.other PURE UUID: d9ff3a44-e7a8-4c26-858f-c3980b3198ea
dc.identifier.other PURE ITEMURL: https://research.aalto.fi/en/publications/a-fixedpoint-of-view-on-gradient-methods-for-big-data(d9ff3a44-e7a8-4c26-858f-c3980b3198ea).html
dc.identifier.other PURE LINK: https://www.frontiersin.org/article/10.3389/fams.2017.00018
dc.identifier.other PURE FILEURL: https://research.aalto.fi/files/16832497/fams_03_00018.pdf
dc.identifier.uri https://aaltodoc.aalto.fi/handle/123456789/30001
dc.description.abstract Interpreting gradient methods as fixed-point iterations, we provide a detailed analysis of those methods for minimizing convex objective functions. Due to their conceptual and algorithmic simplicity, gradient methods are widely used in machine learning for massive datasets (big data). In particular, stochastic gradient methods are considered the de facto standard for training deep neural networks. Studying gradient methods within the realm of fixed-point theory provides us with powerful tools to analyze their convergence properties. In particular, gradient methods using inexact or noisy gradients, such as stochastic gradient descent, can be studied conveniently using well-known results on inexact fixed-point iterations. Moreover, as we demonstrate in this paper, the fixed-point approach allows an elegant derivation of accelerations for basic gradient methods. In particular, we will show how gradient descent can be accelerated by a fixed-point preserving transformation of an operator associated with the objective function. en
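The abstract views gradient descent as the fixed-point iteration of an operator built from the objective's gradient. The following minimal sketch (not taken from the paper; the quadratic objective, step size rule, and variable names are illustrative assumptions) shows this viewpoint on a strongly convex quadratic, where the operator T(x) = x - alpha * grad f(x) is a contraction for a sufficiently small step size and its unique fixed point is the minimizer.

```python
import numpy as np

# Sketch only: gradient descent on f(x) = 0.5 * x^T Q x - b^T x,
# written as the fixed-point iteration x_{k+1} = T(x_k) with
# T(x) = x - alpha * grad_f(x). For 0 < alpha < 2 / L, where L is the
# largest eigenvalue of Q, T is a contraction and the iterates converge
# to its unique fixed point, which coincides with the minimizer of f.

Q = np.array([[3.0, 0.5],
              [0.5, 1.0]])       # symmetric positive definite -> f is strongly convex
b = np.array([1.0, -2.0])

def grad_f(x):
    return Q @ x - b             # gradient of the quadratic objective

L = np.linalg.eigvalsh(Q).max()  # Lipschitz constant of grad_f
alpha = 1.0 / L                  # step size for which T is a contraction

def T(x):
    return x - alpha * grad_f(x) # fixed-point operator of gradient descent

x = np.zeros(2)
for _ in range(200):
    x = T(x)                     # repeated application of the contraction

x_star = np.linalg.solve(Q, b)   # exact minimizer, for comparison
print("fixed-point iterate:", x)
print("minimizer:          ", x_star)
```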
dc.format.extent 11
dc.format.extent 1-11
dc.format.mimetype application/pdf
dc.language.iso en en
dc.relation.ispartofseries Frontiers in Applied Mathematics and Statistics en
dc.relation.ispartofseries Volume 3 en
dc.rights openAccess en
dc.subject.other 113 Computer and information sciences en
dc.title A Fixed-Point of View on Gradient Methods for Big Data en
dc.type A1 Original article in a scientific journal (Alkuperäisartikkeli tieteellisessä aikakauslehdessä) fi
dc.description.version Peer reviewed en
dc.contributor.department Department of Computer Science
dc.subject.keyword convex optimization
dc.subject.keyword fixed point theory
dc.subject.keyword big data
dc.subject.keyword machine learning
dc.subject.keyword contraction mapping
dc.subject.keyword gradient descent
dc.subject.keyword heavy balls
dc.subject.keyword 113 Computer and information sciences
dc.identifier.urn URN:NBN:fi:aalto-201802091498
dc.identifier.doi 10.3389/fams.2017.00018
dc.type.version publishedVersion


