Advances in Extreme Learning Machines

dc.contributor: Aalto-yliopisto [fi]
dc.contributor: Aalto University [en]
dc.contributor.advisor: Miche, Yoan, Dr., Aalto University, Department of Information and Computer Science, Finland
dc.contributor.author: van Heeswijk, Mark
dc.contributor.department: Tietojenkäsittelytieteen laitos [fi]
dc.contributor.department: Department of Information and Computer Science [en]
dc.contributor.lab: Environmental and Industrial Machine Learning Group [en]
dc.contributor.lab: Ympäristön ja teollisuuden alojen koneoppiminen [fi]
dc.contributor.school: Perustieteiden korkeakoulu [fi]
dc.contributor.school: School of Science [en]
dc.contributor.supervisor: Oja, Erkki, Aalto Distinguished Prof., Aalto University, Department of Information and Computer Science, Finland
dc.date.accessioned: 2015-04-08T09:00:31Z
dc.date.available: 2015-04-08T09:00:31Z
dc.date.dateaccepted: 2015-03-09
dc.date.defence: 2015-04-17
dc.date.issued: 2015
dc.description.abstract [en]:
Nowadays, due to advances in technology, data is generated at an incredible pace, resulting in data sets of ever-increasing size and dimensionality. It is therefore important to have efficient computational methods and machine learning algorithms that can handle such large data sets, so that they can be analyzed in reasonable time. One approach that has gained popularity in recent years is the Extreme Learning Machine (ELM), the name given to neural networks that employ randomization in their hidden layer and can be trained efficiently. This dissertation introduces several machine learning methods based on Extreme Learning Machines (ELMs), aimed at dealing with the challenges that modern data sets pose. The contributions follow three main directions.

Firstly, ensemble approaches based on ELM are developed, which adapt to context and scale to large data. Due to their stochastic nature, different ELMs tend to make different mistakes when modeling data. This independence of their errors makes them good candidates for combination in an ensemble model, which averages out these errors and yields a more accurate model. Adaptivity to a changing environment is achieved by adjusting the linear combination of the models based on the accuracy of the individual models over time. Scalability is achieved by exploiting the modularity of the ensemble model and evaluating the models in parallel on multiple processor cores and graphics processing units.

Secondly, the dissertation develops variable selection approaches based on the ELM and the Delta Test, which result in more accurate and efficient models. Scalability of variable selection using the Delta Test is again achieved by accelerating it on the GPU. Furthermore, a new variable selection method based on the ELM is introduced and shown to be a competitive alternative to other variable selection methods. Besides these explicit variable selection methods, a new weight scheme based on binary/ternary weights is developed for the ELM. This weight scheme is shown to perform implicit variable selection, and to increase robustness and accuracy at no additional computational cost.

Finally, the dissertation develops training algorithms for the ELM that allow a flexible trade-off between accuracy and computational time. The Compressive ELM is introduced, which trains the ELM in a reduced feature space. By selecting the dimension of the feature space, the practitioner can trade off accuracy for speed as required.

Overall, the resulting collection of proposed methods provides an efficient, accurate and flexible framework for solving large-scale supervised learning problems. The proposed methods are not limited to the particular types of ELMs and contexts in which they have been tested, and can easily be incorporated in new contexts and models.
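The basic training procedure the abstract builds on can be sketched as follows: a standard ELM draws its hidden-layer weights at random and solves only the output weights, in closed form, by least squares. This is a minimal illustrative sketch (function and variable names are my own, not from the dissertation):

```python
import numpy as np

def train_elm(X, T, n_hidden=100, rng=None):
    """Basic ELM: random hidden layer, least-squares output weights."""
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.standard_normal(n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                            # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)      # output weights via least squares
    return W, b, beta

def predict_elm(X, model):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Usage: fit a noisy 1-D regression problem
rng = np.random.default_rng(0)
X = np.linspace(-1, 1, 200).reshape(-1, 1)
T = np.sin(3 * X) + 0.05 * rng.standard_normal(X.shape)
model = train_elm(X, T, n_hidden=50, rng=0)
Y = predict_elm(X, model)
```

Because only the linear output layer is fitted, training reduces to one least-squares solve, which is what makes ELMs fast to train and cheap to combine into the ensembles discussed above.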
dc.format.extent: 108 + app. 84
dc.format.mimetype: application/pdf [en]
dc.identifier.isbn: 978-952-60-6149-8 (electronic)
dc.identifier.isbn: 978-952-60-6148-1 (printed)
dc.identifier.issn: 1799-4942 (electronic)
dc.identifier.issn: 1799-4934 (printed)
dc.identifier.issn: 1799-4934 (ISSN-L)
dc.identifier.uri: https://aaltodoc.aalto.fi/handle/123456789/15585
dc.identifier.urn: URN:ISBN:978-952-60-6149-8
dc.language.iso: en [en]
dc.opn: Wunsch, Donald C., Prof., Missouri University of Science & Technology, United States
dc.publisher: Aalto University [en]
dc.publisher: Aalto-yliopisto [fi]
dc.relation.haspart: [Publication 1]: Mark van Heeswijk, Yoan Miche, Tiina Lindh-Knuutila, Peter A.J. Hilbers, Timo Honkela, Erkki Oja, and Amaury Lendasse. Adaptive Ensemble Models of Extreme Learning Machines for Time Series Prediction. In LNCS 5769 - Artificial Neural Networks, ICANN'09: International Conference on Artificial Neural Networks, pp. 305-314, September 2009. doi:10.1007/978-3-642-04277-5_31.
dc.relation.haspart: [Publication 2]: Mark van Heeswijk, Yoan Miche, Erkki Oja, and Amaury Lendasse. GPU-accelerated and parallelized ELM ensembles for large-scale regression. Neurocomputing, 74 (16): pp. 2430-2437, September 2011. doi:10.1016/j.neucom.2010.11.034.
dc.relation.haspart: [Publication 3]: Benoît Frenay, Mark van Heeswijk, Yoan Miche, Michel Verleysen, and Amaury Lendasse. Feature selection for nonlinear models with extreme learning machines. Neurocomputing, 102, pp. 111-124, February 2013. doi:10.1016/j.neucom.2011.12.055.
dc.relation.haspart: [Publication 4]: Alberto Guillén, Maribel García Arenas, Mark van Heeswijk, Dušan Sovilj, Amaury Lendasse, Luis Herrera, Hector Pomares and Ignacio Rojas. Fast Feature Selection in a GPU Cluster Using the Delta Test. Entropy, 16 (2): pp. 854-869, 2014. doi:10.3390/e16020854.
dc.relation.haspart: [Publication 5]: Mark van Heeswijk, and Yoan Miche. Binary/Ternary Extreme Learning Machines. Neurocomputing, 149, pp. 187-197, February 2015. doi:10.1016/j.neucom.2014.01.072.
dc.relation.haspart: [Publication 6]: Mark van Heeswijk, Amaury Lendasse, and Yoan Miche. Compressive ELM: Improved Models Through Exploiting Time-Accuracy Trade-offs. In CCIS 459 - Engineering Applications of Neural Networks, pp. 165-174, 2014. doi:10.1007/978-3-319-11071-4_16.
dc.relation.ispartofseries: Aalto University publication series DOCTORAL DISSERTATIONS [en]
dc.relation.ispartofseries: 43/2015
dc.rev: Huang, Guang-Bin, Prof., Nanyang Technological University, Singapore
dc.rev: Tapson, Jonathan, Prof., University of Western Sydney, Australia
dc.subject.keyword: Extreme Learning Machine (ELM) [en]
dc.subject.keyword: high-performance computing [en]
dc.subject.keyword: ensemble models [en]
dc.subject.keyword: variable selection [en]
dc.subject.keyword: random projection [en]
dc.subject.keyword: machine learning [en]
dc.subject.other: Computer science [en]
dc.title: Advances in Extreme Learning Machines [en]
dc.type: G5 Artikkeliväitöskirja [fi]
dc.type.dcmitype: text [en]
dc.type.ontasot: Doctoral dissertation (article-based) [en]
dc.type.ontasot: Väitöskirja (artikkeli) [fi]
local.aalto.archive: yes
local.aalto.digiauth: ask
local.aalto.digifolder: Aalto_64565
local.aalto.formfolder: 2015_04_08_klo_11_35
Files (Original bundle):
- isbn9789526061498.pdf (2.56 MB, Adobe Portable Document Format)