Learning with Vertically-Partitioned Data, Binary Feedback, and Random Parameter Update
Access rights
openAccess
acceptedVersion
A4 Article in a conference publication
This publication is imported from Aalto University research portal.
View publication in the Research portal (opens in new window)
View/Open full text file from the Research portal (opens in new window)
Language
en
Pages
6
Series
INFOCOM 2019 - IEEE Conference on Computer Communications Workshops, INFOCOM WKSHPS 2019, pp. 578-583, IEEE Conference on Computer Communications
Abstract
Machine learning models can be trained on data samples scattered among distributed agents, each of which holds a non-overlapping set of sample features. In this paper, we propose a training algorithm that does not require communication between these agents. A coordinator with access to ground-truth labels produces binary feedback that guides the optimization process towards optimal model parameters. We mimic the gradient descent technique using only information observed locally at each agent. We experimented with a logistic regression model on multiple benchmark datasets and achieved promising results in terms of convergence rate and communication load.
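The abstract describes training over vertically partitioned features with only one bit of coordinator feedback per update. As a minimal sketch of that idea (not the paper's actual protocol — the partitioning, step size, and accept/reject rule here are illustrative assumptions), each agent proposes a random perturbation of its local weights, and the coordinator, which holds the labels, replies with a single accept/reject bit:

```python
import numpy as np

# Hypothetical sketch of binary-feedback training on vertically
# partitioned data; the paper's exact algorithm may differ.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_loss(y, p, eps=1e-12):
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Toy vertically partitioned data: two agents hold disjoint feature blocks.
n, d = 200, 4
X = rng.normal(size=(n, d))
true_w = np.array([1.5, -2.0, 0.5, 1.0])
y = (sigmoid(X @ true_w) > 0.5).astype(float)
blocks = [slice(0, 2), slice(2, 4)]   # agent i only sees X[:, blocks[i]]
w = [np.zeros(2), np.zeros(2)]        # each agent's local parameters

def coordinator_loss():
    # The coordinator needs only each agent's partial score X_i @ w_i
    # (not the raw features) plus the labels it already holds.
    score = sum(X[:, b] @ wi for b, wi in zip(blocks, w))
    return log_loss(y, sigmoid(score))

loss = coordinator_loss()
for step in range(3000):
    i = step % len(blocks)                               # agents take turns
    old = w[i]
    w[i] = w[i] + 0.1 * rng.normal(size=old.shape)       # random local update
    new_loss = coordinator_loss()
    if new_loss < loss:       # coordinator's 1-bit feedback: accept
        loss = new_loss
    else:                     # coordinator's 1-bit feedback: reject
        w[i] = old

print(round(loss, 3))
```

The communication pattern matches the abstract's claim: agents never exchange data with each other, and each round costs one partial score upload plus one bit of feedback down.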
Citation
Nguyen, N & Sigg, S 2019, Learning with Vertically-Partitioned Data, Binary Feedback, and Random Parameter Update. in INFOCOM 2019 - IEEE Conference on Computer Communications Workshops, INFOCOM WKSHPS 2019., 8845203, IEEE Conference on Computer Communications, IEEE, pp. 578-583, IEEE Conference on Computer Communications, Paris, France, 29/04/2019. https://doi.org/10.1109/INFCOMW.2019.8845203