FLAME: Taming Backdoors in Federated Learning
Access rights
openAccess
A4 Article in conference proceedings
This publication is imported from Aalto University research portal.
Date
2022
Language
en
Pages
1415-1432 (18 pages)
Series
Proceedings of the 31st USENIX Security Symposium, Security 2022
Abstract
Federated Learning (FL) is a collaborative machine learning approach allowing participants to jointly train a model without having to share their private, potentially sensitive local datasets with others. Despite its benefits, FL is vulnerable to so-called backdoor attacks, in which an adversary injects manipulated model updates into the federated model aggregation process so that the resulting model will provide targeted false predictions for specific adversary-chosen inputs. Proposed defenses against backdoor attacks based on detecting and filtering out malicious model updates consider only very specific and limited attacker models, whereas defenses based on differential privacy-inspired noise injection significantly deteriorate the benign performance of the aggregated model. To address these deficiencies, we introduce FLAME, a defense framework that estimates the sufficient amount of noise to be injected to ensure the elimination of backdoors. To minimize the required amount of noise, FLAME uses a model clustering and weight clipping approach. This ensures that FLAME can maintain the benign performance of the aggregated model while effectively eliminating adversarial backdoors. Our evaluation of FLAME on several datasets stemming from application areas including image classification, word prediction, and IoT intrusion detection demonstrates that FLAME removes backdoors effectively with a negligible impact on the benign performance of the models.
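The following is a minimal Python sketch of the aggregation pipeline the abstract outlines: cluster client models to filter out suspicious updates, clip the survivors to the median update norm, and add Gaussian noise calibrated to that clipping bound. It assumes HDBSCAN as the clustering backend (as in the published paper), via the third-party hdbscan package; the function name flame_aggregate and the noise_factor parameter are illustrative assumptions, not the authors' implementation.

import numpy as np
import hdbscan  # third-party package: pip install hdbscan

def flame_aggregate(global_w: np.ndarray,
                    client_ws: list[np.ndarray],
                    noise_factor: float = 0.001) -> np.ndarray:
    """Sketch of a FLAME-style robust aggregation round (illustrative)."""
    n = len(client_ws)
    W = np.stack(client_ws)

    # 1) Pairwise cosine distances between the flattened client models.
    U = W / np.linalg.norm(W, axis=1, keepdims=True)
    D = np.clip(1.0 - U @ U.T, 0.0, 2.0).astype(np.float64)

    # 2) HDBSCAN with min_cluster_size > n/2, so at most one ("majority")
    #    cluster can form; everything else is labeled -1 (outlier).
    labels = hdbscan.HDBSCAN(min_cluster_size=n // 2 + 1,
                             metric="precomputed",
                             allow_single_cluster=True).fit_predict(D)
    admitted = [w for w, lab in zip(client_ws, labels) if lab != -1] or client_ws

    # 3) Clip each admitted update to the median update norm S_t.
    deltas = [w - global_w for w in admitted]
    norms = [np.linalg.norm(d) for d in deltas]
    s_t = float(np.median(norms))
    clipped = [d * min(1.0, s_t / max(nrm, 1e-12))
               for d, nrm in zip(deltas, norms)]

    # 4) Average the clipped updates and add Gaussian noise whose scale is
    #    proportional to the clipping bound (the "sufficient" noise level).
    agg = global_w + np.mean(clipped, axis=0)
    return agg + np.random.normal(0.0, noise_factor * s_t, size=agg.shape)

Setting min_cluster_size to more than half the clients encodes the usual majority-honest assumption: at most one cluster can exceed that size, so a colluding minority ends up in the outlier label -1 and is excluded before clipping and noising.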
Description
Funding Information: This research was funded by the Deutsche Forschungsgemeinschaft (DFG) SFB-1119 CROSSING/236615297, the European Research Council (ERC, grant No. 850990 PSOTI), the EU H2020 project SPATIAL (grant No. 101021808), GRK 2050 Privacy & Trust/251805230, HMWK within the ATHENE project, NSF-TrustHub (grant No. 1649423), SRC-Auto (2019-AU-2899), Huawei OpenS3 Lab, and the Intel Private AI Collaborative Research Center. We thank the anonymous reviewers and the shepherd, Neil Gong, for constructive reviews and comments.
Publisher Copyright: © USENIX Security Symposium, Security 2022. All rights reserved.
Citation
Nguyen, T D, Rieger, P, Chen, H, Yalame, H, Möllering, H, Fereidooni, H, Marchal, S, Miettinen, M, Mirhoseini, A, Zeitouni, S, Koushanfar, F, Sadeghi, A R & Schneider, T 2022, 'FLAME: Taming Backdoors in Federated Learning', in Proceedings of the 31st USENIX Security Symposium, Security 2022, USENIX - The Advanced Computing Systems Association, pp. 1415-1432, USENIX Security Symposium, Boston, Massachusetts, United States, 10/08/2022. <https://www.usenix.org/conference/usenixsecurity22/presentation/nguyen>