Guarantees of Differential Privacy in Overparameterised Models

Perustieteiden korkeakoulu | Master's thesis
Date
2021-08-23
Major/Subject
Security and Cloud Computing
Mcode
SCI3084
Degree programme
Master’s Programme in Security and Cloud Computing (SECCLO)
Language
en
Pages
59 (vi + 53)
Abstract
Deep Learning (DL) has become increasingly popular in recent years. While DL models can achieve high levels of accuracy, their high dimensionality also makes them prone to leaking information about the data points in their training dataset. This leakage is mainly caused by overfitting, the tendency of Machine Learning (ML) models to behave differently on their training set than on their test set. Overfitted models are prone to privacy leaks because they do not generalize well and memorize information about their training data. Differential Privacy (DP) has been adopted as the de facto standard for data privacy in ML. DP is usually applied to ML models through Differentially Private Stochastic Gradient Descent (DP-SGD), which clips each per-sample gradient and adds noise to the gradient update step, limiting the effect any single data sample has on the model. Since DP protects data points by limiting their influence on the model, it is also considered a strong defence against Membership Inference Attacks (MIAs). MIAs are attacks against the privacy of ML models that aim to infer whether a data point was part of the training set of a target model. This information is sensitive and therefore needs to be protected. This thesis explores the relationship between differential privacy and membership inference attacks, and the effect overfitting has on privacy leakage. We test the effectiveness of DP as a defence against MIAs by analyzing and reproducing three state-of-the-art MIAs, evaluating them on models trained with different privacy budgets as well as without DP. Our results show that differential privacy is an effective defence against membership inference attacks, significantly reducing their effectiveness compared to non-private models.
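To illustrate the mechanisms the abstract describes, the sketch below shows one DP-SGD update on a logistic-regression loss and the simplest form of membership inference, a loss-threshold test. This is a minimal NumPy sketch with illustrative names and hyperparameters (dp_sgd_step, loss_threshold_mia, clip_norm, noise_multiplier, lr, threshold are all assumptions, not taken from the thesis), and it is not the training setup or the specific attacks evaluated in the work.

import numpy as np

def dp_sgd_step(w, X, y, clip_norm=1.0, noise_multiplier=1.1, lr=0.1, rng=None):
    # One DP-SGD update on a logistic-regression loss:
    # clip each per-example gradient, sum, add Gaussian noise, then average.
    if rng is None:
        rng = np.random.default_rng(0)
    clipped = []
    for xi, yi in zip(X, y):
        p = 1.0 / (1.0 + np.exp(-xi @ w))              # predicted probability
        g = (p - yi) * xi                              # per-example gradient
        scale = min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        clipped.append(g * scale)                      # bound each sample's influence
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    grad = (np.sum(clipped, axis=0) + noise) / len(X)  # noise hides any single example
    return w - lr * grad

def loss_threshold_mia(loss_on_point, threshold=0.5):
    # Simplest membership inference: flag a point as a training member
    # when the target model's loss on it is unusually low.
    return loss_on_point < threshold

# Toy usage with made-up data.
rng = np.random.default_rng(0)
X = np.array([[0.5, 1.0], [1.5, -0.5], [-1.0, 0.3]])
y = np.array([1.0, 0.0, 0.0])
w = np.zeros(2)
for _ in range(100):
    w = dp_sgd_step(w, X, y, rng=rng)

In a real experiment the privacy budget (epsilon, delta) would be tracked with a privacy accountant rather than chosen via a fixed noise multiplier; the values above are placeholders only.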
Supervisor
Asokan, N.
Thesis advisor
Szyller, Sebastian
Keywords
differential privacy, membership inference, deep learning, security