
Guarantees of Differential Privacy in Overparameterised Models


dc.contributor Aalto-yliopisto fi
dc.contributor Aalto University en
dc.contributor.advisor Szyller, Sebastian
dc.contributor.author Micozzi, Eleonora
dc.date.accessioned 2021-08-29T17:09:32Z
dc.date.available 2021-08-29T17:09:32Z
dc.date.issued 2021-08-23
dc.identifier.uri https://aaltodoc.aalto.fi/handle/123456789/109320
dc.description.abstract Deep Learning (DL) has become increasingly popular in recent years. While DL models can achieve high levels of accuracy, due to their dimensionality they also tend to leak information about the data points in their training dataset. This leakage is mainly caused by overfitting, the tendency of Machine Learning (ML) models to behave differently on their training set than on their test set. Overfitted models are prone to privacy leaks because they do not generalize well and memorize information about their training data. Differential Privacy (DP) has been adopted as the de facto standard for data privacy in ML. DP is normally applied to ML models through Differentially Private Stochastic Gradient Descent (DP-SGD), which adds noise to the gradient update step, limiting the effect any single data sample has on the model. Since DP protects data points by limiting their effect on the model, it is also considered a strong defence against Membership Inference Attacks (MIAs). MIAs are attacks against the privacy of ML models that aim to infer whether a data point was part of the training set of a target model. This information is sensitive and therefore needs to be protected. This thesis explores the relationship between differential privacy and membership inference attacks, and the effect overfitting has on privacy leakage. We test the effectiveness of DP as a defence against MIAs by analyzing and reproducing three state-of-the-art MIAs and evaluating them on models trained with different privacy budgets, as well as without DP. Our results show that differential privacy is an effective defence against membership inference attacks, reducing their effectiveness significantly compared to non-private models. en
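A minimal sketch of the DP-SGD update step described in the abstract: per-example gradients are clipped to a fixed L2 norm and Gaussian noise is added before the parameter update, limiting each sample's influence on the model. This is an illustrative toy example on a linear model, not the thesis's implementation; the clipping norm, noise multiplier, and learning rate below are assumptions chosen for demonstration.

import numpy as np

rng = np.random.default_rng(0)

# Toy linear model y = X @ w trained with squared loss on synthetic data.
X = rng.normal(size=(64, 5))
true_w = rng.normal(size=5)
y = X @ true_w + 0.1 * rng.normal(size=64)

w = np.zeros(5)
clip_norm = 1.0         # C: bound on each example's gradient contribution (assumed)
noise_multiplier = 1.1  # sigma: scales the Gaussian noise relative to C (assumed)
lr = 0.1

for step in range(200):
    batch_idx = rng.choice(len(X), size=16, replace=False)
    clipped_sum = np.zeros_like(w)
    for i in batch_idx:
        # Per-example gradient of squared loss: (x . w - y) * x
        g = (X[i] @ w - y[i]) * X[i]
        # Clip so no single example moves the model by more than clip_norm.
        g = g / max(1.0, np.linalg.norm(g) / clip_norm)
        clipped_sum += g
    # Add Gaussian noise calibrated to the clipping bound, then average and step.
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=w.shape)
    w -= lr * (clipped_sum + noise) / len(batch_idx)

print("learned w:", np.round(w, 2))

The clipping bound is what ties the added noise to a worst-case per-sample contribution; without it, a single outlier could dominate the update and the privacy guarantee would not hold.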
dc.format.extent 59 (vi + 53)
dc.language.iso en en
dc.title Guarantees of Differential Privacy in Overparameterised Models en
dc.type G2 Pro gradu, diplomityö fi
dc.contributor.school Perustieteiden korkeakoulu fi
dc.subject.keyword differential privacy en
dc.subject.keyword membership inference en
dc.subject.keyword deep learning en
dc.subject.keyword security en
dc.identifier.urn URN:NBN:fi:aalto-202108298556
dc.programme.major Security and Cloud Computing fi
dc.programme.mcode SCI3084 fi
dc.type.ontasot Master's thesis en
dc.type.ontasot Diplomityö fi
dc.contributor.supervisor Asokan, N.
dc.programme Master’s Programme in Security and Cloud Computing (SECCLO) fi
local.aalto.electroniconly yes
local.aalto.openaccess no