Differentially private source-free domain adaptation

School of Science | Master's thesis

Language

en

Pages

46

Abstract

Unsupervised Domain Adaptation (UDA) refers to a transfer learning paradigm in which knowledge from a labeled source domain is leveraged to enable effective training in an unlabeled target domain. UDA requires concurrent access to both domains, which in many scenarios raises privacy concerns. To address this, a more restrictive setting, Source-Free Domain Adaptation (SFDA), has been proposed: a model is first trained exclusively on the source domain, and only its parameters are subsequently shared with the target client for adaptation. While SFDA enhances privacy compared to traditional UDA, sharing the source model still risks leaking sensitive information about the source data. We first demonstrate that SFDA remains vulnerable to privacy leakage through attacks such as Membership Inference Attacks (MIA). To mitigate this, we propose a fully private-to-private SFDA framework in which training on both source and target data is conducted under differential privacy. Our method builds on SHOT, a popular SFDA model, by modifying its architecture and introducing DP-SGD into its training process. We additionally propose a variant that incorporates few-shot learning to improve the reliability of pseudo-labeling under privacy constraints.
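The core mechanism the abstract refers to, DP-SGD, replaces the ordinary gradient step with per-example gradient clipping followed by calibrated Gaussian noise. The following is a minimal sketch of that mechanism on a toy logistic-regression model; all names and hyperparameters here are illustrative assumptions, not the thesis implementation (which applies DP-SGD within the SHOT architecture).

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One DP-SGD step for logistic regression (toy sketch).

    1. Compute the gradient separately for each example.
    2. Clip each per-example gradient to L2 norm <= clip_norm.
    3. Sum the clipped gradients and add Gaussian noise with standard
       deviation noise_multiplier * clip_norm before averaging.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    n = len(y)
    clipped = np.zeros((n, w.size))
    for i in range(n):
        p = 1.0 / (1.0 + np.exp(-(X[i] @ w)))        # sigmoid prediction
        g = (p - y[i]) * X[i]                        # per-example gradient
        clipped[i] = g / max(1.0, np.linalg.norm(g) / clip_norm)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.size)
    g_private = (clipped.sum(axis=0) + noise) / n    # noisy average gradient
    return w - lr * g_private
```

The clipping bounds each individual example's influence on the update, and the noise scale is tied to that bound, which is what makes the per-step privacy accounting possible; setting `noise_multiplier=0.0` recovers plain clipped SGD.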

Supervisor

Kaski, Samuel

Thesis advisor

Yang, Yaohong
