Privacy Preserving Deep Neural Network Prediction using Trusted Hardware

Perustieteiden korkeakoulu (School of Science) | Master's thesis

Date

2018-11-07

Major/Subject

Mobile Computing, Services and Security

Mcode

SCI3045

Degree programme

Master’s Programme in Computer, Communication and Information Sciences

Language

en

Pages

68 + 5

Abstract

In recent years, machine learning has gained a lot of attention not only in the scientific community but also in user-facing applications. Many applications today rely on machine learning, and their users actively or passively supply data that state-of-the-art algorithms turn into accurate predictions. Because of the extensive work required to fine-tune these algorithms for a specific task, they are predominantly executed in the cloud, where they can be protected from competitors or malicious users. As a result, users' privacy may be at risk, since their data is sent to and processed by remote cloud services. Depending on the application, users might expose highly sensitive data, meaning a malicious provider could harvest extensive amounts of personal data from its users.

To protect user privacy without giving up the model confidentiality that cloud deployment provides, we propose using trusted hardware for privacy-preserving deep neural network predictions. Our solution consists of a hardware-backed prediction service and a client device that connects to it. All machine learning computations in the prediction service that depend on input data are protected by a trusted hardware component called a Trusted Execution Environment (TEE). Users can verify this via remote attestation to ensure their data remains protected.

In addition, we have built a proof-of-concept implementation of our solution using Intel Software Guard Extensions (SGX). Compared to existing solutions relying on homomorphic encryption, our implementation vastly increases the set of supported machine learning algorithms. Moreover, it integrates tightly into the existing pipeline of machine learning tools by supporting the Open Neural Network Exchange (ONNX) format. Furthermore, we focus on minimising our Trusted Computing Base (TCB): the proof-of-concept implementation consists of only 4,500 lines of code. We achieve a 7x increase in throughput whilst decreasing latency 40x compared to prior work. In our tests, SGX reduced throughput by 11% and increased latency by 21% compared to our baseline implementation without SGX.
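
To make the protocol described in the abstract concrete, the sketch below simulates the client-side flow under stated assumptions: the client first checks a remote-attestation report from the enclave and only then hands its input to the prediction service. All names here (MockEnclave, AttestationReport, private_predict, the placeholder measurement) are illustrative stand-ins rather than the thesis's actual API, and the mock enclave replaces real SGX attestation, key exchange, and in-enclave ONNX inference with trivial placeholders.

from dataclasses import dataclass

@dataclass
class AttestationReport:
    mrenclave: str          # measurement (hash) of the enclave's code and data
    signature_valid: bool   # whether the attestation signature verified

class MockEnclave:
    """Stand-in for the SGX-backed prediction service; real attestation,
    key exchange, and in-enclave model evaluation are placeholders."""
    MEASUREMENT = "placeholder-measurement"

    def get_attestation_report(self) -> AttestationReport:
        return AttestationReport(mrenclave=self.MEASUREMENT, signature_valid=True)

    def run_inference(self, protected_input: bytes) -> str:
        # In the real system, the enclave decrypts the input and evaluates
        # the ONNX model entirely inside the TEE.
        return f"prediction for {len(protected_input)}-byte input"

def attestation_ok(report: AttestationReport, expected_measurement: str) -> bool:
    """Client-side check: trust the enclave only if its measurement matches
    the prediction-service code the user expects to handle their data."""
    return report.signature_valid and report.mrenclave == expected_measurement

def private_predict(user_input: bytes, enclave: MockEnclave) -> str:
    report = enclave.get_attestation_report()
    if not attestation_ok(report, MockEnclave.MEASUREMENT):
        raise RuntimeError("remote attestation failed; input is not sent")
    # In the real protocol, the input would be encrypted under a session key
    # established with the attested enclave before it leaves the client.
    return enclave.run_inference(user_input)

if __name__ == "__main__":
    print(private_predict(b"example input features", MockEnclave()))

The design point the abstract emphasises, attestation before data release, is what lets users verify that only the measured enclave code ever sees their plaintext input.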

Supervisor

Asokan, N

Thesis advisor

Paverd, Andrew
Marchal, Samuel

Keywords

machine learning, platform security, privacy, trusted hardware
