Privacy Preserving Deep Neural Network Prediction using Trusted Hardware

dc.contributor Aalto-yliopisto fi
dc.contributor Aalto University en
dc.contributor.advisor Paverd, Andrew
dc.contributor.advisor Marchal, Samuel
dc.contributor.author Reuter, Max
dc.date.accessioned 2018-11-13T13:33:16Z
dc.date.available 2018-11-13T13:33:16Z
dc.date.issued 2018-11-07
dc.identifier.uri https://aaltodoc.aalto.fi/handle/123456789/34699
dc.description.abstract In recent years, machine learning has gained significant attention not only in the scientific community but also in user-facing applications. Today, many applications utilise machine learning to take advantage of its capabilities. With such applications, users actively or passively supply data that state-of-the-art algorithms use to generate accurate predictions. Due to the extensive work required to fine-tune these algorithms for a specific task, they are predominantly executed in the cloud, where they can be protected from competitors and malicious users. As a result, users' privacy may be at risk, as their data is sent to and processed by remote cloud services. Depending on the application, users may expose highly sensitive data, meaning a malicious provider could harvest extensive amounts of personal data from its users. To protect user privacy without compromising the confidentiality guarantees of traditional solutions, we propose using trusted hardware for privacy-preserving deep neural network prediction. Our solution consists of a hardware-backed prediction service and a client device that connects to it. All machine learning computations executed by the prediction service that depend on input data are protected by a trusted hardware component called a Trusted Execution Environment (TEE). Users can verify this via remote attestation to ensure their data remains protected. In addition, we have built a proof-of-concept implementation of our solution using Intel Software Guard Extensions (SGX). Compared to existing solutions relying on homomorphic encryption, our proof-of-concept implementation vastly expands the set of supported machine learning algorithms. Moreover, our implementation integrates tightly into the existing pipeline of machine learning tools by supporting the Open Neural Network Exchange (ONNX) format.
Furthermore, we focus on minimising our Trusted Computing Base (TCB): our proof-of-concept implementation consists of only 4,500 lines of code. Additionally, we achieve a 7x increase in throughput while decreasing latency 40x compared to prior work. In our tests, SGX reduced throughput by 11% and increased latency by 21% compared to our baseline implementation without SGX. en
dc.format.extent 68 + 5
dc.format.mimetype application/pdf en
dc.language.iso en en
dc.title Privacy Preserving Deep Neural Network Prediction using Trusted Hardware en
dc.type G2 Pro gradu, diplomityö fi
dc.contributor.school Perustieteiden korkeakoulu fi
dc.subject.keyword machine learning en
dc.subject.keyword platform security en
dc.subject.keyword privacy en
dc.subject.keyword trusted hardware en
dc.identifier.urn URN:NBN:fi:aalto-201811135736
dc.programme.major Mobile Computing, Services and Security fi
dc.programme.mcode SCI3045 fi
dc.type.ontasot Master's thesis en
dc.type.ontasot Diplomityö fi
dc.contributor.supervisor Asokan, N
dc.programme Master’s Programme in Computer, Communication and Information Sciences fi
local.aalto.electroniconly yes
local.aalto.openaccess yes
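
The prediction flow described in the abstract, in which the client verifies the enclave via remote attestation before sending any data for TEE-protected inference, can be sketched as follows. This is an illustrative mock, not the thesis's implementation: the quote format, measurement check, key exchange, and "model" are all simplified stand-ins for real SGX attestation primitives and ONNX inference.

```python
import hashlib
import hmac
import secrets

# Expected enclave measurement (a hypothetical value for illustration).
EXPECTED_MRENCLAVE = hashlib.sha256(b"prediction-service-v1").hexdigest()

def verify_attestation_quote(quote: dict) -> bool:
    """Check the enclave measurement reported in an attestation quote.

    A real SGX quote is signed by the platform and verified through
    Intel's attestation infrastructure; here we only compare the
    reported measurement against the expected one.
    """
    return quote.get("mrenclave") == EXPECTED_MRENCLAVE

def enclave_predict(key: bytes, sealed_input: bytes) -> int:
    """Stand-in for the computation running inside the TEE.

    Authenticates the input with an HMAC, then runs a trivial
    placeholder 'model' (sum of the input bytes).
    """
    mac, payload = sealed_input[:32], sealed_input[32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("input failed integrity check")
    return sum(payload)  # placeholder for real model inference

def client_request(features: bytes) -> int:
    """Client-side flow: attest first, only then send protected input."""
    quote = {"mrenclave": EXPECTED_MRENCLAVE}  # obtained from the service
    if not verify_attestation_quote(quote):
        raise RuntimeError("attestation failed: refusing to send data")
    key = secrets.token_bytes(32)  # key bound to the attested channel
    mac = hmac.new(key, features, hashlib.sha256).digest()
    return enclave_predict(key, mac + features)

print(client_request(bytes([1, 2, 3])))  # prints 6
```

The key design point mirrored here is that the client refuses to release any input until attestation of the enclave measurement succeeds, so data only ever reaches code the user has verified.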
