Browsing by Author "Nguyen, Thien Duc"
Now showing 1 - 5 of 5
- AuDI: Towards autonomous IoT device-type identification using periodic communications
A1 Original research article in a scientific journal (2019-06-01) Marchal, Samuel; Miettinen, Markus; Nguyen, Thien Duc; Sadeghi, Ahmad-Reza; Asokan, N.
IoT devices are being widely deployed, but the huge variance among them in security levels and network-resource requirements makes it infeasible to manage IoT networks with a single generic policy. One solution to this challenge is to define policies for classes of devices based on device type. In this paper, we present AuDI, a system for quickly and effectively identifying the type of a device in an IoT network by analyzing its network communications. AuDI models the periodic communication traffic of IoT devices using an unsupervised learning method to perform identification. In contrast to prior work, AuDI operates autonomously after initial setup, learning to identify previously unseen device types without human intervention or labeled data. AuDI can identify a device's type in any mode of operation or stage of its lifecycle. Through systematic experiments with 33 off-the-shelf IoT devices, we show that AuDI is effective (98.2% accuracy).
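The core idea in the abstract, fingerprinting a device by the periodicity of its traffic and then clustering fingerprints without labels, can be illustrated with a minimal sketch. This is not the authors' implementation; `device_traces` (a list of per-device packet-timestamp arrays) and all parameter values are hypothetical.

```python
# Toy periodicity fingerprint in the spirit of AuDI; not the paper's algorithm.
import numpy as np
from sklearn.cluster import DBSCAN

def periodicity_fingerprint(timestamps, bin_size=1.0, max_lag=120):
    """Fingerprint a device by the autocorrelation of its packet-count series;
    periodic traffic produces peaks at multiples of its period."""
    timestamps = np.asarray(timestamps, dtype=float)
    duration = timestamps.max() - timestamps.min()
    bins = max(int(duration / bin_size) + 1, max_lag + 1)
    series, _ = np.histogram(timestamps, bins=bins)
    series = series - series.mean()
    acf = np.correlate(series, series, mode="full")[len(series) - 1:]
    acf = acf / (acf[0] + 1e-12)   # normalize so lag 0 == 1
    return acf[1:max_lag + 1]

# Cluster fingerprints without labels; each cluster is a candidate device type.
fingerprints = np.stack([periodicity_fingerprint(t) for t in device_traces])
device_type = DBSCAN(eps=0.5, min_samples=2).fit_predict(fingerprints)
```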
- DoubleEcho: Mitigating context-manipulation attacks in copresence verification
A4 Article in conference proceedings (2019-03-01) Thu Truong, Hien Thi; Toivonen, Juhani; Nguyen, Thien Duc; Soriente, Claudio; Tarkoma, Sasu; Asokan, N.
Copresence verification based on context can improve usability and strengthen the security of many authentication and access control systems. By sensing and comparing their surroundings, two or more devices can tell whether they are copresent and use this information to make access control decisions. To the best of our knowledge, all context-based copresence verification mechanisms to date are susceptible to context-manipulation attacks, in which a distributed adversary replicates the same context at the (different) locations of the victim devices and induces them to believe that they are copresent. In this paper we propose DoubleEcho, a context-based copresence verification technique that leverages the acoustic Room Impulse Response (RIR) to mitigate context-manipulation attacks. In DoubleEcho, one device emits a wide-band audible chirp and all participating devices record the chirp's reflections from the surrounding environment. Since the RIR is, by its very nature, dependent on the physical surroundings, it constitutes a unique location signature that is hard for an adversary to replicate. We evaluate DoubleEcho by collecting RIR data with various mobile devices in a range of different locations. We show that DoubleEcho mitigates context-manipulation attacks, whereas all other approaches to date are entirely vulnerable to them. DoubleEcho detects copresence (or the lack thereof) in roughly 2 seconds and works on commodity devices.
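As a rough illustration of the chirp-and-compare idea, the sketch below matched-filters each recording against the known chirp to estimate an RIR envelope and compares the two envelopes. The sample rate, chirp band, window length, and threshold are illustrative assumptions, not the paper's values.

```python
# Hypothetical RIR-comparison sketch inspired by DoubleEcho.
import numpy as np
from scipy.signal import chirp, fftconvolve

FS = 44_100  # assumed sample rate

def estimate_rir(recording, probe, fs=FS):
    """Matched-filter the recording with the chirp; the strongest peak is the
    direct path, and what follows are the room reflections."""
    corr = np.abs(fftconvolve(recording, probe[::-1], mode="valid"))
    onset = int(np.argmax(corr))            # direct-path arrival
    h = corr[onset : onset + fs // 10]      # keep ~100 ms of reflections
    return h / (np.linalg.norm(h) + 1e-12)

def copresent(rec_a, rec_b, probe, threshold=0.8):
    """Similar RIR envelopes -> likely the same room."""
    ha, hb = estimate_rir(rec_a, probe), estimate_rir(rec_b, probe)
    n = min(len(ha), len(hb))
    return float(ha[:n] @ hb[:n]) >= threshold

t = np.linspace(0.0, 0.5, int(0.5 * FS), endpoint=False)
probe = chirp(t, f0=1_000, t1=0.5, f1=8_000)  # wide-band audible sweep
```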
- FLAME: Taming Backdoors in Federated Learning
A4 Article in conference proceedings (2022) Nguyen, Thien Duc; Rieger, Phillip; Chen, Huili; Yalame, Hossein; Möllering, Helen; Fereidooni, Hossein; Marchal, Samuel; Miettinen, Markus; Mirhoseini, Azalia; Zeitouni, Shaza; Koushanfar, Farinaz; Sadeghi, Ahmad Reza; Schneider, Thomas
Federated Learning (FL) is a collaborative machine learning approach that allows participants to jointly train a model without sharing their private, potentially sensitive local datasets with others. Despite its benefits, FL is vulnerable to so-called backdoor attacks, in which an adversary injects manipulated model updates into the federated model aggregation process so that the resulting model provides targeted false predictions for specific adversary-chosen inputs. Proposed defenses that detect and filter out malicious model updates consider only very specific and limited attacker models, whereas defenses based on differential-privacy-inspired noise injection significantly deteriorate the benign performance of the aggregated model. To address these deficiencies, we introduce FLAME, a defense framework that estimates the amount of noise sufficient to eliminate backdoors. To minimize the required amount of noise, FLAME uses a model clustering and weight clipping approach. This ensures that FLAME can maintain the benign performance of the aggregated model while effectively eliminating adversarial backdoors. Our evaluation of FLAME on several datasets from application areas including image classification, word prediction, and IoT intrusion detection demonstrates that FLAME removes backdoors effectively with a negligible impact on the benign performance of the models.
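The abstract names three steps: cluster the updates, clip their norms, and add just enough noise. A minimal sketch of that pipeline follows; the paper derives the noise level analytically, whereas `noise_scale` here is an illustrative stand-in and the clustering setup is simplified.

```python
# Illustrative cluster -> clip -> noise aggregation, loosely after FLAME.
import numpy as np
from sklearn.cluster import HDBSCAN                    # scikit-learn >= 1.3
from sklearn.metrics.pairwise import cosine_distances

def flame_style_aggregate(updates, noise_scale=0.001):
    """updates: list of flattened client model-update vectors."""
    X = np.stack(updates)
    n = len(updates)
    # 1) Filter: cluster by pairwise cosine distance and keep the majority
    #    cluster; poisoned updates tend to point in atypical directions.
    labels = HDBSCAN(min_cluster_size=n // 2 + 1,
                     metric="precomputed").fit_predict(cosine_distances(X))
    clustered = labels[labels >= 0]
    keep = (labels == np.bincount(clustered).argmax()) if clustered.size \
        else np.ones(n, dtype=bool)      # no cluster found: keep everything
    # 2) Clip: scale each accepted update down to the median L2 norm.
    norms = np.linalg.norm(X[keep], axis=1)
    bound = np.median(norms)
    clipped = X[keep] * np.minimum(1.0, bound / (norms + 1e-12))[:, None]
    # 3) Noise: average, then add Gaussian noise scaled to the clipping bound.
    agg = clipped.mean(axis=0)
    return agg + np.random.normal(0.0, noise_scale * bound, size=agg.shape)
```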
- Revisiting context-based authentication in IoT
A4 Article in conference proceedings (2018-06-24) Miettinen, Markus; Nguyen, Thien Duc; Sadeghi, Ahmad Reza; Asokan, N.
The emergence of IoT poses new challenges for authenticating numerous, very heterogeneous IoT devices to their respective trust domains. Passwords and pre-defined keys have drawbacks that limit their use in IoT scenarios. Recent works propose using contextual information about the ambient physical properties of a device's surroundings as a shared secret to mutually authenticate devices that are co-located, e.g., in the same room. In this paper, we analyze these context-based authentication solutions with regard to their security and their requirements on context quality. We quantify their achievable security based on empirical real-world data from context measurements in typical IoT environments.
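To make the class of schemes being analyzed concrete, here is a toy context fingerprint compared by Hamming distance; real proposals use richer ambient features and fuzzy cryptographic primitives rather than a plain threshold, and the window count and tolerance below are invented for illustration.

```python
# Toy context fingerprint: 1 bit per time window of ambient measurements.
import numpy as np

def context_fingerprint(samples, n_bits=64):
    """One bit per window: does the window's ambient energy (e.g., audio
    loudness) exceed the median energy across windows?"""
    windows = np.array_split(np.asarray(samples, dtype=float), n_bits)
    energy = np.array([np.mean(w ** 2) for w in windows])
    return (energy > np.median(energy)).astype(np.uint8)

def same_context(fp_a, fp_b, max_mismatch=0.15):
    """Accept if the Hamming distance is small; the tolerance trades false
    accepts (weaker security) against false rejects (worse usability)."""
    return np.mean(fp_a != fp_b) <= max_mismatch
```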
- SAFELearn: Secure Aggregation for private FEderated Learning
A4 Article in conference proceedings (2021-05) Fereidooni, Hossein; Marchal, Samuel; Miettinen, Markus; Mirhoseini, Azalia; Möllering, Helen; Nguyen, Thien Duc; Rieger, Phillip; Sadeghi, Ahmad Reza; Schneider, Thomas; Yalame, Hossein; Zeitouni, Shaza
Federated learning (FL) is an emerging distributed machine learning paradigm that addresses critical data privacy issues by enabling clients to jointly train a global model via an aggregation server (aggregator) without revealing their training data. This not only improves privacy but is also efficient, as it uses the computation power and data of potentially millions of clients training in parallel. However, FL is vulnerable to so-called inference attacks by malicious aggregators, which can infer information about clients' data from their model updates. Secure aggregation restricts the central aggregator to learning only the sum or average of the clients' updates. Unfortunately, existing secure aggregation protocols for FL suffer from high communication and computation overheads and require many communication rounds. In this work, we present SAFELearn, a generic design for efficient private FL systems that uses secure aggregation to protect against inference attacks that require access to individual clients' model updates. It is flexibly adaptable to the efficiency and security requirements of various FL applications and can be instantiated with MPC or FHE. In contrast to previous works, we need only 2 rounds of communication per training iteration, do not use any expensive cryptographic primitives on clients, tolerate dropouts, and do not rely on a trusted third party. We implement and benchmark an instantiation of our generic design with secure two-party computation. Our implementation aggregates 500 models with more than 300K parameters in less than 0.5 seconds.
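The "aggregator learns only the sum" property can be illustrated with plain additive secret sharing between two non-colluding servers; SAFELearn's actual instantiation uses optimized secure two-party computation, so the field modulus, fixed-point scaling, and `client_updates` below are illustrative assumptions only.

```python
# Additive-secret-sharing sketch of secure aggregation; not SAFELearn's protocol.
import numpy as np

PRIME = 2**31 - 1   # field modulus (illustrative size)
SCALE = 2**16       # fixed-point scaling for float model weights

def share(update, rng=None):
    """Split one client's update vector into two additive shares mod PRIME."""
    rng = rng or np.random.default_rng()
    fixed = np.round(np.asarray(update) * SCALE).astype(np.int64) % PRIME
    r = rng.integers(0, PRIME, size=fixed.shape, dtype=np.int64)
    return r, (fixed - r) % PRIME     # one share per (non-colluding) server

def aggregate(shares_s0, shares_s1, n_clients):
    """Each server sums its shares locally; combining the two sums reveals
    only the average update, never any individual client's update."""
    total = (sum(shares_s0) + sum(shares_s1)) % PRIME
    signed = np.where(total > PRIME // 2, total - PRIME, total)
    return signed.astype(np.float64) / SCALE / n_clients

# Example: each client splits its update between the two servers.
s0, s1 = zip(*(share(u) for u in client_updates))
average_update = aggregate(list(s0), list(s1), n_clients=len(client_updates))
```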