FedSock & FedBlue: Scalable decentralized federated learning systems with generalized total variation minimization (GTV-Min)


School of Electrical Engineering | Master's thesis

Language

en

Pages

56

Abstract

Decentralized Federated Learning (DFL) has emerged in recent years as a prominent server-free paradigm for secure, collaborative model training across fully decentralized edge clients. In DFL, model aggregation with Generalized Total Variation Minimization (GTV-Min) assumes synchronous message communication among client nodes. In practice, however, real-time message transmission is susceptible to network interruptions and often plagued by message asynchrony and loss. This thesis presents the design, implementation, and evaluation of two robust DFL systems with GTV-Min, FedSock and FedBlue, that account for asynchronous, concurrent network communication. FedSock was implemented with the socket API for TCP/IP-based communication and tested across eight nodes on an Independent and Identically Distributed (IID) dataset. In contrast, FedBlue was developed and deployed as an Android application using Bluetooth Low Energy (BLE) between two nodes on a non-IID dataset. Both DFL systems carried out linear regression tasks across a range of GTV-Min regularization parameters (α ∈ [0, 2]) and demonstrated resilience and scalability in operational testing under network interruptions. For FedSock, performance analysis over varying regularization strengths revealed node-specific convergence behaviors: even small variations among the IID dataset subsets induced significant changes in each node's optimal regularization strength for training and validation. Conversely, FedBlue exhibited the expected performance differences, with diametrically opposite trends for the two nodes (Device 1 and Device 2) across the α range, driven by the non-IID dataset. GTV-Min provided effective regularization for FedBlue Device 1, with an optimum at α = 0.25 balancing the bias-variance trade-off.
For FedBlue Device 2, however, GTV-Min regularization acted as a constraint that degraded performance relative to the non-regularized baseline (α = 0). Overall, the experimental results of FedSock and FedBlue highlight the intrinsic performance heterogeneity across DFL client nodes irrespective of data distribution.
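To make the GTV-Min formulation described above concrete, the sketch below shows one synchronous gradient step for decentralized linear regression: each node minimizes its local least-squares loss plus a total-variation penalty α·Σ‖w_i − w_j‖² over network edges. This is a minimal illustrative sketch, not the thesis implementation — the thesis systems use asynchronous socket/BLE messaging, and all function names, the step size, and the toy two-node graph (mirroring FedBlue's two devices) are assumptions for illustration.

```python
import numpy as np

def gtvmin_step(weights, data, edges, alpha, lr=0.01):
    """One synchronous GTV-Min gradient step over all nodes (illustrative).

    Objective: sum_i ||X_i w_i - y_i||^2 + alpha * sum_{(i,j)} ||w_i - w_j||^2
    """
    new_weights = {}
    for i, w in weights.items():
        X, y = data[i]
        grad = 2 * X.T @ (X @ w - y)           # local least-squares gradient
        for j in edges.get(i, []):              # TV coupling with neighbors
            grad += 2 * alpha * (w - weights[j])
        new_weights[i] = w - lr * grad
    return new_weights

# Toy two-node example (hypothetical data, not the thesis dataset).
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
data = {}
for i in (0, 1):
    X = rng.normal(size=(20, 2))
    y = X @ true_w + 0.1 * rng.normal(size=20)
    data[i] = (X, y)
edges = {0: [1], 1: [0]}                        # two connected nodes
weights = {0: np.zeros(2), 1: np.zeros(2)}
for _ in range(200):
    weights = gtvmin_step(weights, data, edges, alpha=0.25)
```

With α = 0.25 (the optimum reported for FedBlue Device 1), the TV term pulls the two local models toward each other while each still fits its own data; setting α = 0 recovers the non-regularized baseline of fully independent local training.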

Supervisor

Jung, Alex
