Distributed Assignment with Load Balancing for DNN Inference at the Edge
Access rights
openAccess
A1 Original article in a scientific journal
This publication is imported from Aalto University research portal.
Date
2023-01-15
Language
en
Pages
13
Series
IEEE Internet of Things Journal, article number 9882293
Abstract
Inference carried out on pre-trained deep neural networks (DNNs) is particularly effective as it does not require re-training and entails no loss in accuracy. Unfortunately, resource-constrained devices such as those in the Internet of Things may need to offload the related computation to more powerful servers, particularly at the network edge. However, edge servers have limited resources compared to those in the cloud; therefore, inference offloading generally requires dividing the original DNN into different pieces that are then assigned to multiple edge servers. Related approaches in the state of the art either make strong assumptions on the system model or fail to provide strict performance guarantees. This article specifically addresses these limitations by applying distributed assignment to deep neural network inference at the edge. In particular, it devises a detailed model of DNN-based inference, suitable for realistic scenarios involving edge computing. Optimal inference offloading with load balancing is also defined as a multiple assignment problem that maximizes proportional fairness. Moreover, a distributed algorithm for DNN inference offloading is introduced to solve such a problem in polynomial time with strong optimality guarantees. Finally, extensive simulations employing different datasets and DNN architectures establish that the proposed solution significantly improves upon the state of the art in terms of inference time (1.14 to 2.62 times faster), load balance (with a Jain's fairness index of 0.9), and convergence (one order of magnitude fewer iterations).
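To make the load-balancing objective concrete: the abstract measures balance with Jain's fairness index and frames offloading as proportional-fairness maximization. The Python sketch below is a rough illustration only, not the paper's distributed algorithm; it computes Jain's index over per-server loads and uses a hypothetical greedy heuristic (all names here are made up for illustration) to spread the pieces of a partitioned DNN across edge servers.

def jain_index(loads):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2); 1.0 means perfectly balanced."""
    n = len(loads)
    total = sum(loads)
    return total * total / (n * sum(x * x for x in loads))

def greedy_assign(piece_costs, n_servers):
    """Toy heuristic (not the paper's algorithm): place each DNN piece on the
    currently least-loaded server, largest pieces first."""
    loads = [0.0] * n_servers
    for cost in sorted(piece_costs, reverse=True):
        target = min(range(n_servers), key=lambda s: loads[s])
        loads[target] += cost
    return loads

# Hypothetical per-piece compute costs for a partitioned DNN and 3 edge servers.
pieces = [4.0, 3.0, 3.0, 2.0, 2.0, 1.0]
loads = greedy_assign(pieces, n_servers=3)
print("per-server loads:", loads)                 # [5.0, 5.0, 5.0]
print(f"Jain's index: {jain_index(loads):.3f}")   # 1.000

Proportional fairness itself would maximize the sum of logarithmic utilities across servers rather than greedily leveling loads; the paper's distributed algorithm additionally offers the polynomial-time convergence and optimality guarantees that this toy heuristic does not.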
Keywords
Servers, Task analysis, Computational modeling, Internet of Things, Training, Edge computing, Computer architecture
Citation
Xu, Y., Mohammed, T., Di Francesco, M. & Fischione, C. 2023, 'Distributed Assignment with Load Balancing for DNN Inference at the Edge', IEEE Internet of Things Journal, vol. 10, no. 2, 9882293, pp. 1053-1065. https://doi.org/10.1109/JIOT.2022.3205410