Parallel Training of Neural Networks in 6G L1

Perustieteiden korkeakoulu | Master's thesis
Date
2023-08-21
Major/Subject
Security and Cloud Computing
Mcode
SCI3113
Degree programme
Master’s Programme in Security and Cloud Computing (SECCLO)
Language
en
Pages
58
Abstract
Introduction and Background: This Master's thesis focuses on optimizing the training of neural networks for 6G communication systems through parallel training. The introductory section establishes the central role of neural networks in 6G and introduces parallel training as a key enhancement to the training process. Part of the background covers neural networks in 6G L1 (physical-layer) processing.

Methods: Using the CIFAR-10 and ImageNet datasets, the study employs ResNet models as proxies for 6G L1 models. The emphasis is on optimizing parallel training with PyTorch's DistributedDataParallel (DDP) on single, non-distributed GPU servers and in Kubernetes environments. Disk bandwidth, data loading, and GPU throughput are profiled to make better use of all available resources.

Results and Discussion: The results show that parallel training with DDP significantly improves training performance. The analysis includes the effect of global batch size on performance and a Multi-Instance GPU (MIG) study. The discussion compares the findings with existing literature and explores implications, limitations, and avenues for future research.

Conclusion and Future Directions: The thesis confirms that parallel training with DDP is an effective way to optimize neural network training for 6G systems. Recommendations for future work include Kubernetes and Kubeflow integration, exploring alternative neural network architectures, and refining learning rate scaling in parallel training. The thesis also emphasizes environmental sustainability, advocating DDP and MIG technologies for resource-efficient AI development.
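As an illustration of the approach described in the abstract, the sketch below shows a minimal single-node, multi-GPU DDP training loop for a torchvision ResNet-18 on CIFAR-10, launched with torchrun. It is an assumption-laden example rather than the thesis code: the hyperparameters (per-GPU batch size 128, base learning rate 0.1, 10 epochs), the linear learning-rate scaling heuristic, and the file name train.py are placeholders.

```python
# Illustrative DDP sketch (not the thesis code): single-node multi-GPU
# training of a torchvision ResNet-18 on CIFAR-10, launched with torchrun.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler
import torchvision
import torchvision.transforms as T


def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each worker process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # ResNet-18 as a stand-in model; each process holds a full replica.
    model = torchvision.models.resnet18(num_classes=10).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    dataset = torchvision.datasets.CIFAR10(
        root="./data", train=True, download=True, transform=T.ToTensor()
    )
    # DistributedSampler gives each rank a disjoint shard of the dataset,
    # so the global batch size is per_gpu_batch * world_size.
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=128, sampler=sampler,
                        num_workers=4, pin_memory=True)

    criterion = torch.nn.CrossEntropyLoss()
    # Common linear-scaling heuristic: grow the learning rate with the
    # number of GPUs (illustrative; not necessarily the thesis setting).
    base_lr = 0.1
    optimizer = torch.optim.SGD(model.parameters(),
                                lr=base_lr * dist.get_world_size(),
                                momentum=0.9)

    for epoch in range(10):
        sampler.set_epoch(epoch)  # reshuffle shards each epoch
        for images, labels in loader:
            images = images.cuda(local_rank, non_blocking=True)
            labels = labels.cuda(local_rank, non_blocking=True)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()   # DDP all-reduces gradients across GPUs here
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Such a script would typically be launched on one server with `torchrun --nproc_per_node=<num_gpus> train.py` (file name assumed), which matches the non-distributed, single-node GPU setting the abstract mentions.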
Supervisor
Jung, Alexander
Thesis advisor
Tuononen, Marko
Keywords
Neural Networks, Parallel Training, 6G L1, Distributed Data Parallel