Parallel Training of Neural Networks in 6G L1

dc.contributor: Aalto-yliopisto [fi]
dc.contributor: Aalto University [en]
dc.contributor.advisor: Tuononen, Marko
dc.contributor.author: Jiang, Guangkai
dc.contributor.school: Perustieteiden korkeakoulu (School of Science) [fi]
dc.contributor.supervisor: Jung, Alexander
dc.date.accessioned: 2023-08-27T17:11:59Z
dc.date.available: 2023-08-27T17:11:59Z
dc.date.issued: 2023-08-21
dc.description.abstract [en]:

Introduction and Background: This Master's thesis focuses on optimizing the training of neural networks for 6G communication systems through parallel training. The introduction establishes the crucial role of neural networks in 6G and presents parallel training as a key enhancement method. The background covers neural networks in 6G Layer 1 (L1) processing.

Methods: Using the CIFAR-10 and ImageNet datasets, the study employs ResNet models as proxies for 6G L1 models. The emphasis is on optimizing parallel training with PyTorch's DistributedDataParallel (DDP) on single-node (non-distributed) GPU servers and in Kubernetes environments. Disk bandwidth, data loading, and GPU throughput are profiled to make better use of all available resources.

Results and Discussion: The results show that parallel training with DDP significantly improves training performance. The analysis includes the effect of global batch size on performance and a Multi-Instance GPU (MIG) study. The discussion compares the findings with existing literature and explores implications, limitations, and avenues for future research.

Conclusion and Future Directions: The thesis confirms that parallel training with DDP optimizes neural network training for 6G systems. Recommendations for future work include Kubernetes and KubeFlow integration, exploring alternative neural network architectures, and refining learning-rate scaling in parallel training. The thesis also emphasizes environmental sustainability, advocating DDP and MIG technologies for resource-efficient AI development.
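The abstract's mention of global batch size effects and learning-rate scaling points at a common heuristic in data-parallel training: when DDP grows the global batch size with the number of workers, the learning rate is often scaled proportionally (the linear scaling rule). A minimal sketch of that rule, not taken from the thesis itself; the base learning rate and batch sizes below are illustrative assumptions:

```python
def scaled_lr(base_lr: float, base_batch: int,
              world_size: int, per_gpu_batch: int) -> float:
    """Linear scaling rule: grow the learning rate in proportion
    to the global (effective) batch size across all DDP workers."""
    global_batch = world_size * per_gpu_batch
    return base_lr * global_batch / base_batch

# Example: a recipe tuned at lr=0.1 for batch size 256,
# run data-parallel on 8 GPUs with 256 samples per GPU.
print(scaled_lr(0.1, 256, 8, 256))  # -> 0.8 (8x the base learning rate)
```

In practice this rule is usually combined with a warm-up phase, since applying the scaled rate from the first step can destabilize training at large global batch sizes.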
dc.format.extent: 58
dc.format.mimetype: application/pdf [en]
dc.identifier.uri: https://aaltodoc.aalto.fi/handle/123456789/122835
dc.identifier.urn: URN:NBN:fi:aalto-202308275176
dc.language.iso: en [en]
dc.programme: Master's Programme in Security and Cloud Computing (SECCLO) [fi]
dc.programme.major: Security and Cloud Computing [fi]
dc.programme.mcode: SCI3113 [fi]
dc.subject.keyword: Neural Networks [en]
dc.subject.keyword: Parallel Training [en]
dc.subject.keyword: 6G L1 [en]
dc.subject.keyword: Distributed Data Parallel [en]
dc.title: Parallel Training of Neural Networks in 6G L1 [en]
dc.type: G2 Pro gradu, diplomityö (Master's thesis) [fi]
dc.type.ontasot: Master's thesis [en]
dc.type.ontasot: Diplomityö [fi]
local.aalto.electroniconly: yes
local.aalto.openaccess: yes
Files
Original bundle
Name: master_Jiang_Guangkai_2023.pdf
Size: 3.9 MB
Format: Adobe Portable Document Format