Machine Learning Approach for Generating Realistic 3D Point Clouds for Automotive Simulators
School of Electrical Engineering | Master's thesis
Authors
Puthanpura, Jithinlal
Date
2024-12-28
Major/Subject
Autonomous Systems
Degree programme
Master's Programme in ICT Innovation
Language
en
Pages
67
Abstract
This thesis demonstrates the use of Machine Learning (ML) models to generate sensor data, specifically from vision-based cameras, for use in automotive simulators. The feed from the camera systems is used to generate 3D point clouds of the driving environment. In this thesis, the camera feed is never used directly; instead, the models operate on preprocessed point clouds generated from the camera feeds. Current automotive simulators typically employ physics-based rendering to generate point clouds that represent obstacles, but these simulations often fail to accurately reflect real-world conditions. Incorporating machine learning into the generation process aims to improve the reliability of the simulator and to produce sensor data that is more comparable to real-world driving. The primary objective of this thesis is to evaluate and compare ML algorithms that generate realistic point clouds from 3D bounding boxes representing road obstacles, such as vehicles. The research explores several custom ML models, including Fully Connected networks (FC), Variational Autoencoders (VAE), and Generative Adversarial Networks (GAN), as well as models that operate on 2D projected grid inputs, such as U-Net and LMNet. Evaluation with Chamfer Distance (CD) and Earth Mover's Distance (EMD) indicates that a Conditional Generative Adversarial Network (CGAN)-style network with an additional reconstruction loss outperforms all other networks in generating 3D point clouds.
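For readers unfamiliar with the two metrics named in the abstract, the sketch below shows one common formulation of Chamfer Distance and a one-to-one matching form of Earth Mover's Distance in NumPy/SciPy. It is illustrative only: the function names, the equal-size assumption for EMD, and the exact averaging convention are assumptions for this sketch, not the implementation used in the thesis.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric Chamfer Distance between point clouds p (N, 3) and q (M, 3)."""
    # Pairwise squared Euclidean distances, shape (N, M).
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    # Average nearest-neighbour distance in both directions.
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())

def earth_movers_distance(p: np.ndarray, q: np.ndarray) -> float:
    """EMD taken as the cost of an optimal one-to-one matching; assumes N == M."""
    d = np.sqrt(np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1))
    rows, cols = linear_sum_assignment(d)  # Hungarian-style optimal assignment
    return float(d[rows, cols].mean())

# Example usage on random clouds of equal size.
rng = np.random.default_rng(0)
a, b = rng.normal(size=(512, 3)), rng.normal(size=(512, 3))
print(chamfer_distance(a, b), earth_movers_distance(a, b))
```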
Supervisor
Zhou, Quan
Thesis advisor
Gaim, Wolfgain
Keywords
PointClouds, 3D bounding box, GAN, 3D reconstruction, range image, VAE