Filtering Diverse Moving Objects For Enhanced 3D Lidar-Based SLAM.

School of Electrical Engineering | Master's thesis

Language

en

Pages

49

Abstract

Three-dimensional (3D) environmental models, in the form of 3D maps, are essential for mobile robots to perform tasks such as localization and navigation. LiDAR sensors are commonly used for 3D mapping because their active light source provides high-precision, dense data regardless of lighting conditions or motion. The full environment map is generated by successively accumulating LiDAR scan data at each time step, where each scan is represented as a 3D point cloud. Accumulating the scans requires transforming them into a common global frame, using poses that can be obtained through simultaneous localization and mapping (SLAM). The SLAM problem is simpler under the assumption of a static environment. However, LiDAR data unavoidably captures moving objects such as vehicles, pedestrians, and other dynamic entities. These moving objects degrade scan-matching accuracy and may introduce artifacts into the map, appearing as false objects. Consequently, detecting and removing dynamic objects is essential to ensure a clean and consistent map representation of the robot’s operational environment.

Most existing methods for dynamic object detection operate offline as post-processing steps. These approaches depend on accurate pose information but do not improve pose estimation during the mapping process. On the other hand, emerging research on online dynamic object detection primarily employs learning-based methods. These methods face several challenges: they require extensive labeling, suffer from limited generalizability due to the lack of diverse, high-quality training datasets, and demand significant energy resources for training.

To address these challenges, this work presents an online, non-learning approach that detects moving objects regardless of their shape or category, without the need for costly training. The method is based on the observation that most moving objects are in contact with the ground. Moving objects are detected by comparing the ground position observed in previous scans with that in the current scan: an object is considered to be moving if the ground beneath it was previously observed as vacant before the object occupied the space. This can be detected online by analyzing the height difference between previously accumulated scans and the current scan. The method is experimentally validated on the SemanticKITTI dataset, demonstrating strong potential for the online removal of dynamic objects during mapping. It achieves high preservation and removal rates, yielding promising performance compared to state-of-the-art techniques.
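The core test described in the abstract (a point is flagged as moving when it occupies space above ground that earlier scans observed as vacant) can be illustrated with a minimal grid-based sketch. This is not the thesis's implementation; the function name, cell size, and height threshold below are illustrative assumptions, and both point clouds are assumed to already be expressed in a common global frame.

```python
import numpy as np

def detect_moving_points(accumulated_points, current_points,
                         cell_size=0.5, height_threshold=0.5):
    """Flag points in the current scan whose ground cell was previously
    observed as vacant but is now occupied above ground level.

    Both inputs are (N, 3) arrays of x, y, z coordinates in a common
    global frame. Returns a boolean mask over current_points.
    Parameter values are illustrative defaults, not the thesis's settings.
    """
    def cell_keys(points):
        # Quantize x, y coordinates into 2D grid cell indices.
        return np.floor(points[:, :2] / cell_size).astype(np.int64)

    # Estimate the ground height per cell from the accumulated map
    # as the minimum observed z value in that cell.
    ground = {}
    for key, z in zip(map(tuple, cell_keys(accumulated_points)),
                      accumulated_points[:, 2]):
        if key not in ground or z < ground[key]:
            ground[key] = z

    # Record, per cell, the maximum height previously observed above ground.
    prev_top = {}
    for key, z in zip(map(tuple, cell_keys(accumulated_points)),
                      accumulated_points[:, 2]):
        h = z - ground[key]
        if key not in prev_top or h > prev_top[key]:
            prev_top[key] = h

    # A current point is flagged as belonging to a moving object when its
    # cell was previously seen as vacant (only near-ground returns) but the
    # point now sits clearly above the stored ground height.
    mask = np.zeros(len(current_points), dtype=bool)
    for i, (key, z) in enumerate(zip(map(tuple, cell_keys(current_points)),
                                     current_points[:, 2])):
        if key in ground:
            was_vacant = prev_top[key] < height_threshold
            is_elevated = (z - ground[key]) > height_threshold
            mask[i] = was_vacant and is_elevated
    return mask
```

The sketch trades the online, per-scan accumulation described in the abstract for a single batch comparison, which keeps the ground-vacancy test easy to follow; an online variant would update the per-cell ground and occupancy statistics incrementally as each registered scan arrives.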

Supervisor

Kucner, Tomasz

Thesis advisor

Ahtiainen, Juhana
