Semantic segmentation of raw multispectral laser scanning data from urban environments with deep neural networks

Access rights

openAccess

URL

Journal Title

Journal ISSN

Volume Title

A1 Original article in a scientific journal

Date

2024-04

Major/Subject

Mcode

Degree programme

Language

en

Pages

17

Series

ISPRS Open Journal of Photogrammetry and Remote Sensing, Volume 12, pp. 1-17

Abstract

Real-time semantic segmentation of point clouds is of increasing importance in applications related to 3D city modelling and mapping, automated forest inventory, autonomous driving and mobile robotics. Current state-of-the-art point cloud semantic segmentation methods rely heavily on the availability of 3D laser scanning data. This is problematic for low-latency, real-time applications that use data from high-precision mobile laser scanners, as those are typically 2D line scanning devices. In this study, we experiment with real-time semantic segmentation of high-density multispectral point clouds collected from 2D line scanners in urban environments using encoder-decoder convolutional neural network architectures. We introduce a rasterized multi-scan input format that can be constructed exclusively from the raw (non-georeferenced) 2D laser scanner profile stream without odometry information. In addition, we investigate the impact of multispectral data on the segmentation accuracy. The dataset used for training, validation and testing was collected with the multispectral FGI AkhkaR4-DW backpack laser scanning system operating at the wavelengths of 905 nm and 1550 nm, and consists of 228 million points in total (39 583 scans). The data was divided into 13 classes representing various targets in urban environments. The results show that the increased spatial context of the multi-scan format improves the segmentation performance on the single-wavelength lidar dataset from 45.4 mIoU (a single scan) to 62.1 mIoU (24 consecutive scans). In the multispectral point cloud experiments we achieved 71 % and 28 % relative increases in segmentation mIoU (43.5 mIoU) compared to the purely single-wavelength reference experiments, which reached 25.4 mIoU (905 nm) and 34.1 mIoU (1550 nm). Our findings show that it is possible to semantically segment 2D line scanner data with good results by combining consecutive scans, without the need for odometry information. The results also serve as motivation for developing multispectral mobile laser scanning systems that can be used in challenging urban surveys.
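For illustration only, the sketch below (in Python/PyTorch, an assumed framework not stated in this record) shows the general idea described in the abstract: stacking a fixed number of consecutive raw 2D scan profiles into an image-like multi-channel raster, without any odometry, and passing it through a small encoder-decoder CNN that outputs per-point class logits. The names rasterize_scans and EncoderDecoder, the channel layout and all sizes are hypothetical and do not come from the paper.

# Minimal sketch, not the authors' implementation: rasterize consecutive
# raw scan profiles and segment them with a tiny encoder-decoder CNN.
import torch
import torch.nn as nn

def rasterize_scans(scans, num_scans=24):
    # `scans` is assumed to be a float tensor of shape
    # (num_scans, points_per_scan, channels), where the channels could hold
    # e.g. range plus 905 nm and 1550 nm intensities. No odometry is used;
    # the scans are simply stacked in acquisition order.
    raster = scans[:num_scans]                     # (S, P, C)
    return raster.permute(2, 0, 1).unsqueeze(0)    # (1, C, S, P) image-like tensor

class EncoderDecoder(nn.Module):
    # Tiny encoder-decoder producing one logit map per class.
    def __init__(self, in_channels=3, num_classes=13):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_classes, 1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

if __name__ == "__main__":
    # 24 consecutive scans, 512 points each, 3 channels (range + two intensities)
    dummy = torch.rand(24, 512, 3)
    x = rasterize_scans(dummy)                          # (1, 3, 24, 512)
    logits = EncoderDecoder(in_channels=3, num_classes=13)(x)
    print(logits.shape)                                 # torch.Size([1, 13, 24, 512])

In this sketch the scan index plays the role of image height and the point index within a scan the role of image width, so ordinary 2D convolutions supply the increased spatial context that the abstract attributes to the 24-scan input; the actual architecture, raster channels and training procedure are described in the full article.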

Description

Publisher Copyright: © 2024 The Authors

Keywords

Convolutional neural network, Deep learning, Mobile laser scanning, Multispectral point cloud, Real-time, Semantic segmentation

Other note

Citation

Reichler, M., Taher, J., Manninen, P., Kaartinen, H., Hyyppä, J. & Kukko, A. 2024, 'Semantic segmentation of raw multispectral laser scanning data from urban environments with deep neural networks', ISPRS Open Journal of Photogrammetry and Remote Sensing, vol. 12, 100061, pp. 1-17. https://doi.org/10.1016/j.ophoto.2024.100061