Unsupervised deep learning for semantic segmentation of multispectral LiDAR forest point clouds
Access rights
openAccess
CC BY
publishedVersion
A1 Original article in a scientific journal
This publication is imported from Aalto University research portal.
View publication in the Research portal (opens in new window)
View/Open full text file from the Research portal (opens in new window)
Unless otherwise stated, all rights belong to the author. You may download, display, and print this publication for your own personal use. Commercial use is prohibited.
Language
en
Pages
29
Series
ISPRS Journal of Photogrammetry and Remote Sensing, Volume 228, pp. 694-722
Abstract
Point clouds captured with laser scanning systems from forest environments can be utilized in a wide variety of applications within forestry and plant ecology, such as the estimation of tree stem attributes, leaf angle distribution, and above-ground biomass. However, effectively utilizing the data in such tasks requires the semantic segmentation of the data into wood and foliage points, also known as leaf–wood separation. The traditional approach to leaf–wood separation has been geometry- and radiometry-based unsupervised algorithms, which tend to perform poorly on data captured with airborne laser scanning (ALS) systems, even with a high point density (>1,000 points/m²). While recent machine and deep learning approaches achieve great results even on sparse point clouds, they require manually labeled training data, which is often extremely laborious to produce. Multispectral (MS) information has been demonstrated to have potential for improving the accuracy of leaf–wood separation, but quantitative assessment of its effects has been lacking. This study proposes a fully unsupervised deep learning method, GrowSP-ForMS, which is specifically designed for leaf–wood separation of high-density MS ALS point clouds (acquired with wavelengths 532, 905, and 1550 nm) and based on the GrowSP architecture. GrowSP-ForMS achieved a mean accuracy of 84.3% and a mean intersection over union (mIoU) of 69.6% on our MS test set, outperforming the unsupervised reference methods by a significant margin. When compared to supervised deep learning methods, our model performed similarly to the slightly older PointNet architecture but was outclassed by more recent approaches. Finally, two ablation studies were conducted, which demonstrated that our proposed changes increased the test set mIoU of GrowSP-ForMS by 29.4 percentage points (pp) in comparison to the original GrowSP model, and that utilizing MS data improved the mIoU by 5.6 pp from the monospectral case.
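The abstract reports results as mean intersection over union (mIoU) over the two classes (wood and foliage). For readers unfamiliar with the metric, the following is a minimal illustrative sketch of how per-class IoU and mIoU are typically computed for per-point labels; the function name and toy labels are hypothetical, not taken from the GrowSP-ForMS code.

```python
import numpy as np

def mean_iou(pred, gt, num_classes=2):
    """Mean intersection over union for per-point class labels.

    IoU per class c = |pred==c AND gt==c| / |pred==c OR gt==c|;
    mIoU averages over classes present in prediction or ground truth.
    """
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:  # skip classes absent from both label sets
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy example: 0 = foliage, 1 = wood (labels are made up for illustration)
gt   = np.array([0, 0, 0, 1, 1, 1])
pred = np.array([0, 0, 1, 1, 1, 0])
# class 0: intersection 2, union 4 -> 0.5; class 1: intersection 2, union 4 -> 0.5
print(mean_iou(pred, gt))  # 0.5
```

Note that mIoU penalizes false positives and false negatives for both classes symmetrically, which is why it is often preferred over plain accuracy for imbalanced wood/foliage point distributions.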
For reproducibility, we release the GrowSP-ForMS source code and pretrained weights (https://github.com/ruoppa/GrowSP-ForMS), along with the multispectral data set (https://zenodo.org/records/15913427).
Description
Publisher Copyright: © 2025 The Authors
Citation
Ruoppa, L, Oinonen, O, Taher, J, Lehtomäki, M, Takhtkeshha, N, Kukko, A, Kaartinen, H & Hyyppä, J 2025, 'Unsupervised deep learning for semantic segmentation of multispectral LiDAR forest point clouds', ISPRS Journal of Photogrammetry and Remote Sensing, vol. 228, pp. 694-722. https://doi.org/10.1016/j.isprsjprs.2025.07.038