Semantic segmentation of point cloud data using raw laser scanner measurements and deep neural networks

dc.contributor.author: Kaijaluoto, Risto
dc.contributor.author: Kukko, Antero
dc.contributor.author: El Issaoui, Aimad
dc.contributor.author: Hyyppä, Juha
dc.contributor.author: Kaartinen, Harri
dc.contributor.organization: fi=maantiede|en=Geography
dc.contributor.organization-code: 2606901
dc.converis.publication-id: 381254344
dc.converis.url: https://research.utu.fi/converis/portal/Publication/381254344
dc.date.accessioned: 2025-08-28T03:38:46Z
dc.date.available: 2025-08-28T03:38:46Z
dc.description.abstract: Deep learning methods based on convolutional neural networks have been shown to give excellent results in semantic segmentation of images, but the inherent irregularity of point cloud data complicates their use in semantically segmenting 3D laser scanning data. To overcome this problem, point cloud networks specialized for the purpose have been implemented since 2017, but finding the most appropriate way to semantically segment point clouds is still an open research question. In this study we attempted semantic segmentation of point cloud data with convolutional neural networks using only the raw measurements provided by a profiling laser scanner capable of multiple-echo detection. We formatted the measurements into a series of 2D rasters, where each raster contains the measurements (range, reflectance, echo deviation) of a single scanner mirror rotation, so that we could draw on the extensive research on semantic segmentation of 2D images with convolutional neural networks. A similar approach for a profiling laser scanner in a forest context has not been proposed before. A boreal forest in the Evo region near Hämeenlinna, Finland, was used as the experimental study area. The data was collected with the FGI Akhka-R3 backpack laser scanning system, georeferenced, and then manually labelled into ground, understorey, tree trunk and foliage classes for training and evaluation purposes. The labelled points were then transformed back to 2D rasters and used for training three different neural network architectures. Further, the same georeferenced data in point cloud format was used for training the state-of-the-art point cloud semantic segmentation network RandLA-Net, and the results were compared with those of our method. Our best semantic segmentation network reached a mean Intersection-over-Union of 80.1%, comparable to the 80.6% reached by the point-cloud-based RandLA-Net. The numerical results and visual analysis of the resulting point clouds show that our method is a valid way of performing semantic segmentation of point clouds, at least in the forest context. The labelled datasets were also released to the research community.
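The core data-formatting idea in the abstract — stacking the raw per-rotation measurements of a profiling scanner into image-like 2D rasters — can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the function name, the fixed angular binning, and the channel layout (range, reflectance, echo deviation) as the last axis are all assumptions for the example.

```python
import numpy as np

def profiles_to_raster(profiles):
    """Stack per-mirror-rotation scanner profiles into a 2D multi-channel raster.

    `profiles` is a sequence of (n_angles, 3) arrays, one per mirror rotation,
    with columns (range, reflectance, echo deviation). The result has shape
    (n_rotations, n_angles, 3), i.e. an "image" whose rows are successive
    rotations and whose three channels are the raw measurements, suitable as
    input to an image-segmentation CNN.
    """
    return np.stack([np.asarray(p, dtype=np.float32) for p in profiles], axis=0)

# Toy example: 4 mirror rotations, 8 angular bins, 3 raw channels.
rng = np.random.default_rng(0)
raster = profiles_to_raster([rng.random((8, 3)) for _ in range(4)])
print(raster.shape)  # (4, 8, 3)
```

In practice the real pipeline must also handle multiple echoes per pulse and variable numbers of returns per rotation, which this fixed-shape sketch deliberately ignores.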
dc.identifier.eissn: 2667-3932
dc.identifier.jour-issn: 2667-3932
dc.identifier.olddbid: 210949
dc.identifier.oldhandle: 10024/193976
dc.identifier.uri: https://www.utupub.fi/handle/11111/56710
dc.identifier.url: https://doi.org/10.1016/j.ophoto.2021.100011
dc.identifier.urn: URN:NBN:fi-fe2025082790712
dc.language.iso: en
dc.okm.affiliatedauthor: Kaartinen, Harri
dc.okm.discipline: 1171 Geosciences (en_GB)
dc.okm.discipline: 1171 Geotieteet (fi_FI)
dc.okm.internationalcopublication: not an international co-publication
dc.okm.internationality: International publication
dc.okm.type: A1 ScientificArticle
dc.publisher: Elsevier
dc.publisher.country: Netherlands (en_GB)
dc.publisher.country: Alankomaat (fi_FI)
dc.publisher.country-code: NL
dc.relation.articlenumber: 100011
dc.relation.doi: 10.1016/j.ophoto.2021.100011
dc.relation.ispartofjournal: ISPRS Open Journal of Photogrammetry and Remote Sensing
dc.relation.volume: 3
dc.source.identifier: https://www.utupub.fi/handle/10024/193976
dc.title: Semantic segmentation of point cloud data using raw laser scanner measurements and deep neural networks
dc.year.issued: 2022

Files

Name: 1-s2.0-S2667393221000119-main.pdf
Size: 8.17 MB
Format: Adobe Portable Document Format