Evaluating Deep Learning RGB-Based Panoptic Segmentation Models on LiDAR-Generated Images

dc.contributor.author: Adal, Sileshi
dc.contributor.department: Department of Computing (Tietotekniikan laitos)
dc.contributor.faculty: Faculty of Technology (Teknillinen tiedekunta)
dc.contributor.studysubject: Information and Communication Technology (Tieto- ja viestintätekniikka)
dc.date.accessioned: 2025-12-23T22:04:49Z
dc.date.available: 2025-12-23T22:04:49Z
dc.date.issued: 2025-12-12
dc.description.abstract: Panoptic segmentation, which combines semantic and instance segmentation, plays a vital role in scene understanding for applications such as autonomous driving, robotics, and urban mapping. While state-of-the-art deep learning models have achieved strong performance on RGB datasets, their generalizability to LiDAR-generated imagery remains underexplored. This thesis investigates how existing RGB-trained panoptic segmentation models perform on LiDAR-derived pseudo-RGB images. It begins with a structured review of leading architectures, training strategies, and benchmark results on RGB datasets. The selected models are then evaluated on LiDAR-generated data using metrics such as Panoptic Quality (PQ), Segmentation Quality (SQ), Recognition Quality (RQ), Intersection over Union (IoU), and inference efficiency, complemented by qualitative visualizations of the output masks. A pseudo-RGB LiDAR dataset was used to simulate cross-modal testing conditions and to assess model robustness when applied to LiDAR data, which differs significantly from the RGB domain the models were trained on. The results reveal that RGB-trained panoptic segmentation models suffer notable performance degradation on LiDAR-generated imagery, primarily due to this domain gap and the lack of sensor-specific adaptation. Differences in instance recognition, boundary accuracy, and category consistency were observed across models, as reflected in PQ, SQ, RQ, and IoU scores, as well as in the qualitative outputs. These findings offer a foundational reference for future research and aim to contribute to the development of more versatile and effective deep learning models for panoptic segmentation across diverse data types.
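The abstract evaluates models with the standard panoptic segmentation metrics, where PQ factors into SQ (mean IoU over matched segments) and RQ (an F1-style detection score). As a minimal sketch of that relationship, the helper below computes all three from a list of true-positive IoUs plus unmatched-segment counts; the function name and inputs are illustrative, not taken from the thesis:

```python
def panoptic_quality(tp_ious, num_fp, num_fn):
    """Compute (PQ, SQ, RQ) for one category.

    tp_ious: IoU values of matched prediction/ground-truth segment
             pairs (true positives, matched with IoU > 0.5).
    num_fp:  count of unmatched predicted segments (false positives).
    num_fn:  count of unmatched ground-truth segments (false negatives).
    """
    tp = len(tp_ious)
    denom = tp + 0.5 * num_fp + 0.5 * num_fn
    if denom == 0:
        return 0.0, 0.0, 0.0  # no segments of this category at all
    sq = sum(tp_ious) / tp if tp else 0.0  # mean IoU over matches
    rq = tp / denom                        # recognition (F1-style) term
    return sq * rq, sq, rq                 # PQ = SQ * RQ
```

For example, two matches with IoUs 0.8 and 0.6 plus one false positive and one false negative give SQ = 0.7, RQ = 2/3, PQ ≈ 0.467.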
dc.format.extent: 85
dc.identifier.olddbid: 211889
dc.identifier.oldhandle: 10024/194908
dc.identifier.uri: https://www.utupub.fi/handle/11111/17354
dc.identifier.urn: URN:NBN:fi-fe20251222123460
dc.language.iso: eng
dc.rights: This publication is copyrighted. You may download, display and print it for your own personal use. Commercial use is prohibited.
dc.rights.accessrights: open access (avoin)
dc.source.identifier: https://www.utupub.fi/handle/10024/194908
dc.subject: panoptic segmentation, LiDAR images, pseudo-RGB, deep learning, RGB-trained models, cross-domain evaluation, PQ, mIoU
dc.title: Evaluating Deep Learning RGB-Based Panoptic Segmentation Models on LiDAR-Generated Images
dc.type.ontasot: Master's thesis (Pro gradu -tutkielma)

Files

Name: Adal_Sileshi_Thesis.pdf
Size: 8.01 MB
Format: Adobe Portable Document Format