Peatland pixel-level classification via multispectral, multiresolution and multisensor data using convolutional neural network

Zelioli, Luca; Farahnakian, Fahimeh; Middleton, Maarit; Pitkänen, Timo P.; Tuominen, Sakari; Nevalainen, Paavo; Pohjankukka, Jonne; Heikkonen, Jukka

View/Open
1-s2.0-S1574954125002420-main.pdf (6.545Mb)

Elsevier BV
doi:10.1016/j.ecoinf.2025.103233
URI
https://doi.org/10.1016/j.ecoinf.2025.103233
The permanent address of the publication is:
https://urn.fi/URN:NBN:fi-fe2025082789800
Abstract

High-resolution mapping of boreal peatlands is crucial for greenhouse gas inventories, ecological monitoring, and sustainable land management. However, accurately classifying peatland ecotypes at large scales remains challenging due to the complex phenological changes, dense tree canopies, water table level variations, and the mosaicked structure of vegetation communities typical of these landscapes. To address these challenges, we propose a novel multi-modal convolutional neural network (CNN) architecture designed specifically for pixel-level peatland classification. The motivation behind this research stems from the need for improved accuracy in peatland site type and fertility level mapping, which is vital for effective environmental decision-making. The core strategy of our method involves a late fusion architecture that seamlessly integrates multi-source remote sensing (RS) data, including optical imagery, synthetic aperture radar (SAR), airborne laser scanning (ALS), and multi-source national forest inventory (MS-NFI) datasets. These diverse data sources, characterized by different spatial resolutions, are fused to preserve their spatial integrity, enabling richer feature extraction for classification tasks. Additionally, a sliding-window approach is applied to manage multi-resolution datasets, enhancing pixel-wise classification by preserving spatial and contextual relationships. We evaluated the proposed architecture across three diverse peatland zones in Finland, demonstrating its capability to generalize across varying ecological conditions. Experimental results indicate classification accuracies for peatland site types and fertility levels ranging from 36.6% to 55.0%, highlighting the effectiveness of our approach even with limited labeled training samples. Canopy height models, Sentinel-2 bands, and Sentinel-1 bands emerged as the most influential data sources for accurate classification. Our findings underscore the potential of integrating multi-source RS data with advanced CNN architectures for large-scale peatland mapping. Future work will focus on incorporating LiDAR-derived vegetation structural indices, hyperspectral RS data, and expanding the training dataset to further enhance classification performance.
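
The late-fusion, sliding-window idea in the abstract can be illustrated with a minimal sketch: each data source gets its own small convolutional branch that processes a patch centred on the target pixel, the branch features are concatenated (late fusion), and a shared classifier predicts the centre-pixel label. This is not the authors' implementation; the band counts, patch sizes, branch width, and number of classes below are illustrative assumptions.

```python
# Minimal late-fusion CNN sketch (PyTorch); configuration values are assumptions,
# not the configuration used in the paper.
import torch
import torch.nn as nn

def conv_branch(in_channels: int, width: int = 32) -> nn.Sequential:
    """Small per-sensor feature extractor applied to a sliding-window patch."""
    return nn.Sequential(
        nn.Conv2d(in_channels, width, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(width, width, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.AdaptiveAvgPool2d(1),  # collapse the patch to one feature vector
        nn.Flatten(),
    )

class LateFusionCNN(nn.Module):
    """One branch per data source; branch features are concatenated (late fusion)
    and fed to a classifier that predicts the label of the centre pixel."""
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.optical = conv_branch(in_channels=10)  # e.g. Sentinel-2 bands
        self.sar = conv_branch(in_channels=2)       # e.g. Sentinel-1 VV/VH
        self.als = conv_branch(in_channels=1)       # e.g. ALS canopy height model
        self.classifier = nn.Sequential(
            nn.Linear(3 * 32, 64),
            nn.ReLU(inplace=True),
            nn.Linear(64, n_classes),
        )

    def forward(self, optical, sar, als):
        # Each input is a patch centred on the pixel to classify; patches from
        # different sensors may have different spatial sizes (resolutions).
        feats = torch.cat(
            [self.optical(optical), self.sar(sar), self.als(als)], dim=1
        )
        return self.classifier(feats)

# Example: a batch of 16 pixels, each described by sensor patches of
# different spatial resolution around the same location.
model = LateFusionCNN(n_classes=5)
logits = model(
    torch.randn(16, 10, 9, 9),   # optical patch
    torch.randn(16, 2, 9, 9),    # SAR patch
    torch.randn(16, 1, 15, 15),  # canopy height patch (finer resolution)
)
print(logits.shape)  # torch.Size([16, 5])
```

The adaptive pooling step is one simple way to let branches accept patches of different sizes, so each source can be kept at its native resolution before fusion, in the spirit of the multi-resolution fusion described above.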

Collections
  • Rinnakkaistallenteet [27094]
