
Case Study on Transfer Learning for ECG-Based Classification of Cardiac Conditions

Patino, Chito Lim (2025-06-18)

Patino_Chito_Thesis.pdf (1.064Mb)

This publication is subject to copyright. The work may be read and printed for personal use. Commercial use is prohibited.
Access: closed
The permanent address of the publication is:
https://urn.fi/URN:NBN:fi-fe2025062473286
Abstract
The electrocardiogram (ECG) is an increasingly important diagnostic tool amid the continued global rise of cardiovascular disease, and ensuring the accuracy of its interpretation is just as significant. Machine learning (ML), particularly deep learning, fills the vital need for fast and reliable ECG interpretation. However, due to a lack of data, not all institutions are able to develop their own ML models. Transfer learning could address such data availability limitations: it taps the knowledge of existing trained models and creates an improved model by fine-tuning on related target data.
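The pretrain-then-fine-tune workflow described above can be illustrated with a minimal, framework-free sketch. The thesis itself used residual networks on ECG signals; the toy logistic-regression model, the synthetic one-dimensional data, and all parameter values below are illustrative assumptions, not the author's actual setup.

```python
import math
import random

def train(data, w=0.0, b=0.0, lr=0.1, epochs=200):
    """Logistic regression on (x, label) pairs via plain gradient descent.

    Passing nonzero w, b corresponds to starting from pretrained weights,
    i.e. fine-tuning, rather than training from scratch.
    """
    for _ in range(epochs):
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid prediction
            w -= lr * (p - y) * x                     # gradient step on w
            b -= lr * (p - y)                         # gradient step on b
    return w, b

def accuracy(data, w, b):
    hits = sum((1.0 / (1.0 + math.exp(-(w * x + b))) >= 0.5) == bool(y)
               for x, y in data)
    return hits / len(data)

random.seed(0)
# Large "source" task: two overlapping classes (stand-in for a big ECG corpus).
source = ([(random.gauss(-1.0, 0.5), 0) for _ in range(200)] +
          [(random.gauss(+1.0, 0.5), 1) for _ in range(200)])
# Small related "target" task: same structure, slightly shifted means.
target = ([(random.gauss(-0.8, 0.5), 0) for _ in range(5)] +
          [(random.gauss(+0.8, 0.5), 1) for _ in range(5)])

w_src, b_src = train(source)                  # pretraining on the source task
w_ft, b_ft = train(target, w=w_src, b=b_src)  # fine-tune from source weights
w_sc, b_sc = train(target)                    # baseline: train from scratch
```

The key design point is that fine-tuning and from-scratch training differ only in their initial weights; with very little target data, the pretrained starting point carries over knowledge from the source task, which is the effect the data-size experiments below measure at a much larger scale.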

This thesis examined how to apply transfer learning most effectively to ECG-based classification of cardiac conditions. Using residual network models, it investigated how data quantity, task similarity, and differences in diagnostic coding systems affect transfer learning performance. The data-size experiments confirmed the superior performance of fine-tuned models when training data was limited, but also revealed a gradual loss of that advantage, and even worse performance than models trained from scratch, as the data became sufficiently large. Fine-tuned models showed stable class-level and overall performance and avoided sudden performance drops as data decreased. Meanwhile, reduced similarity between the source and target datasets degraded the accuracy of fine-tuned models, especially when data was limited; target classes absent from the source domain contributed most to the degraded performance. Furthermore, fine-tuned models still achieved consistently high performance across varying target training sizes even when the source and target datasets used different diagnostic coding standards, as long as the diseases were the same. Performance dropped when data was limited and not all target classes were present in the differently labeled source dataset.

The experiments revealed that, while transfer learning could yield steadily high accuracy at different target data sizes, low task similarity could reduce fine-tuned model performance. Future studies could explore how deep fine-tuning, longer training time, and alternative deep learning architectures perform when dealing with dissimilar target and source domains. Domain adaptation could also be explored to bridge divergent source and target domains and improve transfer learning performance.
Collections
  • Pro gradu -tutkielmat ja diplomityöt sekä syventävien opintojen opinnäytetyöt (rajattu näkyvyys) [5097]

Turun yliopiston kirjasto | Turun yliopisto
julkaisut@utu.fi | Data protection | Accessibility statement
 

 
