Real-time Embedded Human Activity Recognition in a Military Context: an Exploratory Investigation

dc.contributor.authorDeenen, Tristan
dc.contributor.departmentfi=Tietotekniikan laitos|en=Department of Computing|
dc.contributor.facultyfi=Teknillinen tiedekunta|en=Faculty of Technology|
dc.contributor.studysubjectfi=Tietotekniikka|en=Information and Communication Technology|
dc.date.accessioned2024-10-16T21:05:14Z
dc.date.available2024-10-16T21:05:14Z
dc.date.issued2024-10-14
dc.description.abstractThe EU Defence Industry funds numerous research projects, including the LODESTAR project. In this thesis, we explore the feasibility of developing real-time, body-worn artificial intelligence capable of classifying military activities, a crucial component of LODESTAR. Furthermore, we aim to explore the similarities between civilian and soldier activity recognition. To this end, we identified the most important distinction in activity, namely aiming versus not aiming, and recorded a dataset named LODESTAR v2. This dataset consists of recordings from 6 civilians and 3 soldiers performing the activities stand, aim standing, crouch, aim crouching, kneel, aim kneeling, walk, and run, for a total of 427 minutes of recording time. Next, we performed a state-of-the-art literature review of embedded models. Our selected model, a fully convolutional network (FCN) fine-tuned via grid search, achieves an accuracy of 91.52% and an F1-score of 91.50%. Additionally, the model meets the exploratory hardware constraints for embedded usage. Hence, human activity recognition in this scenario can likely be solved with low-power, on-body equipment. Despite the model’s ability to distinguish between activities such as aiming and not aiming, significant inter-person variance was observed. Differences between civilian and soldier data highlight the need for more high-quality field data from soldiers to enhance model generalization. Furthermore, while our study demonstrates that a small model suffices in classification power, model optimization techniques have not been extensively explored and could reduce hardware requirements significantly further.
dc.format.extent75
dc.identifier.olddbid196089
dc.identifier.oldhandle10024/179136
dc.identifier.urihttps://www.utupub.fi/handle/11111/24982
dc.identifier.urnURN:NBN:fi-fe2024101680628
dc.language.isoeng
dc.rightsfi=Julkaisu on tekijänoikeussäännösten alainen. Teosta voi lukea ja tulostaa henkilökohtaista käyttöä varten. Käyttö kaupallisiin tarkoituksiin on kielletty.|en=This publication is copyrighted. You may download, display and print it for Your own personal use. Commercial use is prohibited.|
dc.rights.accessrightssuljettu
dc.source.identifierhttps://www.utupub.fi/handle/10024/179136
dc.subjecthuman activity recognition, deep learning, fully convolutional network, wearable sensors, low-power devices, edge computing, military
dc.titleReal-time Embedded Human Activity Recognition in a Military Context: an Exploratory Investigation
dc.type.ontasotfi=Diplomityö|en=Master's thesis|

Files

Name: Deenen_Tristan_Thesis.pdf
Size: 12.31 MB
Format: Adobe Portable Document Format