Explaining trust and acceptance of AI in digital security : The influence of explainable AI on trust and technology acceptance in cybersecurity tools.

Haesen, Twan (2025-01-24)

View/Open
Haesen_Twan_Thesis.pdf (1.403 MB)
Downloads:

This publication is subject to copyright. The work may be read and printed for personal use. Commercial use is prohibited.
Open access
The permanent address of the publication is:
https://urn.fi/URN:NBN:fi-fe2025050838572
Abstract
Explainable AI (XAI) has gained significant recognition in recent years, shaping the development of artificial intelligence (AI) systems by promoting transparency and interpretability. As AI-driven cybersecurity tools become increasingly widespread, understanding the influence of XAI on user trust and technology acceptance is crucial. This study examines the relationship between XAI, trust, and the acceptance of cybersecurity tools, aiming to determine whether the presence of XAI enhances trust and facilitates greater adoption of AI-driven security measures.
To investigate these relationships, an online survey was conducted with 52 valid respondents, ranging in age from 18 to 66 (M = 35.08, SD = 15.25). The questionnaire used scales established in previous research for measuring trust and technology acceptance, with participants randomly assigned to scenarios involving an AI-based cybersecurity tool for phishing detection, either with or without XAI-based explanations. The results indicate a significant positive relationship between trust and technology acceptance, reinforcing previous findings that trust plays a critical role in user adoption of AI technologies. However, contrary to expectations, the presence of XAI did not strengthen this relationship. This unexpected finding suggests that explanations provided by XAI may not always be intuitive or beneficial to users, potentially due to information overload, cognitive complexity, or the lack of a clear and actionable explanation format.
These findings highlight the role of XAI in cybersecurity applications and challenge the assumption that increased explainability always leads to higher trust. Instead, they suggest that the effectiveness of XAI in increasing trust may depend on factors such as the complexity of the AI model, the clarity of the explanations provided, and the technical expertise of the end user. This study contributes to the growing body of research on human-AI interaction, emphasizing the need for further investigation into the design of explainability methods that balance transparency, usability, and trustworthiness in cybersecurity AI. Future research should explore alternative XAI approaches, assess the impact of contextual factors, and examine how different user demographics respond to AI explanations in real-world security settings.
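For readers curious what testing this kind of moderation effect looks like in practice, the sketch below fits a regression with an interaction term between trust and the XAI condition. The variable names, the toy data, and the choice of an OLS interaction model are assumptions made for illustration; the abstract does not state which statistical procedure the thesis used.

# A minimal sketch of a moderation test consistent with the design described
# in the abstract: does the XAI condition (0/1) change the strength of the
# trust -> acceptance relationship? Variable names, toy data, and the use of
# an OLS interaction model are illustrative assumptions, not the thesis's
# actual analysis.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data: one row per respondent.
df = pd.DataFrame({
    "trust":      [3.2, 4.1, 2.8, 4.5, 3.9, 2.5, 4.0, 3.1],  # trust-scale mean
    "acceptance": [3.0, 4.3, 2.6, 4.4, 4.0, 2.7, 4.1, 3.2],  # acceptance-scale mean
    "xai":        [0, 1, 0, 1, 0, 1, 0, 1],                  # 1 = XAI condition
})

# "trust * xai" expands to trust + xai + trust:xai; the interaction term
# tests whether the trust slope differs between the two conditions.
model = smf.ols("acceptance ~ trust * xai", data=df).fit()
print(model.summary())

# A non-significant trust:xai coefficient would mirror the reported finding
# that the presence of XAI did not strengthen the trust-acceptance link.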
Collections
  • Pro gradu -tutkielmat ja diplomityöt sekä syventävien opintojen opinnäytetyöt (kokotekstit) [9162]

Turun yliopiston kirjasto | Turun yliopisto
julkaisut@utu.fi | Data protection | Accessibility statement