Designing for Appropriate Reliance: The Role of Cognitive Bias and Trust in AI-Assisted Decision-Making
Roos, Nick (2025-08-15)
This publication is protected by copyright. The work may be read and printed for personal use. Commercial use is prohibited.
Open access
The permanent address of the publication is:
https://urn.fi/URN:NBN:fi-fe20251015101621
Abstract
As artificial intelligence (AI) becomes increasingly integrated into decision-making processes, understanding how users interact with AI systems is critical to ensure effective and appropriate reliance. This study investigates the psychological mechanisms behind human–AI interaction by examining how AI recommendation strength and explainability affect decision accuracy, cognitive bias, and appropriate reliance on AI advice. A between-subjects experimental design embedded within an online survey was used to manipulate five conditions of AI support, ranging from no AI to highly authoritative AI with varying levels of explanation. The study draws on dual-process theory and appropriate reliance theory, incorporating trust in AI as a moderator and cognitive bias as a mediator.
Data were collected from 163 participants across five experimental conditions: (1) Control group, (2) AI without explanation, (3) AI with explanation, (4) AI with detailed explanation, and (5) Strong AI recommendation. Regression analyses, including Hayes' mediation and moderation models (Models 2 and 4), were used to test the hypotheses, and a one-way ANOVA was conducted. Results reveal that trust in AI moderates the relationship between condition and appropriate reliance, and that cognitive biases, particularly automation bias and algorithm aversion, mediate this relationship. While direct effects of condition on reliance were limited, the indirect effects through cognitive bias were significant and robust. Additionally, excessive trust was associated with decreased decision accuracy and poorer reliance calibration.
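To make the analysis pipeline concrete, the following is a minimal sketch in Python (statsmodels) of the two tests described above: a bootstrap mediation analysis in the spirit of Hayes' Model 4, and a moderation test via an interaction term. The variable names (condition, trust, bias, reliance), the synthetic data, and the treatment of the five-level condition as a single numeric predictor are simplifying assumptions for illustration; this does not reproduce the thesis analyses.

# Illustrative OLS stand-ins for Hayes' PROCESS mediation (Model 4)
# and a moderation test, run on synthetic data. All variable names and
# effect sizes below are assumptions, not the thesis dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 163  # sample size reported in the abstract

# Synthetic stand-in data (five-level condition treated as numeric here).
df = pd.DataFrame({"condition": rng.integers(1, 6, size=n).astype(float)})
df["trust"] = rng.normal(0, 1, size=n)
df["bias"] = 0.4 * df["condition"] + rng.normal(0, 1, size=n)
df["reliance"] = (-0.5 * df["bias"] + 0.1 * df["condition"]
                  + 0.2 * df["condition"] * df["trust"]
                  + rng.normal(0, 1, size=n))

# Mediation (condition -> cognitive bias -> appropriate reliance):
# path a (condition -> mediator) and paths b, c' (mediator, condition -> outcome).
a = smf.ols("bias ~ condition", data=df).fit().params["condition"]
b = smf.ols("reliance ~ bias + condition", data=df).fit().params["bias"]

# Percentile bootstrap CI for the indirect effect a*b, as PROCESS reports.
boot = []
for _ in range(2000):
    s = df.sample(n=n, replace=True)
    a_s = smf.ols("bias ~ condition", data=s).fit().params["condition"]
    b_s = smf.ols("reliance ~ bias + condition", data=s).fit().params["bias"]
    boot.append(a_s * b_s)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {a * b:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")

# Moderation: does trust moderate the condition -> reliance relationship?
mod = smf.ols("reliance ~ condition * trust", data=df).fit()
print(mod.params[["condition", "trust", "condition:trust"]])

In practice, Hayes' PROCESS macro (in SPSS or R) would typically dummy-code the multicategorical condition rather than treat it as a single predictor, and would report comparable bootstrap intervals for the indirect effects.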
This thesis contributes to human–AI interaction literature by demonstrating that trust and cognitive biases critically influence user reliance on AI systems. While trust is generally beneficial, excessive trust can reduce appropriate reliance. Cognitive biases such as automation bias and algorithm aversion also negatively impact AI use. Practically, AI systems should support calibrated reliance through improved explainability, appropriate confidence signaling, and bias-reducing interfaces. The findings highlight AI reliance as both a technical and psychological challenge, calling for future research in real-world settings over extended periods.