Mapping the Impact of Biases in Large Language Model Chatbots on User Satisfaction
Vicente, Olayinka (2025-05-21)
This publication is subject to copyright regulations. The work may be read and printed for personal use. Commercial use is prohibited.
open
The permanent address of the publication is:
https://urn.fi/URN:NBN:fi-fe2025053056331
Abstract
This thesis explores how biases in Large Language Model (LLM) chatbots have evolved over time and how they affect user satisfaction. Through a structured literature review of 89 peer-reviewed articles, the study maps the historical trajectory of chatbot technologies—from rule-based systems to advanced LLMs—and reviews the persistence and emergence of different bias types based on their sources. It also explores user trust, experience, and perceptions in relation to biased interactions, while assessing the evolution and limitations of various mitigation strategies.
The findings reveal that biases in chatbots often persist across generations of systems and have grown more complex, with recently emergent behaviours linked to training data, algorithmic design, and human interaction. Despite improvements in mitigation techniques, inconsistencies and ethical gaps remain. This study contributes to AI fairness and user experience research by proposing an evolution-informed understanding of bias in chatbots and by offering recommendations for research on ethically aligned, user-sensitive chatbot development.