Student-Perceived Trust in Math Problem Solving in Human-AI Collaboration: A Comparison Study to Understand Trust and Interpersonal Trust Dynamics
Farmahini Farahani, Shirin (2025-07-03)
This publication is subject to copyright regulations. The work may be read and printed for personal use. Commercial use is prohibited.
Access: closed
The permanent address of the publication is:
https://urn.fi/URN:NBN:fi-fe2025070477772
Abstract
This master's thesis addresses the critical and underexplored area of student-perceived trust in mathematical problem solving within human-AI collaboration. While advanced large language models such as ChatGPT are rapidly being integrated into educational contexts, there remains a significant gap in understanding the complex dynamics of student trust in AI-generated content and its subsequent impact on interpersonal trust when AI-assisted work is shared.
The research aims to identify key factors influencing students' trust in ChatGPT's mathematical solutions. It further seeks to understand how the correctness of ChatGPT-generated solutions impacts both human-AI (H-AI) trust and human-human-using-AI (H-HAI) interpersonal trust, and to examine the relationship between specific types of errors (conceptual vs. calculation) and student trust in ChatGPT's mathematical competency. Additionally, the research explores how perceived transparency, or the lack thereof, regarding ChatGPT usage influences H-HAI interpersonal trust concerning a student's mathematical abilities.
A quantitative approach was applied, utilizing a 45-item Likert-scale questionnaire administered to students at the University of Turku with prior experience using ChatGPT. Students evaluated ChatGPT-generated math solutions and assessed trust implications across both human-AI and ChatGPT-mediated human-human scenarios. The findings illuminate the complex interplay between correct and erroneous solutions (conceptual vs. calculation errors), the effect of perceived transparency or its absence, and the role of output quality in calibrating trust.
Similar items
Showing items with similar titles, authors, or keywords.
- Accountability as a Warrant for Trust: An Experiment on Sanctions and Justifications in a Trust Game
  Setälä, Maija; Lappalainen, Olli; Ylisalo, Juha; Herne, Kaisa. Accountability is present in many types of social relations; for example, the accountability of elected representatives to voters is the key characteristic of representative democracy. We distinguish between two ...
- Explaining trust and acceptance of AI in digital security: The influence of explainable AI on trust and technology acceptance in cybersecurity tools.
  Haesen, Twan (24.01.2025). Explainable AI (XAI) has gained significant recognition in recent years, shaping the development of artificial intelligence (AI) systems by promoting transparency and interpretability. As AI-driven cybersecurity tools ... (open access)