"You Always Get an Answer" : Analyzing Users' Interaction with AI-Generated Personas Given Unanswerable Questions and Risk of Hallucination

Kaate, Ilkka; Salminen, Joni; Jung, Soon-Gyo; Xuan, Trang Thi Thu; Häyhänen, Essi; Azem, Jinan Y.; Jansen, Bernard J.

View/Open
3708359.3712160.pdf (963.1Kb)
Downloads:

doi:10.1145/3708359.3712160
URI
https://doi.org/10.1145/3708359.3712160
Show full item record
The permanent address of the publication is:
https://urn.fi/URN:NBN:fi-fe2025082790860
Abstract
We investigated the presence and acceptance of hallucinations (i.e., accidental misinformation) in an AI-generated persona system that leverages large language models (LLMs) to create personas from survey data, in a 54-user within-subjects experiment. After interacting with the personas, users were given the task of asking the personas a series of questions, including an unanswerable question, meaning the personas lacked the data to answer it. The AI-generated persona system provided a plausible but incorrect answer about half (52%) of the time, and more than half of the time (57%) the users accepted the incorrect answer; the rest of the time, users answered the unanswerable question correctly (no answer). We found that when the AI-generated persona hallucinated, users were significantly more likely to answer the unanswerable question incorrectly. Analyzed by gender, both female and male users were significantly more likely to answer the unanswerable question incorrectly when the persona hallucinated. We identified four themes in the AI-generated personas' answers and found that users perceived the personas' answers to the unanswerable question as long and unclear. The findings imply that personas leveraging LLMs require guardrails ensuring that the personas clearly state possible data restrictions and the risk of hallucination when asked unanswerable questions.
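The guardrail the abstract calls for could, as a minimal illustrative sketch, take the form of a pre-answer coverage check: before delegating to the LLM, verify that the persona's underlying survey data touches the question's topic, and otherwise return an explicit "no data" disclaimer. All names and the keyword-matching heuristic below are hypothetical assumptions, not the paper's implementation:

```python
# Hypothetical hallucination guardrail for an LLM-based persona:
# answer only when the persona's survey data covers the question's topic;
# otherwise disclose the data restriction instead of letting the model
# produce a plausible but unsupported (hallucinated) answer.

def guarded_answer(question: str, persona_data: dict[str, str],
                   llm_answer) -> str:
    """Answer only from topics present in persona_data."""
    # Naive topic check: does any known data field name appear in the question?
    covered = [topic for topic in persona_data
               if topic.lower() in question.lower()]
    if not covered:
        return ("I don't have data to answer that. My profile is based on "
                "survey responses, which may not cover this topic, and any "
                "answer I gave could be a hallucination.")
    # Delegate to the LLM, restricted to the covered data fields.
    context = {t: persona_data[t] for t in covered}
    return llm_answer(question, context)

# Usage with a stub in place of a real LLM call:
data = {"age": "34", "occupation": "teacher", "hobbies": "hiking"}
stub = lambda q, ctx: f"Based on my data ({ctx}), ..."
print(guarded_answer("What is your favorite movie?", data, stub))  # no-data reply
print(guarded_answer("What are your hobbies?", data, stub))        # delegated
```

A production guardrail would need a far more robust coverage test than keyword matching (e.g., semantic similarity against the survey fields), but the structural point stands: the refusal path is decided before the model generates text.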
Collections
  • Rinnakkaistallenteet [27094]

Turku University Library | University of Turku
julkaisut@utu.fi | Privacy notice | Accessibility statement