"You Always Get an Answer" : Analyzing Users' Interaction with AI-Generated Personas Given Unanswerable Questions and Risk of Hallucination

dc.contributor.authorKaate, Ilkka
dc.contributor.authorSalminen, Joni
dc.contributor.authorJung, Soon-Gyo
dc.contributor.authorXuan, Trang Thi Thu
dc.contributor.authorHäyhänen, Essi
dc.contributor.authorAzem, Jinan Y.
dc.contributor.authorJansen, Bernard J.
dc.contributor.organizationfi=markkinointi|en=Marketing|
dc.contributor.organization-code1.2.246.10.2458963.20.50826905346
dc.converis.publication-id491881323
dc.converis.urlhttps://research.utu.fi/converis/portal/Publication/491881323
dc.date.accessioned2025-08-28T00:05:38Z
dc.date.available2025-08-28T00:05:38Z
dc.description.abstractIn a 54-user within-subjects experiment, we investigated the presence and acceptance of hallucinations (i.e., accidental misinformation) in an AI-generated persona system that leverages large language models to create personas from survey data. After interacting with the personas, users were tasked with asking them a series of questions, including an unanswerable question, meaning one the personas lacked the data to answer. The AI-generated persona system provided a plausible but incorrect answer roughly half (52%) of the time; more than half of the time (57%), users accepted the incorrect answer, and the rest of the time, users answered the unanswerable question correctly (no answer). We found that when the AI-generated persona hallucinated, users were significantly more likely to answer the unanswerable question incorrectly; analyzing genders separately, this effect held for both female and male users. We identified four themes in the AI-generated personas' answers and found that users perceived the answers to the unanswerable question as long and unclear. These findings imply that personas leveraging LLMs require guardrails ensuring that the personas clearly state possible data restrictions and the risk of hallucination when asked unanswerable questions.
dc.format.pagerange1624-1638
dc.identifier.isbn979-8-4007-1306-4
dc.identifier.olddbid205163
dc.identifier.oldhandle10024/188190
dc.identifier.urihttps://www.utupub.fi/handle/11111/54004
dc.identifier.urlhttps://doi.org/10.1145/3708359.3712160
dc.identifier.urnURN:NBN:fi-fe2025082790860
dc.language.isoen
dc.okm.affiliatedauthorKaate, Ilkka
dc.okm.discipline113 Computer and information sciencesen_GB
dc.okm.discipline512 Business and managementen_GB
dc.okm.discipline113 Tietojenkäsittely ja informaatiotieteetfi_FI
dc.okm.discipline512 Liiketaloustiedefi_FI
dc.okm.internationalcopublicationinternational co-publication
dc.okm.internationalityInternational publication
dc.okm.typeA4 Conference Article
dc.publisher.countryUnited Statesen_GB
dc.publisher.countryYhdysvallat (USA)fi_FI
dc.publisher.country-codeUS
dc.relation.conferenceInternational Conference on Intelligent User Interfaces
dc.relation.doi10.1145/3708359.3712160
dc.relation.ispartofjournalInternational Conference on Intelligent User Interfaces
dc.source.identifierhttps://www.utupub.fi/handle/10024/188190
dc.title"You Always Get an Answer" : Analyzing Users' Interaction with AI-Generated Personas Given Unanswerable Questions and Risk of Hallucination
dc.title.bookIUI '25: Proceedings of the 30th International Conference on Intelligent User Interfaces
dc.year.issued2025

Files

Name:
3708359.3712160.pdf
Size:
963.11 KB
Format:
Adobe Portable Document Format