Evaluating Large Language Models in Generating Synthetic HCI Research Data: a Case Study

Access rights
openAccess
A4 Article in conference proceedings
Date
2023-04-19
Language
en
Pages
19
Series
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23)
Abstract
Collecting data is one of the bottlenecks of Human-Computer Interaction (HCI) research. Motivated by this, we explore the potential of large language models (LLMs) in generating synthetic user research data. We use OpenAI’s GPT-3 model to generate open-ended questionnaire responses about experiencing video games as art, a topic not tractable with traditional computational user models. We test whether synthetic responses can be distinguished from real responses, analyze errors of synthetic data, and investigate content similarities between synthetic and real data. We conclude that GPT-3 can, in this context, yield believable accounts of HCI experiences. Given the low cost and high speed of LLM data generation, synthetic data should be useful in ideating and piloting new experiments, although any findings must obviously always be validated with real data. The results also raise concerns: if employed by malicious users of crowdsourcing services, LLMs may make crowdsourcing of self-report data fundamentally unreliable.
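
The study generates its synthetic responses by prompting GPT-3 through OpenAI's API. A minimal sketch of how one such synthetic open-ended questionnaire response might be generated is shown below; it assumes the legacy OpenAI Completions API (openai-python < 1.0), and the model name, prompt wording, and sampling parameters are illustrative assumptions, not the authors' actual configuration.

```python
# Hypothetical sketch: one synthetic open-ended questionnaire response
# via OpenAI's legacy Completions API (openai-python < 1.0).
# Model, prompt, and sampling parameters are assumptions for illustration,
# not the configuration used in the paper.
import openai

openai.api_key = "YOUR_API_KEY"

prompt = (
    "You are a participant in a user study on experiencing video games as art.\n"
    "Question: Describe a game you consider a work of art, and explain what\n"
    "made the experience artistic for you.\n"
    "Answer:"
)

response = openai.Completion.create(
    model="text-davinci-003",  # assumed GPT-3 variant
    prompt=prompt,
    max_tokens=300,            # cap the length of the free-text answer
    temperature=0.9,           # high temperature for varied, human-like responses
)

print(response["choices"][0]["text"].strip())
```

Repeating such a call with varied simulated-participant framings would produce a pool of synthetic self-reports, which is what makes both the piloting use case and the crowdsourcing concern raised in the abstract concrete.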
Citation
Hämäläinen, P., Tavast, M. & Kunnari, A. 2023, 'Evaluating Large Language Models in Generating Synthetic HCI Research Data: a Case Study', in Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23), Article 433, ACM, ACM SIGCHI Annual Conference on Human Factors in Computing Systems, Hamburg, Germany, 23/04/2023. https://doi.org/10.1145/3544548.3580688