Let's Ask AI About Their Programs: Exploring ChatGPT's Answers To Program Comprehension Questions
Access rights
openAccess
publishedVersion
A4 Artikkeli konferenssijulkaisussa
This publication is imported from Aalto University research portal.
Date
2024-05-24
Language
en
Pages
12
Series
Proceedings of the 46th International Conference on Software Engineering: Software Engineering Education and Training, pp. 221-232
Abstract
Recent research has explored the creation of questions from code submitted by students. These Questions about Learners' Code (QLCs) are created through program analysis: execution paths are explored, and code comprehension questions are then generated from these paths and the broader code structure. Responding to the questions requires reading and tracing the code, which is known to support students' learning. At the same time, computing education researchers have witnessed the emergence of Large Language Models (LLMs) that have taken the community by storm. Researchers have demonstrated the applicability of these models especially in the introductory programming context, outlining their performance in solving introductory programming problems and their utility in creating new learning resources. In this work, we explore the capability of state-of-the-art LLMs (GPT-3.5 and GPT-4) in answering QLCs that are generated from code that the LLMs have themselves created. Our results show that although state-of-the-art LLMs can create programs and trace program execution when prompted, they easily succumb to errors similar to those previously recorded for novice programmers. These results demonstrate the fallibility of these models and perhaps dampen the expectations fueled by the recent LLM hype. At the same time, we also highlight future research possibilities, such as using LLMs to mimic students, as their behavior can indeed be similar for some specific tasks.
Description
Publisher Copyright: © 2024 Owner/Author.
Keywords
QLCs, artificial intelligence, introductory programming, large language models, program comprehension
Other note
Citation
Lehtinen, T., Koutcheme, C. & Hellas, A. 2024, Let's Ask AI About Their Programs: Exploring ChatGPT's Answers To Program Comprehension Questions. In Proceedings of the 46th International Conference on Software Engineering: Software Engineering Education and Training. ACM, pp. 221-232, International Conference on Software Engineering, Lisbon, Portugal, 14/04/2024. https://doi.org/10.1145/3639474.3640058