Propagating Large Language Models Programming Feedback
Access rights
openAccess
Conference article in proceedings
This publication is imported from the Aalto University research portal.
Author
Koutcheme, C.; Hellas, A.
Date
2024-07-09
Language
en
Pages
5
Abstract
Large language models (LLMs) such as GPT-4 have emerged as promising tools for providing programming feedback. However, effective deployment of LLMs in massive classes and Massive Open Online Courses (MOOCs) raises financial concerns, calling for methods that minimize the number of calls to the APIs and systems serving such powerful models. In this article, we revisit the problem of 'propagating feedback' within the contemporary landscape of LLMs. Specifically, we explore feedback propagation as a way to reduce the cost of leveraging LLMs to provide programming feedback at scale. Our study investigates the effectiveness of this approach in the context of students requiring next-step hints for Python programming problems, presenting initial results that support its viability. We discuss the implications of our findings and suggest directions for future research on optimizing feedback mechanisms for large-scale educational environments.
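To make the idea concrete, below is a minimal Python sketch of feedback propagation as the abstract frames it: cache LLM-generated next-step hints and reuse them for sufficiently similar student programs, issuing a new API call only on a cache miss. The similarity measure (a token-level diff ratio), the 0.9 threshold, and the call_llm placeholder are illustrative assumptions, not the paper's actual method.

import difflib

# Cache of (normalized program, hint) pairs built up as hints are generated.
hint_cache: list[tuple[str, str]] = []

def normalize(code: str) -> str:
    # Crude normalization: drop comments, blank lines, and surrounding whitespace.
    lines = [ln.strip() for ln in code.splitlines()]
    return "\n".join(ln for ln in lines if ln and not ln.startswith("#"))

def call_llm(code: str) -> str:
    # Hypothetical stand-in for a real LLM API request (e.g., GPT-4)
    # that produces a next-step hint for the given student program.
    return f"<LLM-generated next-step hint for {len(code)} chars of code>"

def next_step_hint(student_code: str, threshold: float = 0.9) -> str:
    norm = normalize(student_code)
    # Propagate: reuse the hint attached to a sufficiently similar cached program.
    for cached_code, cached_hint in hint_cache:
        if difflib.SequenceMatcher(None, norm, cached_code).ratio() >= threshold:
            return cached_hint
    # No close match: pay for one LLM call and cache the result for reuse.
    hint = call_llm(student_code)
    hint_cache.append((norm, hint))
    return hint

Under this framing, the potential savings grow with class size: in massive classes and MOOCs, many submissions to the same exercise are near-duplicates, so a single LLM-generated hint can serve many students.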
Description
Publisher Copyright: © 2024 Owner/Author.
Keywords
computer science education, large language models, programming feedback
Citation
Koutcheme, C & Hellas, A 2024, 'Propagating Large Language Models Programming Feedback', in L@S 2024 - Proceedings of the 11th ACM Conference on Learning @ Scale, ACM, pp. 366-370, ACM Conference on Learning @ Scale, Atlanta, Georgia, United States, 18/07/2024. https://doi.org/10.1145/3657604.3664665