Evaluating the Performance of Code Generation Models for Solving Parsons Problems with Small Prompt Variations

Access rights
openAccess
A4 Article in a conference publication
Date
2023-06-29
Language
en
Pages
299-305 (7 pages)
Series
ITiCSE 2023 - Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education
Abstract
The recent emergence of code generation tools powered by large language models has attracted wide attention. Models such as OpenAI Codex can take natural language problem descriptions as input and generate highly accurate source code solutions, with potentially significant implications for computing education. Given the many complexities that students face when learning to write code, they may quickly become reliant on such tools without properly understanding the underlying concepts. One popular approach for scaffolding the code writing process is to use Parsons problems, which present solution lines of code in a scrambled order. These remove the complexities of low-level syntax and allow students to focus on algorithmic and design-level problem solving. It is unclear how well code generation models can be applied to solve Parsons problems, given the mechanics of these models and prior evidence that they underperform when problems include specific restrictions. In this paper, we explore the performance of the Codex model for solving Parsons problems across various prompt variations. Using a corpus of Parsons problems sourced from the computing education literature, we find that Codex successfully reorders the problem blocks about half of the time, a much lower rate of success than reported in prior work on more free-form programming tasks. Regarding prompts, we find that small variations in prompting have a noticeable effect on model performance, although the effect is not as pronounced as the differences between problems.
Description
Publisher Copyright: © 2023 Owner/Author.
Keywords
academic integrity, ai, artificial intelligence, chatgpt, code generation, code writing, codex, computer programming, copilot, CS1, deep learning, generative ai, GitHub, GPT-3, introductory programming, large language models, machine learning, ML, natural language processing, neural networks, novice programming, openAI
Citation
Reeves, B., Sarsa, S., Prather, J., Denny, P., Becker, B. A., Hellas, A., Kimmel, B., Powell, G. & Leinonen, J. 2023, 'Evaluating the Performance of Code Generation Models for Solving Parsons Problems with Small Prompt Variations', in ITiCSE 2023 - Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education, ACM, pp. 299-305, Annual Conference on Innovation and Technology in Computer Science Education, Turku, Finland, 08/07/2023. https://doi.org/10.1145/3587102.3588805