Designing and Building a Platform for Teaching Introductory Programming supported by Large Language Models
School of Science | Master's thesis
Unless otherwise stated, all rights belong to the author. You may download, display and print this publication for your own personal use. Commercial use is prohibited.
Authors
Date
2024-01-22
Department
Major/Subject
Security and Cloud Computing
Mcode
SCI3084
Degree programme
Master’s Programme in Computer, Communication and Information Sciences
Language
en
Pages
77
Abstract
Large language models (LLMs) have the potential to improve programming education by providing feedback and guidance to students. Despite these potential benefits, integrating LLMs into education presents unique challenges, including the risk of over-reliance on their feedback and the inconsistent quality of that feedback. Addressing these concerns requires research into effective ways of integrating LLMs into programming education, which is itself challenging because LLMs evolve rapidly. To meet this challenge, this thesis introduces a flexible platform that can integrate multiple LLMs, providing an experimental space for research and for innovative approaches to enhancing programming education with LLMs. Guided by the Design Science Research Methodology framework, the thesis outlines the design, development, and evaluation of this educational platform. Conducted at Aalto University's LeTech research group, the thesis presents an introductory programming learning platform tailored to the group's research objectives. The platform facilitates data collection and enables students to have a personalized learning experience with the help of LLM feedback. The work advances our understanding of LLMs in education and of the importance of feedback mechanisms. The developed platform demonstrates the feasibility of integrating LLMs into programming education. In a small-scale study, the platform's overall usability received an average rating of 4.21 out of 5.00, while the LLM feedback received an average usefulness rating of 4.28 out of 5.00, highlighting its effectiveness and value in assisting students. Although the study's sample size was small, the findings are encouraging. Future research could use the platform to explore multiple LLMs and to conduct studies that improve the feedback mechanisms.
Supervisor
Hellas, Arto
Thesis advisor
Koutcheme, Charles
Keywords
large language models, programming education, feedback, interactive learning environment