Authors:
Kevin Ponciano¹; Abrantes Filho¹; Jean-Rémi Bourguet¹ and Elias de Oliveira²

Affiliations:
¹Department of Computer Science, Vila Velha University, Vila Velha, Brazil
²Postgraduate Program of Informatics (PPGI), Federal University of Espírito Santo, Vitória, Brazil
Keyword(s):
Autograder, Programming Activities, Criteria-Based Evaluation, Virtual Environments, Plagiarism Detection.
Abstract:
The evaluation of programming exercises submitted by a large volume of students presents an ongoing challenge for educators. As the number of students engaging in programming courses continues to rise, the burden of assessing their work becomes increasingly demanding. To address this challenge, automated systems known as autograders have been developed to streamline the evaluation process. Autograders recognize solutions and assign scores based on predefined criteria, thereby assisting teachers in efficiently assessing student programs. In this paper, we propose the creation of a comprehensive autograding platform in a Brazilian university by leveraging open-source technologies pioneered by prestigious universities such as Harvard, Carnegie Mellon, and Stanford. Job processing servers, interface components, and anti-plagiarism modules are integrated to provide educators with an evaluation tool, ensuring efficiency in grading processes and fostering enriched learning experiences. Through data analysis of the students' submissions, we aim to emphasize the platform's effectiveness and pinpoint areas for future enhancements to better cater to the needs of educators and students.