R&D project "AI-assisted-short-answer-grading"

Teaching is challenging and takes considerable time and effort. In many areas, especially computer science, it requires keeping the curriculum and assignments up to date and relevant. Personal communication with students and feedback on tests and homework are also essential, and equally time-consuming. These activities are the most visible to students, but a teacher or professor has many more duties, such as paperwork and organizational matters, to name just a few. For this reason, it is important to optimize the time dedicated to each aspect of the work. Keeping course content relevant and communicating with students cannot easily be outsourced, but grading tests and assignments is usually straightforward and does not require a professor's expertise. Moreover, students often make similar mistakes. This makes grading an ideal task to automate with machine learning models.

Assignments and test answers can include several kinds of content, such as text, pictures, and programming code. In this work we focus on text. There are three main types of text answers: "fill in the gap", short answers, and long-form essays. The first type is not a complex problem: there is normally only one acceptable answer, and the only challenges are handwriting recognition (if the test was not taken on a computer) and spelling correction (if misspelled answers can be accepted). The other two types are more challenging from a text analysis point of view. Automated essay scoring (AES) focuses on evaluating texts longer than one paragraph, whereas automated short answer grading (ASAG) concentrates on answers consisting of a few sentences. Another important difference is that ASAG is concerned only with the meaning of the answer, while for AES the writing style also matters.
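To illustrate why the first type is straightforward, here is a minimal sketch of a gap-fill checker that tolerates small misspellings. It uses only Python's standard library; the `grade_gap_fill` helper, the answer key, and the 0.85 similarity threshold are illustrative choices, not part of this project.

```python
from difflib import SequenceMatcher

def grade_gap_fill(student_answer: str, accepted_answers: list[str],
                   threshold: float = 0.85) -> bool:
    """Accept the answer if it closely matches any entry in the key.

    `threshold` controls how much misspelling is tolerated; 1.0 demands
    an exact (case- and whitespace-insensitive) match.
    """
    normalized = student_answer.strip().lower()
    for key in accepted_answers:
        similarity = SequenceMatcher(None, normalized, key.strip().lower()).ratio()
        if similarity >= threshold:
            return True
    return False

# A misspelled but recognizable answer is still accepted.
print(grade_gap_fill("polymorfism", ["polymorphism"]))  # True
print(grade_gap_fill("inheritance", ["polymorphism"]))  # False
```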

AES is a rather complex task. Some providers of massive open online courses (MOOCs), such as edX, the non-profit platform founded by MIT and Harvard and home to Harvard's HarvardX courses, as well as organizations like ETS, have already started using automated essay grading systems. This is a logical step: most online courses are free or very cheap compared with those at traditional universities and institutes, and it is impossible to find many teachers willing to work for free. These systems use an algorithm developed by one of the contestants of "The Hewlett Foundation: Automated Essay Scoring" competition on Kaggle, together with the EASE library for machine learning text classification. Although AES has been received rather positively, many people still have strong doubts about the feasibility of automated essay grading. Many, including Noam Chomsky, have signed the petition "Professionals Against Machine Scoring of Student Essays in High-Stakes Assessment". There have also been numerous papers criticizing either state-of-the-art AES or its appropriateness in general.

Good AES research requires a solid background in computational linguistics, and the area is more relevant to the humanities. This project instead concentrates on developing an AI assistant for grading assessments and exams in computer science, electrical engineering, physics, and other technical disciplines, where answers are usually shorter and more concrete, and where style and spelling are typically not of interest (though that depends on the individual professor). Therefore this work focuses on ASAG. The task is viable, and a solution should help both professors and students: it frees professors' time for other teaching activities and gives students faster and less biased feedback on their work.
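Since ASAG compares the meaning of a student's answer against a reference answer, one common framing (not necessarily the one used in this project) is to embed both texts and score their semantic similarity. Below is a minimal sketch assuming the sentence-transformers library; the model name, the reference answer, and the 0-10 score scaling are illustrative assumptions.

```python
# A minimal ASAG sketch: score a short answer by its semantic similarity
# to a reference answer. Assumes `pip install sentence-transformers`.
from sentence_transformers import SentenceTransformer, util

# Illustrative model choice; any sentence-embedding model would do.
model = SentenceTransformer("all-MiniLM-L6-v2")

reference = "A hash table stores key-value pairs and offers average O(1) lookup."
student = "It keeps keys with values and finds them in constant time on average."

ref_emb, stu_emb = model.encode([reference, student], convert_to_tensor=True)

# Cosine similarity lies in [-1, 1]; clamp to [0, 1] and scale to a 0-10 grade.
similarity = util.cos_sim(ref_emb, stu_emb).item()
grade = round(max(similarity, 0.0) * 10, 1)

print(f"similarity={similarity:.2f}, suggested grade={grade}/10")
```

In practice, such a raw similarity score would be calibrated against professor-graded examples, since paraphrase similarity and factual correctness are not the same thing.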