Automated Essay Scoring Using Large Language Models | HackerNoon
Briefly

Current automated essay scoring (AES) efforts are shifting toward a multi-metric approach that evaluates essays on granular aspects such as cohesion, grammar, and vocabulary rather than relying on a single overall score. This addresses the subjective nature of essay evaluation and aims to give students the meaningful feedback that existing automated systems lack. By scoring six specific metrics (cohesion, syntax, vocabulary, phraseology, grammar, and conventions), the goal is to build a more robust model that avoids overfitting.
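To make the multi-metric idea concrete, here is a minimal sketch of how a single model could predict all six scores jointly: a pre-trained transformer encoder with a six-output regression head. The model name, pooling strategy, and training details are illustrative assumptions, not the article's actual implementation.

```python
# Minimal sketch (assumption, not the article's implementation): a transformer
# encoder with one regression output per essay metric.
# Assumes the Hugging Face `transformers` and `torch` packages are installed;
# the encoder name is an illustrative placeholder.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

METRICS = ["cohesion", "syntax", "vocabulary", "phraseology", "grammar", "conventions"]

class MultiMetricScorer(nn.Module):
    def __init__(self, encoder_name: str = "microsoft/deberta-v3-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # One regression output per metric instead of a single overall score.
        self.head = nn.Linear(hidden, len(METRICS))

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # Mean-pool token embeddings over non-padding positions,
        # then predict all six metric scores jointly.
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (out.last_hidden_state * mask).sum(1) / mask.sum(1)
        return self.head(pooled)

if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
    model = MultiMetricScorer()
    batch = tokenizer(["An example student essay ..."], return_tensors="pt",
                      truncation=True, padding=True)
    scores = model(batch["input_ids"], batch["attention_mask"])
    # Training would minimize MSE against human-labelled scores for each metric.
    print(dict(zip(METRICS, scores[0].tolist())))
```

In such a setup, the shared encoder learns representations useful for all six dimensions, while each output is supervised by its own human-assigned label.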
Despite significant funding and development in automated essay scoring, the field faces inherent challenges because essay evaluation is subjective. This subjectivity complicates the labeling of datasets, which reliable models require for training. As a result, training data for AES systems remains limited compared to other NLP tasks such as machine translation and named entity recognition. To overcome these challenges, AES research is evolving toward assessing essays on multiple features, improving the evaluation process.
Read at Hackernoon