Large Language Models and Computer-Assisted Language Learning: existing flaws and future potential

Speaker: Andrew Caines (Cambridge University, UK)

Abstract: Large language models (LLMs) with billions or trillions of parameters currently dominate the NLP landscape, and they have recently been applied to many tasks in CALL, such as essay grading, error correction, and content creation. Since LLMs outperform earlier models on many general NLP tasks, it can be tempting to view them as a ready solution for any domain, including education. But it is important to evaluate precisely how good they are at CALL-related tasks and to understand where they fail or could be improved. I will give an overview of the ALTA group's work in Cambridge on CALL for English: comparing the performance of LLMs and supervised models on various tasks, pointing out some of their strengths and weaknesses, and looking towards future developments in explainability and so-called ‘baby’ LMs, which are trained at a fraction of the cost of LLMs yet still maintain good levels of performance.

Date

20 November 2025

Event type

Other

Open

Yes

Location

Room: J406