The advantages of context specific language models: the case of the Erasmian Language Model

João Fernando Ferreira Gonçalves, Nick Jelicic, Michele Murgia, Evert Stamhuis

Research output: Working paper › Preprint › Academic


Abstract

The current trend for improving language model performance is to scale up the number of parameters (e.g., the state-of-the-art GPT-4 model has approximately 1.7 trillion parameters) or the amount of training data fed into the model. However, this comes at significant computational and energy costs that compromise the sustainability of AI solutions, as well as risks relating to privacy and misuse. In this paper we present the Erasmian Language Model (ELM), a small, context-specific, 900-million-parameter model, pre-trained and fine-tuned by and for Erasmus University Rotterdam. We show that the model performs adequately in a classroom context for essay writing, and that it achieves superior performance on subjects that are part of its context. This has implications for a wide range of institutions and organizations, showing that context-specific language models may be a viable alternative for resource-constrained, privacy-sensitive use cases.
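The abstract does not describe ELM's actual training stack, so the following is only a minimal illustrative sketch of how an institution might fine-tune a comparably sized (~1B-parameter) open causal language model on its own text corpus using the Hugging Face transformers and datasets libraries. The base model checkpoint, corpus file name, and all hyperparameters below are assumptions for illustration, not the configuration used for ELM.

    # Hypothetical sketch: domain fine-tuning of a small causal LM on an
    # institutional corpus. Names and hyperparameters are illustrative only.
    from datasets import load_dataset
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        DataCollatorForLanguageModeling,
        Trainer,
        TrainingArguments,
    )

    # Stand-in base model of roughly comparable size to ELM's 900M parameters.
    MODEL_NAME = "EleutherAI/pythia-1b"
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

    # Plain-text corpus of institutional documents (hypothetical file).
    dataset = load_dataset("text", data_files={"train": "university_corpus.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(
            output_dir="elm-style-finetune",
            per_device_train_batch_size=4,
            gradient_accumulation_steps=8,   # effective batch size of 32
            num_train_epochs=1,
            learning_rate=2e-5,
            fp16=True,                       # assumes a CUDA-capable GPU
        ),
        train_dataset=tokenized,
        # Causal LM objective: mlm=False means labels are the inputs shifted by one.
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()

A setup along these lines illustrates the paper's broader point: adapting a sub-billion-parameter model to a single institution's corpus requires only modest compute and keeps the data on infrastructure the institution controls.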
Original language: English
Publisher: arXiv
Publication status: Published - 13 Aug 2024

Research programs

  • ESHCC M&C

Erasmus Sectorplan

  • Sectorplan SSH-Breed

