Register Always Matters: Analysis of LLM Pretraining Data Through the Lens of Language Variation

dc.contributor.author: Myntti, Amanda
dc.contributor.author: Henriksson, Erik
dc.contributor.author: Laippala, Veronika
dc.contributor.author: Pyysalo, Sampo
dc.contributor.organization: fi=data-analytiikka|en=Data Analytics|
dc.contributor.organization: fi=digitaalinen kielentutkimus, espanja, italia, kiina, ranska, saksa|en=Digital Language Studies, Chinese, French, German, Italian, Spanish|
dc.contributor.organization: fi=tietotekniikan laitos|en=Department of Computing|
dc.contributor.organization-code: 1.2.246.10.2458963.20.36764574459
dc.contributor.organization-code: 1.2.246.10.2458963.20.68940835793
dc.contributor.organization-code: 1.2.246.10.2458963.20.85312822902
dc.converis.publication-id: 506457520
dc.converis.url: https://research.utu.fi/converis/portal/Publication/506457520
dc.date.accessioned: 2026-01-27T09:58:57Z
dc.date.available: 2026-01-27T09:58:57Z
dc.description.abstract: <p>Pretraining data curation is a cornerstone of Large Language Model (LLM) development, leading to growing research on quality filtering of large web corpora. From statistical quality flags to LLM-based labelling systems, datasets are divided into categories, frequently reducing to a binary: texts passing the filters are deemed valuable examples, while the rest are discarded as useless or detrimental. However, a more detailed understanding of how different kinds of texts contribute to model performance is still largely lacking. In this article, we present the first study utilising <em>registers</em> or <em>genres</em>—a widely used standard in corpus linguistics for modelling linguistic variation—to curate pretraining datasets and investigate the effect of register on the performance of LLMs. We train small generative models on register-classified data, evaluate them using standard benchmarks, and show that the register of pretraining data substantially affects model performance. We uncover surprising relationships between the pretraining material and the resulting models: using the <em>News</em> register results in subpar performance, whereas including the <em>Opinion</em> class, covering texts such as reviews and opinion blogs, is highly beneficial. While a model trained on the entire unfiltered dataset outperforms those trained on datasets limited to a single register, combining well-performing registers such as <em>How-to-Instructions</em>, <em>Informational Description</em>, and <em>Opinion</em> leads to major improvements.
Furthermore, analysis of individual benchmark results reveals key differences in the strengths and weaknesses of specific register classes as pretraining data: <em>How-to-Instructions</em> excels at physical reasoning and sentence completion while barely crossing random baselines on world-knowledge benchmarks, whereas <em>Narrative</em> boosts performance on social interaction tasks but struggles with scientific questions. These findings show that register is an important explainer of model variation and can facilitate more deliberate and detailed future data selection practices.</p>
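The curation procedure the abstract describes (keeping only documents whose register label falls in a chosen set, e.g. the well-performing combination of <em>How-to-Instructions</em>, <em>Informational Description</em>, and <em>Opinion</em>) can be sketched as follows. This is an illustrative sketch only, not the authors' code: the field names and example documents are hypothetical.

```python
# Hypothetical sketch of register-based pretraining-data curation:
# keep only documents whose register label is in a chosen set.
# The "register" field name and the example corpus are assumptions
# for illustration, not the paper's actual data format.

KEEP_REGISTERS = {"How-to-Instructions", "Informational Description", "Opinion"}

def filter_by_register(documents, keep=KEEP_REGISTERS):
    """Return only the documents whose register label is in `keep`."""
    return [doc for doc in documents if doc.get("register") in keep]

corpus = [
    {"text": "Mix the flour and water, then knead...", "register": "How-to-Instructions"},
    {"text": "Markets fell sharply today as...", "register": "News"},
    {"text": "This phone's battery life is superb...", "register": "Opinion"},
]

curated = filter_by_register(corpus)
# The News document is dropped; the other two are kept.
```

In this toy example the <em>News</em> document is excluded, mirroring the abstract's finding that the <em>News</em> register yields subpar downstream performance while the kept registers are beneficial.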
dc.identifier.olddbid: 214371
dc.identifier.oldhandle: 10024/197389
dc.identifier.uri: https://www.utupub.fi/handle/11111/39291
dc.identifier.url: https://openreview.net/forum?id=FqXXtSZWEZ
dc.identifier.urn: URN:NBN:fi-fe202601217301
dc.language.iso: en
dc.okm.affiliatedauthor: Myntti, Amanda
dc.okm.affiliatedauthor: Henriksson, Erik
dc.okm.affiliatedauthor: Laippala, Veronika
dc.okm.affiliatedauthor: Pyysalo, Sampo
dc.okm.discipline: 113 Computer and information sciences (en_GB)
dc.okm.discipline: 6121 Languages (en_GB)
dc.okm.discipline: 113 Tietojenkäsittely ja informaatiotieteet (fi_FI)
dc.okm.discipline: 6121 Kielitieteet (fi_FI)
dc.okm.internationalcopublication: not an international co-publication
dc.okm.internationality: International publication
dc.okm.type: D3 Conference Article
dc.publisher.country: Canada (en_GB)
dc.publisher.country: Kanada (fi_FI)
dc.publisher.country-code: CA
dc.relation.conference: Conference on Language Modeling
dc.source.identifier: https://www.utupub.fi/handle/10024/197389
dc.title: Register Always Matters: Analysis of LLM Pretraining Data Through the Lens of Language Variation
dc.title.book: Proceedings of the Second Conference on Language Modeling, COLM 2025
dc.year.issued: 2025

Files

Name: 823_Register_Always_Matters_AmandaMyntti.pdf
Size: 1.71 MB
Format: Adobe Portable Document Format