
Register Always Matters: Analysis of LLM Pretraining Data Through the Lens of Language Variation

Myntti, Amanda; Henriksson, Erik; Laippala, Veronika; Pyysalo, Sampo

View/Open
823_Register_Always_Matters_AmandaMyntti.pdf (1.708Mb)

URI
https://openreview.net/forum?id=FqXXtSZWEZ
The persistent address of the publication is:
https://urn.fi/URN:NBN:fi-fe202601217301
Abstract

Pretraining data curation is a cornerstone in Large Language Model (LLM) development, leading to growing research on quality filtering of large web corpora. From statistical quality flags to LLM-based labelling systems, datasets are divided into categories, frequently reducing to a binary: texts passing the filters are deemed valuable examples, while the rest are discarded as useless or detrimental. However, a more detailed understanding of the contribution of different kinds of texts to model performance is still largely lacking. In this article, we present the first study utilising registers or genres, a widely used standard in corpus linguistics for modelling linguistic variation, to curate pretraining datasets and investigate the effect of register on the performance of LLMs. We train small generative models with register-classified data, evaluate them on standard benchmarks, and show that the register of pretraining data substantially affects model performance. We uncover surprising relationships between the pretraining material and the resulting models: using the News register results in subpar performance, whereas including the Opinion class, covering texts such as reviews and opinion blogs, is highly beneficial. While a model trained on the entire unfiltered dataset outperforms those trained on datasets limited to a single register, combining well-performing registers such as How-to-Instructions, Informational Description, and Opinion leads to major improvements. Furthermore, analysis of individual benchmark results reveals key differences in the strengths and drawbacks of specific register classes as pretraining data: How-to-Instructions excels at physical reasoning and sentence completion while barely crossing random baselines on world-knowledge benchmarks, whereas Narrative boosts performance on social interaction tasks but struggles with scientific questions. These findings show that register is an important explainer of model variation and can facilitate more deliberate and detailed future data selection practices.
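The curation step described above amounts to selecting pretraining documents by their register label. The sketch below illustrates that idea in Python, assuming each document already carries a register tag; the field names, label strings, and toy corpus are illustrative assumptions, not the authors' code or data.

```python
# Minimal sketch (assumed interface, not the authors' pipeline): keep only
# documents whose register label belongs to a chosen set, e.g. the
# well-performing combination named in the abstract.

from typing import Dict, Iterable, List

# Hypothetical label strings; the paper's register taxonomy may differ.
SELECTED_REGISTERS = {"How-to-Instructions", "Informational Description", "Opinion"}


def select_by_register(docs: Iterable[Dict[str, str]],
                       registers: set = SELECTED_REGISTERS) -> List[str]:
    """Return the texts of documents whose 'register' field is in `registers`."""
    return [doc["text"] for doc in docs if doc.get("register") in registers]


if __name__ == "__main__":
    # Toy corpus with pre-assigned register labels.
    corpus = [
        {"text": "Step 1: preheat the oven to 200 C ...", "register": "How-to-Instructions"},
        {"text": "The council announced the results today ...", "register": "News"},
        {"text": "This phone is great value for the money ...", "register": "Opinion"},
    ]
    # Keeps the how-to and opinion texts and drops the news text.
    print(select_by_register(corpus))
```

In practice the register tags would come from a classifier run over the web corpus before this selection step; the filtered texts would then be tokenised and used as the pretraining set for the model variants compared in the study.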

Collections
  • Rinnakkaistallenteet (self-archived publications) [29337]

Turun yliopiston kirjasto | Turun yliopisto
julkaisut@utu.fi | Privacy | Accessibility statement
 

 
