Adapting organizational IT-solutions for local large language model support
Mäkelä, Tuukka (2025-06-04)
This publication is subject to copyright regulations. The work may be read and printed for personal use. Commercial use is prohibited.
open access
The permanent address of this publication is:
https://urn.fi/URN:NBN:fi-fe2025063075758
Abstract
This research aims to broaden understanding of localized large language models (LLMs) in organizational contexts. As a technology, they offer several advantages well suited to the corporate world. Most notably, they provide significant benefits for controlling security-related aspects, as such a system can be run completely in-house, without any organizational, and potentially sensitive, data leaving the full control of the organization.
Due to the demands of higher-end LLMs in particular, most organizations are not prepared to support them on their pre-existing hardware. While most organizations have deployed computing solutions of some kind across their operations, the unique, ML-heavy nature of LLM workloads means this hardware may not be suitable, forcing organizations to rethink their IT solutions before they can begin utilizing localized LLMs.
This research was completed in several stages using different methodologies. First, the driving factors and key trends shaping these developments were investigated through a systematic grey-literature review. Building on these factors, the research then examined the functionality and requirements of a few key tiers of LLMs, drawing on practical benchmark runs as well as third-party benchmark data.
On the basis of these findings, the research concludes with a set of suggestions and considerations for different tiers of LLM systems, including the requirements such systems impose and the benefits an organization may expect from them. These recommendations were developed following the principles of a design-science artifact, giving anyone seeking to utilize the findings a solid foundation for understanding and completing such a project in a sensible and cost-effective manner. The artifact built within this research includes methods for planning such projects, along with the phases and tasks that should be completed to ensure a successful deployment.