CONTEXTUALIZING DISCRIMINATION IN AI: MORAL IMAGINATION AND VALUE SENSITIVE DESIGN AS A FRAMEWORK TO STUDY AI DEVELOPMENT IN THE EU
Chirwa, Chiza (2021-12-07)
This publication is subject to copyright regulations. The work may be read and printed for personal use. Use for commercial purposes is prohibited.
Open access
The permanent address of the publication is:
https://urn.fi/URN:NBN:fi-fe2022041128185
Abstract
AI will continue to play a role in service provision by both public- and private-sector providers. These services sometimes touch on fundamental rights, such as the right not to be discriminated against. Many people hold the prevailing belief that data knows best and that algorithms ensure equality and fairness.
However, algorithms do discriminate, and sometimes they perpetuate inequality. This paper is built on the premise that the primary source of discrimination in AI is human input rather than the underlying AI technology. Moral imagination, or more accurately the lack of it, may be responsible for non-technical bias in AI decision-making. The prohibition of discrimination is recognised as a fundamental value of the EU, and it follows that AI systems must comply with EU regulations in their decision-making to prevent discrimination and, in the process, protect human dignity.
Where human dignity is concerned, algorithmic bias remains the main problem in automated decision-making. This bias, more often than not, results from institutional and societal discrimination being reinforced in AI systems during the development phase, which in turn perpetuates bias in the wider society when those systems are used.
This paper takes a dogmatic approach to analysing the EU value of the prohibition of discrimination as it is interpreted in the design process of AI systems, using moral imagination and value sensitive design as a framework of investigation.