Policy Approaches to Socio-technical Causes of Bias in Algorithmic Systems — What Role can Ethical Standards Play?



Ansgar Koene, University of Nottingham

Despite warnings about Bias in Computer Systems going back to at least 1992, it has taken concerted efforts by groups of researchers such as FAT/ML (Fairness, Accountability and Transparency in Machine Learning), together with news and media reports highlighting discriminatory effects in data-driven algorithms (e.g. recidivism prediction), to dispel the naïve-optimistic myth that the logic/mathematics of computation would automatically result in objectively unbiased outcomes.
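
This mechanism is easy to demonstrate. As a minimal sketch (not from the talk; all numbers and the detection-rate scenario are invented for illustration), the following Python simulation shows how a statistically "neutral" risk score reproduces a skew that exists only in how the historical data were recorded:

    import random

    random.seed(0)

    # Two groups with the SAME true reoffense rate ...
    TRUE_RATE = 0.3
    # ... but group B's offenses are recorded more often, e.g. because
    # it is policed more heavily (detection differs, not behaviour).
    DETECTION = {"A": 0.5, "B": 0.9}

    def make_records(group, n=10_000):
        """Simulate historical records that only capture *detected* reoffense."""
        records = []
        for _ in range(n):
            reoffended = random.random() < TRUE_RATE
            observed = reoffended and random.random() < DETECTION[group]
            records.append(observed)
        return records

    data = {g: make_records(g) for g in ("A", "B")}

    # A naive "model": predicted risk = observed base rate per group.
    for g, recs in data.items():
        risk = sum(recs) / len(recs)
        print(f"group {g}: learned risk score = {risk:.2f}")

The arithmetic is flawless, yet the learned scores come out around 0.15 for group A and 0.27 for group B: the model assigns one group nearly twice the risk for identical underlying behaviour, because the bias sits in the data-generating process rather than in the computation.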

Starting from a brief overview of key challenges in the design and use of data-driven algorithmic decision-making systems, I will focus on the inherently socio-technical nature of the real-world applications that give rise to concerns about bias.

Against this background I will discuss the use cases and design framework currently under consideration in the IEEE P7003 Standard for Algorithmic Bias Considerations working group, comparing them with findings from the UnBias project on the lived experience and concerns of teenaged digital natives regarding their everyday interactions with algorithmically mediated online media.

I will conclude with a review of some of the policy-making initiatives recently launched by professional associations (e.g. the ACM principles; the IEEE Global Initiative on Ethical Considerations in AI and Autonomous Systems), industry-led organizations (e.g. the Partnership on AI) and regional/national government bodies (e.g. EC Algorithmic Awareness Building; TransAlgo in France; UK parliamentary inquiries).

Ansgar Koene is a Senior Research Fellow at the Horizon Institute for Digital Economy Research at the University of Nottingham. He is a Co-Investigator on the UnBias project, whose goal is to emancipate users against algorithmic biases for a trusted digital economy. Ansgar chairs the IEEE working group developing the Standard for Algorithmic Bias Considerations.
