It is okay to be disoriented by the number of publications and guidelines about AI ethics. Since 2016, they have been booming. The inventory of AI ethics guidelines compiled by AlgorithmWatch lists over 160 of them, and it is continuously updated. Much has been said on the topic, but little of this overproduction has led to concrete outputs, clarity, or binding international agreements. The very focus on ethics is the problem here.

The latest report by Access Now, Europe’s Approach to Artificial Intelligence: How AI Strategy is Evolving, is a handy compass for making this point at year’s end. By following the origin and diffusion of the notion of trustworthy AI, it traces a recent history of AI regulatory approaches.

In April 2019, the Ethics Guidelines published by the High-Level Expert Group on AI (HLEG) defined a trustworthy AI system as one that is:
1. lawful – respecting all applicable laws and regulations
2. ethical – respecting ethical principles and values
3. robust – both from a technical perspective while taking into account its social environment.

Since then, the notion has taken off in private companies’ policies and international agreements. Perhaps the most significant impact of the EU approach can be seen in the use of the term “Trustworthy AI” in the AI ethics principles developed by the Organisation for Economic Co-operation and Development (OECD).

A portion of the report is dedicated to the criticism of the HLEG’s work, which points the finger at larger problems. First, the group’s composition: out of 56 experts in total, 37 were industry representatives. Business interests and the Trojan horse of “unleashing innovation” led to requests such as removing the phrase ‘non-negotiable’ from the document. In my view, the very definition of “expert” is problematic when dealing with the social impact of technology: by professionalising the answer to societal challenges, such groups tend to exclude communities and professionals who may not deal with AI (or with whatever technology is at stake), but who have a vast knowledge of the problems it is supposed to solve, and of the consequences of its indiscriminate application.

The other prominent criticism addressed to the HLEG, and I’d say to Europe in general, concerns its weak approach to AI: the suspicion that the focus on ethics has been used as a way to dodge regulation. The simple yet powerful argument of Access Now is that AI regulation should be grounded in fundamental rights and social justice rather than ethics, because the latter is not legally binding.

In its comments on the White Paper on AI, the NGO argues that by adopting a risk-based approach, the Commission has reversed its priorities: “the primary objective of regulating AI should be to protect and promote fundamental rights enshrined in the Charter, to avoid individual and societal harms, not to promote AI uptake and then to try and mitigate any harms caused.” I’d add that, besides escaping legal enforcement, ethics does not account for the structural socio-economic inequalities that AI reflects. Framing AI as a technical system that merely needs to comply with human values can only lead to after-the-fact technical fixes. The problem is not bias, but injustice. A social justice framing makes fundamental rights the foundation from which to develop technology, and it is in everybody’s interest to include in decision-making the voices of those at the bottom of the social ladder.
