Journalists and experts from across Europe contributed to the new AlgorithmWatch annual report Automating Society, which covers the deployment of automated decision-making systems (ADMS) in 16 countries. For each country, the report describes the most (ab)used ADMS and related resources; for some countries, e.g. Italy, it also includes data about public opinion on automation. An appraisal of the situation in the European Union completes the picture. It is the most comprehensive research on automated decision-making (ADM) systems conducted in Europe so far. As Fabio Chiusi points out in the introduction, face recognition is the most worryingly widespread of them:
This is arguably the newest, quickest, and most concerning development highlighted in this report. Face recognition, nearly absent from the 2019 edition, is being trialed and deployed at an alarming rate throughout Europe. In just over a year since our last report, face recognition is present in schools, stadiums, airports, and even in casinos. It is also used for predictive policing, to apprehend criminals, against racism, and, regarding the COVID-19 pandemic, to enforce social distancing, both in apps and through “smart” video-surveillance.
Striving for democratic control
In one year, the use of ADMS has increased dramatically. While ADMS already affect all sorts of activities and judgments, they are still mostly deployed without any meaningful democratic debate. The main fight in the coming years will be over the democratic control of ADMS. AlgorithmWatch offers three policy suggestions to that end:
Increase the transparency of ADM systems
Without the ability to know precisely how, why, and to what end ADM systems are deployed, all other efforts to reconcile fundamental rights with ADM systems are doomed to fail. The suggested instruments are public registers of ADM systems used within the public sector, and legally binding data-access frameworks to support and enable public-interest research.
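To make the first instrument concrete, here is a minimal sketch of what a single entry in such a public register might record. All field names are assumptions for illustration, not a schema proposed by AlgorithmWatch or the report.

```python
# Illustrative only: one possible shape for an entry in a public
# register of ADM systems. Field names are assumptions, not a
# schema from the Automating Society report.
from dataclasses import dataclass, field

@dataclass
class ADMRegisterEntry:
    system_name: str                 # e.g., "School place allocation"
    operator: str                    # public body responsible for the system
    purpose: str                     # which decision the system supports
    legal_basis: str                 # statute or regulation authorising its use
    data_sources: list[str] = field(default_factory=list)  # inputs used
    human_oversight: bool = True     # is a human in the loop?
    contact: str = ""                # where affected citizens can ask questions

entry = ADMRegisterEntry(
    system_name="School place allocation",
    operator="Municipal education department",
    purpose="Rank applicants for oversubscribed schools",
    legal_basis="Local education act (hypothetical)",
    data_sources=["home address", "sibling enrolment"],
)
print(entry)
```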
Create a meaningful accountability framework for ADM systems
As findings from Spain and France have shown, even when the transparency of an ADM system is required by law and/or information has been disclosed, this does not necessarily result in accountability. Further steps are needed to ensure that laws and requirements are actually enforceable. The suggestions here are to develop and establish approaches to effectively audit algorithmic systems, to support civil society organizations as watchdogs of ADM systems, and to ban face recognition that might amount to mass surveillance.
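To give a concrete idea of what auditing an algorithmic system can mean in its narrowest form, here is a minimal sketch of a disparate-impact check, comparing positive-decision rates across groups. The 0.8 threshold comes from the US "four-fifths rule" in hiring law; the data and function names are assumptions for the example, and a real audit would cover far more (data provenance, error rates, possibilities for recourse).

```python
# Illustrative only: one narrow kind of algorithmic audit, a
# disparate-impact check on decision outcomes by group.
from collections import defaultdict

def positive_rates(decisions):
    """decisions: iterable of (group, approved) pairs, approved in {0, 1}."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions, reference_group):
    rates = positive_rates(decisions)
    ref = rates[reference_group]
    # Ratio of each group's approval rate to the reference group's;
    # the "four-fifths rule" flags ratios below 0.8 for scrutiny.
    return {g: rate / ref for g, rate in rates.items()}

sample = [("men", 1)] * 80 + [("men", 0)] * 20 \
       + [("women", 1)] * 55 + [("women", 0)] * 45
print(disparate_impact(sample, reference_group="men"))
# {'men': 1.0, 'women': 0.6875} -- below 0.8, worth investigating
```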
Enhance algorithmic literacy and strengthen public debate on ADM systems
More transparency of ADM systems can only be truly useful if those confronted with them, such as regulators, government, and industry bodies, can deal with those systems and their impact in a responsible and prudent manner. In addition, those affected by these systems need to be able to understand where, why, and how they are deployed. This is why we need to enhance algorithmic literacy at all levels, among key stakeholders as well as the general public, and to foster more diverse public debates about ADM systems and their impact on society. The recommendations are to establish independent centers of expertise on ADMS, and to promote an inclusive and diverse democratic debate around ADM systems.
Acting before the damage is done
The recommendations boil down to creating intermediaries that can represent citizens and look into black boxes, or allow others to do so. I am afraid the third point is the most essential: we need more citizens to be aware. Some 160 people connected to the webinar presenting the report; that number is derisory compared to the magnitude of the problem, which cannot remain a specialists' problem. Secondly, without diminishing the vital importance of oversight and advocacy, they come in at a later stage, when the damage is done. Some recommendations from the report I authored for Digital Future Society about the gender impact of ADMS in digital welfare are valid beyond gender and applicable to the design of inclusive public ADMS: namely, to review the way datasets are built, because they carry bias with them; to co-design with citizens; to conduct impact assessments ahead of ADMS deployment; and finally, to search for ways to embed equality, for instance by planning for desired scenarios.
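As an illustration of the first of those recommendations, reviewing how datasets are built, here is a minimal sketch that flags when a training set under- or over-represents a group relative to the population the system will be applied to. Column names, population shares, and the tolerance are assumptions for the example.

```python
# Illustrative only: a minimal "dataset review" check, flagging groups
# whose share in the training data diverges from their share in the
# population the ADM system will be used on.
def representation_gap(records, attribute, population_shares, tol=0.05):
    """Return groups whose observed share deviates from the expected
    population share by more than `tol`."""
    counts = {}
    for row in records:
        g = row[attribute]
        counts[g] = counts.get(g, 0) + 1
    n = len(records)
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / n
        gaps[group] = observed - expected
    return {g: d for g, d in gaps.items() if abs(d) > tol}

training_set = [{"gender": "man"}] * 700 + [{"gender": "woman"}] * 300
flagged = representation_gap(
    training_set, "gender", {"man": 0.49, "woman": 0.51}
)
print(flagged)  # {'man': 0.21, 'woman': -0.21} -- women under-represented
```

A check like this catches only the crudest kind of bias; it says nothing about how the labels were produced or whom the data silently excludes, which is why the recommendation is to review how datasets are built, not merely to count rows.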