Europe is missing the AI train. While Brussels dispenses politically correct statements about human-centric AI, other parts of the world are heading undisturbed towards supremacy in Artificial (non-human, by the way) Intelligence. China is leading the race: its AI plan is carefully structured through 2030 and relies on a competitive mass of data, passively provided by one of the largest national populations on the planet. The European roadmap is due out in April and, according to observers, the draft version is not very promising: the plans are rather defensive, as are most public statements on the topic.

Investments, proactivity and few legal barriers are fuelling the progress of AI outside Europe. Brussels could use some realism: understand where the others struggle and be a proactive force there. Examples of how algorithms and AI are discriminatory and biased are countless: Artificial Intelligence reproduces its programmers’ thinking. Voice recognition technologies trained and tested solely by men struggle to understand female voices. Algorithms readily associate black people with crime. A study on Google found that ads for executive-level positions were more likely to be shown to men than to women. The technocracy era produces extreme polarisation, extending to most aspects of life: speech, personal experience, belief. This extremism recreates the bias and the will of the limited pool of people in the cockpit, a social group reproducing the white male advantage in technology design. Whatever its good faith, such a group – as it is composed today – cannot help being poorly representative of the wider population affected by its creations.
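The mechanism is mundane rather than malicious, and a minimal sketch makes it concrete. The snippet below uses synthetic data only – the group labels, the 95/5 split and the distribution shift are illustrative assumptions, not figures from the studies cited above – to show how a model trained mostly on one group tends to perform worse on an under-represented group, even though nothing in the code is “prejudiced”.

```python
# Minimal sketch with synthetic data: disparate accuracy from a skewed training set.
# Group names, shifts and proportions are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate n samples per class; `shift` moves the whole group's feature distribution."""
    X0 = rng.normal(loc=0.0 + shift, scale=1.0, size=(n, 2))  # class 0
    X1 = rng.normal(loc=2.0 + shift, scale=1.0, size=(n, 2))  # class 1
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

# Training data: 95% group A, 5% group B (the skewed "pool in the cockpit").
Xa, ya = make_group(950, shift=0.0)   # group A, well represented
Xb, yb = make_group(50, shift=1.5)    # group B, under-represented and distributed differently
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Balanced held-out samples from each group reveal the accuracy gap.
Xa_test, ya_test = make_group(500, shift=0.0)
Xb_test, yb_test = make_group(500, shift=1.5)
print("accuracy on group A:", round(model.score(Xa_test, ya_test), 2))
print("accuracy on group B:", round(model.score(Xb_test, yb_test), 2))
```

The decision boundary is fitted almost entirely to group A, so group B pays the price in error rate – the statistical analogue of a voice assistant tested only on male voices.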

We are heading towards very exclusive islands of privilege: space there will be reserved for those who have the means to navigate the technosphere with awareness, or who resemble the technology creators’ idea of the ‘average user’. A world of gated and atomised communities with no sense of the broader picture. Nothing entirely new; it is just accelerated by technology, by the very means that was hailed as an emancipatory tool. The reproduction of prejudice and extremism in the digital sphere is not just a moral problem: biased technology produces missed opportunities with a real social and economic impact. It means excluding talent from recruitment, underestimating a market, misunderstanding the target client base. In this critical moment in history, diversity can be Europe’s competitive advantage. Again, this is not a form of bourgeois respectability. A recent report from the Boston Consulting Group, based on 1,700 companies, provides interesting data on the value of diversity in business. The authors considered six dimensions of diversity: gender, age, nation of origin (employees born in a country other than the one in which the company is headquartered), career path, industry background, and education (employees’ focus of study in college or graduate school). They found that the higher the degree of diversity, the higher the company’s innovation factor, measured in terms of revenue from new products and services.

As the Cambridge Analytica case unfolds, we see just another example of how influential private tech giants are in the public sphere. We have reached a point where public pressure obliges them to put in place at least some prevention and awareness measures. AI will play a big part in this game. Cassie Robertson suggests that fixing the social problem of technology corporations demands a different kind of knowledge: “It’s not really the kind of understanding you get from doing a bit of user research to design a digital product or service”. It requires calling to arms people who have a really good understanding of society’s challenges, through having worked on them. It means involving in AI the European talents who develop alternative technologies and fight for underrepresented social groups. Diversity, in a word.

A key ingredient of global platforms’ success is leveraging a critical mass. Sustainability and revenues are guaranteed by masses of adopters, who make deliberate use of a given product or service until it becomes an essential part of their lives; at the same time, data providers (aware to varying degrees of playing that role) fuel these platforms with free insights to constantly refine and improve the offer. The two often coincide, two faces of the same data-extractive medal. Data that trains machine learning and AI. Much of the European challenge ahead is about mobilising its critical mass in a different way. At citizens’ level, that means supporting the spread and adoption of alternatives to Silicon Valley’s platforms and business models; in the control room, it means engaging in technology design a diverse set of professionals, coming from different backgrounds and representing a plurality of social groups.

Europe can be a third way between the US private data monopolies and the Chinese Panopticon, building a data sovereignty ecosystem that holds the value it generates. To become such a system, Europe will need diversity at the steering wheel. A competitive advantage we have in Europe is precisely that we are a diverse pool of experiences, traditions and nationalities: we can use our backgrounds to highlight differences, as populism would have it, or we can use those differences as a strength.
