I have to admit that, given the increasing appropriation and marketing of the “tech for good” debate by big tech, “tech for evil” approaches become very appealing. For instance, we can explore a desirable evolution of technology’s presence in our lives by speculating on the negative effects of human-computer interaction, as the CHI4Evil research workshop did recently. Sometimes the answer to such reverse-engineering efforts is straightforward: this paper concludes that the only defense against killer AI is not developing it.
“If AI systems are effective, the pressure to increase the level of assistance to the warfighter would be inevitable. Continued success would mean gradually pushing the human out of the loop, first to a supervisory role and then finally to the role of a ‘killswitch operator’ monitoring an always-on LAWS.”
Military-Armageddon flavour aside, it is quite evident that automating our decision-making processes will make them less aligned with the human notion of collective interest. We wrote about it here.
 
Meanwhile, last week 42 countries signed the OECD Principles on Artificial Intelligence, which have the merit of being the first set of intergovernmental and international policy guidelines on the hot topic of “good/trustworthy/fair/insert-keyword-here” AI. As with many documents on the topic, concrete action points are missing. To end on a positive note, here you can find a clever and well-documented reflection on what “AI for social good” could be, and how it could be linked to the sustainable development goals (so that international efforts can be better coordinated).