20. Beyond Solutionism for a Responsible Artificial Intelligence

Katherine Fletcher, OpenStax at Rice University; Alexa Hagerty, University of Cambridge; Frederique Krupa, DIGITAL Design Lab, l'école de design Nantes Atlantique; Sarah Luger, Orange Silicon Valley

Posted: January 27, 2021

We propose a post-solutionist reflection on what Responsible AI is, discussing interdisciplinary perspectives that contribute to making AI systems more fair, transparent, and inclusive. Solutionism is “recasting all complex social situations either as neat problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimized if only the right algorithms are in place!” (Morozov 2013).

Ethical problems in AI/ML have been exposed through numerous influential publications (Buolamwini & Gebru 2018; Challen et al. 2019; Parikh et al. 2019; Yapo & Weiss 2018). How can we move from a view of ethics and responsibility narrowly focused on technical definitions of bias, discrimination, and fairness in AI systems to one that considers the interconnected issues of cognitive bias, social inequality, white supremacy, intersectionality, and geopolitical and economic divides?

A robust consideration of ethics and responsibility requires that we interrogate how technologies are interwoven with our social structures, histories, and moral imaginations (Noble 2018; Benjamin 2019). We must go beyond tech solutionism to frame problems without automatically assuming that AI/ML is the best or inevitable course of action, and to fully explore fine-grained signals and our complete range of options, which can include “do nothing.”

Topics to be addressed include the power and ubiquity of automated systems in daily life, over-trust in technical solutions, and whether our current production methodologies and research approaches support or hinder just and equitable AI systems.


