Privacy and Political Trust in the Age of Artificial Intelligence – Interview with Carsten Ochs

Christian S. Ritter

Dr. Carsten Ochs is a lecturer and postdoctoral researcher at the Sociological Theory Department at the University of Kassel, where he is involved in the research project “Forum Privatheit – Artificial Intelligence, Privacy and the Evolution of Democracy” (funded by the German Federal Ministry of Education and Research, BMBF). Throughout his academic life, his empirical research has focused on the production and use of science and technology. In this interview, Carsten Ochs reflects on contemporary Internet cultures, Artificial Intelligence, and democracy.

Your work has focused on, among other things, the role of Internet technologies in democratic societies. In what ways does the Internet facilitate participation in political life?

The answer to this question is not straightforward, because the Internet plays different roles in different historical and local contexts. When we look at how Internet technologies and the World Wide Web were portrayed in the 1990s, we come across a certain aspiration: that the material structure of the Internet tends to have democratizing effects. Consider the ideas of the German playwright Bertolt Brecht, who argued as early as the late 1920s that radio infrastructure would foster democratic communication if only the receivers, i.e., the listeners, could really talk back (many-to-many instead of broadcasting). My impression is that we find similar ideas in early Internet studies as conducted by Howard Rheingold, Barry Wellman, Sherry Turkle, and Manuel Castells, for example. The novel digitally mediated options to connect seemed to have created plenty of opportunities to re-invent social relations and formations, politics, and subjectivities. And I do not think that they were wrong at all. They simply conceptualized what the Internet was at that time.

But in the 2000s, it became clear that the potential many-to-many structure of the Internet might just as well be haunted by network effects that effectively re-monopolize communication. I came across ideas similar to ‘network effects’ for the first time in Albert-László Barabási’s work, although the general notion, as far as I know, goes back to Robert Metcalfe. And as Shoshana Zuboff has shown, around the same time Internet companies started to learn how to use data for shaping user behavior. So today, we have filter bubbles and echo chambers on the Internet; we have a whole industry that thrives on data-based surveillance, prediction, and the selling of behavioral control mechanisms. Unfortunately, while the Internet in the 1990s served as a tool to increase the number of options for action, today it does just the opposite. Participation on the Internet, therefore, tends to occur only in reduced form: we may contribute and produce data in order to receive social agency, so to speak, but we do not have a say in the techno-economic shaping of the infrastructures themselves. This is really unfortunate, because those infrastructures, in large part, perform the sociotechnical rules of today’s (digital) communities.
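To illustrate why such network effects favor re-monopolization, consider Metcalfe’s standard heuristic (a textbook sketch, not part of Ochs’s answer): a network of n users contains n(n−1)/2 possible pairwise links, so its value v is commonly estimated as

v(n) ∝ n(n−1)/2 ≈ n²/2.

On this estimate, a platform with twice the users is roughly four times as valuable, which rewards consolidation on a single dominant network rather than competition among many smaller ones.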

So, of course, political participation via the Internet is still possible. Still, the general trend in the data economy is rather to enable non-participation, or a very weak form of participation that precludes people from participating in shaping the material infrastructure and thus from defining the rules of digital sociality. And it’s still all about shaping technology/building society. Thus, we need to regulate digital infrastructures democratically in order to enable full-blown political participation.

In a recent co-authored article, which appeared in Science, Technology, & Human Values, you examined, alongside your collaborators Barbara Büttner and Jörn Lamla, the back-end data economy of a health and fitness platform. To what extent can such data be described as private in the German context?

I do not think it is possible to classify data as ‘private’ or ‘public’ per se. My research in Germany shows that this distinction is not always useful. In fact, German jurisprudence on data and information used to be based on a kind of ‘onion’ model, acting on the assumption that data may be assigned to three different spheres: an intimate personal sphere, a closer social sphere, and a public sphere. But in the 1980s, the Federal Constitutional Court shifted away from this idea, stating that any data whatsoever may take effect in a sensitive manner. And I think that there is no disagreement in STS debates on this score. The performativity of data is what counts, so the question is not some intrinsic quality of data (e.g., their public or private ‘nature’), but who can do what with which data.

Taking this into account, I conceptualize privacy in a relational fashion: it occurs as a demarcation practice between actors who relate to each other. On the one hand, privacy serves to establish an experiential realm for an actor, a room for maneuver that offers contingent courses of action. On the other hand, to establish this room for maneuver, it is necessary to limit the way related agencies may partake in the actor’s features. Whatever data is processed at the back-end of some platform, even if it is just very general inferred data or probabilistic data based on statistics, the crucial question is this: can the data be used to generate information about actors and to steer actors’ courses of action according to data agencies’ own interests, thus closing down actors’ room for maneuver? This is precisely what happens when predictive analytics are used to make ‘users’ vote for a particular party; and from a democratic point of view, we might find this problematic, mightn’t we? So when is it, and when is it not, legitimate to steer people’s future courses of action in such a way? I’d say that this is digital societies’ central privacy question.

In what ways does your current research on artificial intelligence in political life contribute to recent STS debates?

The advancement of artificial intelligence (AI), and the way it is currently deployed in the data economy, seems to intensify the constellation that I just tried to sketch. The social sphere is transformed into a giant learning environment for Artificial Neural Networks. In a sense, we are re-configured from being ‘users’ – people who use technology – to being impulse generators for machine learning. By saying this, I do not mean to demonize AI at all. I simply want to account for the socio-technical and techno-economic constellation of cutting-edge AI. Our research aims to analyze this constellation. It forms part of a larger interdisciplinary project called Forum Privatheit, funded by the German Federal Ministry of Education and Research. In the project, we (the sociology section, including Jörn Lamla and myself) have collaborated since 2014 with computer scientists, information systems researchers, legal scholars, philosophers, psychologists, and data protection experts in order to figure out how current digital societies function. We want to identify problems (in a pragmatist sense), study them, and develop suggestions on how to deal with these problems. So our role in the project is a digital STS contribution to the analysis of surveillance capitalism, I’d say, but also to the long-standing STS debate on participation. In our current research on the ‘AI Arena’, we use Adele Clarke’s situational analysis to account for the many voices constituting the arena, but also to include implicated actors and layers of silence in the picture. We want to find methods to account for all those contributions to the arena that tend to have no discursive voice. We want to make these actors’ material and distributed participation in the arena negotiations visible, even if these contributions, or the tracing of them, are mediated via spokespersons.

What are your current and upcoming projects?

I have worked for quite some time on privacy. For my second book (called a habilitation thesis in Germany), I am developing a sociology of privacy. The manuscript is currently under review, and I hope to publish it by next year. Moreover, I hope the book helps me to obtain a full professorship, which is pretty much the only way in Germany to have a permanent position at a university. Provided that I remain within academia, my future research will focus on privacy’s seeming counterpart, the digital public. I have already explored the field to a certain degree, and I hope to come up with first results by the end of the year. Still, my research focus remains the same: understanding all the different facets of digital sociality and culture, i.e., its constituting practices, subjectivities, infrastructures, and politics.