
AI’s Prediction Problem

Steve G. Hoffman

04 July, 2017

Artificial Intelligence is finding hype again. Big money has arrived from Google, Elon Musk, and the Chinese government. Global cities like Berlin, Singapore and Toronto jockey to become development hubs for application-based machine intelligence. AlphaGo’s victories over world class Go players make splashy headlines far beyond the pages of IEEE Transactions. Yet in the shadows of the feeding frenzy, a familiar specter haunts. Bill Gates and Stephen Hawking echo the worries of doomsayer futurists by fretting over the rise of superintelligent machines that might see humanity as obsolete impediments to their algorithmic optimization.

There is a familiar formula to all this. AI has long struggled with a prediction problem, careening between promises of automating human drudgery and warnings of Promethean punishment for playing the gods. Humans have been imagining, and fearing, their thinking things for a very long time. Hephaestus built humans in his metal workshop with the help of golden assistants. Early modern art and science are filled with brazen heads, automated musicians, and an infamous defecating duck. [2] The term “robot” came into popular use in the midst of European industrialization thanks to Karel Čapek’s play, Rossum’s Universal Robots, which chronicled the organized rebellion of mass-produced factory slaves. Robot, not coincidentally, derives from the Old Church Slavonic “rabota,” which means “servitude.” Overall, then, we find thinking machines in myth and artifact built to glorify gods, to explain the mystery of life, to amuse, to serve, and to punish. They were, and are, artifacts that test the limits of technical possibility but, more importantly, provide interstitial arenas wherein social and political elites work through morality, ethics, and the modalities of hierarchical domination.

Contemporary AI was launched with a gathering of mathematicians, computer engineers, and proto-cognitive scientists at the Dartmouth Summer Workshop of 1956. The workshop proposal named the field and established an expectation that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” The work that followed in the wake of this workshop institutionalized a tendency toward overconfident prediction. In 1966, workshop alum and co-founder of the MIT AI Lab, Marvin Minsky, received a summer grant to hire a first-year undergraduate student, Gerald Sussman, to solve robot vision. Sussman didn’t make the deadline. Vision turned out to be one of the most difficult challenges in AI over the next four decades. The vision expert Berthold Horn has summarized, “You’ll notice that Sussman never worked in vision again.” [3]

Expectations bring blessing and curse. Horn is among the now-senior figures in AI who believe that prediction was, and remains, a mistake for the field. He once pleaded with a colleague to stop telling reporters that robots would be cleaning their houses within five years. “You’re underestimating the time it will take,” Horn reasoned. His colleague shot back, “I don’t care. Notice that all the dates I’ve chosen were after my retirement!” [3]

Researchers at the Future of Humanity Institute at Oxford have recently stitched together a database of over 250 AI predictions offered by experts and non-experts between 1950 and 2012. Their main results inspire little confidence in the forecasting abilities of their colleagues. [1]

  1. On the whole, predictions focused on the arrival of near human-level or “general” AI are no better than random guesses. Predictions show little convergence, with arrival dates ranging over one hundred and thirty years, from 1970 to 2100. The standard deviation is 26 years (a toy illustration of this spread follows the list).
  2. Expert practitioners in AI (σ = 26 years) prove little better than non-expert observers (σ = 27 years), such as journalists, philosophers, and critics.
  3. There is a very strong tendency to predict that human-level AI will arrive within 15-25 years from the date of the prediction. This time frame is closely correlated with the time-to-retirement for the prediction maker.
  4. Those who make predictions, particularly when prediction is a central aspect of their professional identity (e.g. Raymond Kurzweil), are highly predisposed to rate their own predictions as having come true. 
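To make the dispersion figures above concrete, here is a minimal, illustrative sketch in Python. The prediction dates are invented placeholders rather than values from the Armstrong et al. database; the point is only to show what a standard deviation of arrival dates and a typical prediction horizon look like once computed.

```python
# Illustrative only: toy prediction data standing in for the FHI database [1].
# The dates below are invented for demonstration, not taken from the study.
from statistics import mean, stdev

# (year the prediction was made, predicted arrival year of human-level AI)
predictions = [
    (1960, 1978),
    (1970, 1995),
    (1985, 2005),
    (1999, 2020),
    (2005, 2030),
    (2010, 2045),
]

arrival_years = [arrival for _, arrival in predictions]
horizons = [arrival - made for made, arrival in predictions]

print(f"Spread of predicted arrival dates (sigma): {stdev(arrival_years):.1f} years")
print(f"Mean prediction horizon: {mean(horizons):.1f} years ahead of the prediction date")
```

Run over the actual database, summaries of exactly this kind produce the roughly 26-year spread and the 15-25 year horizon reported above.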

The study’s findings suggest key patterns to watch for when people make predictions about the technological future. They also suggest that prediction is a fool’s errand. Be that as it may, prediction is most popular at moments, like now, when an AI technique attracts the financial interest of commercial investors and government agencies. As was the case when the hoopla around expert systems hit a wall of commercial doubt in the early 1990s, this is unlikely to end well for most working AI scientists.

None of this is to suggest that AI hasn’t found tremendous success in recent years. Game-playing systems, in checkers, chess, Scrabble, and Jeopardy!, have long proven to be a wildly successful space for “grind techniques” that harness the sheer recall power of computer hardware. The recent success of AlphaGo is a bit of a departure, however, because the system used reinforcement learning to teach itself new strategies (combined, of course, with sheer computational horsepower). Other notable areas of success that would once have been considered a miracle of the gods include hearing aids with algorithms for filtering out ambient noise (my 7-year-old wears one!), natural language processing and translation, object recognition, and the pattern-matching systems that we take for granted on Google, Amazon, iTunes, and the like.

The field’s contribution to the stock of scientific knowledge on intelligence is far less clear than these engineering feats, a theme I treat as a professional dilemma among academic AI scientists in my article in Science, Technology, & Human Values. Here, however, I linger on the way that success in AI raises a fundamental problem of interpretation, popularly known in AI circles as Tesler’s Theorem. The computer scientist Larry Tesler, of cut/copy/paste fame, has stated, “Intelligence is whatever machines haven’t done yet.” [4] Similarly, AI founding figure John McCarthy is widely quoted as having said, “As soon as it works, no one calls it AI anymore.”

Tesler and McCarthy can’t entirely blame their frustration on the fickleness of non-specialist audiences, however. My ethnography demonstrates that AI scientists rely upon this far-reaching interpretive flexibility, both for inspiration and to attract funders. Academic AI trades on its status as an “edge science” with the outsider appeal of potential systems that exist only in theory. To remain at the leading edge, AI scientists fix their gaze toward the horizon, toward those capacities that they know computers can’t quite pull off yet. Knowing well that they cannot tackle the entire problem, researchers focus on features they can leverage while bracketing the rest with vague promissory notes. Prediction in AI, then, some of it scoped and sober, some not so much, is not only the stock-in-trade of undisciplined futurists proclaiming a coming cataclysm. It also infuses the relatively anodyne vernacular of grant proposals and lab demos.

If we think about the notion of “intelligence” less in terms of an individual cognitive agent and more in terms of distributed cognitive networks that mediate human and non-human joint activity, then AI can be seen to have surpassed “on board” human intelligence long ago. My tendency is to view this as an obvious manifestation of the long-term human settlements that arose with the Neolithic Revolution some 12,000 years ago. If we restrict the focus to computational augmentations, however, then surely we passed a key threshold in the early 1990s with the mass popularization of the internet. Few see the Mosaic web browser as the dawn of AI, but it sure was instrumental in the realization of J.C.R. Licklider’s dream of a galactic “man-computer symbiosis.”

The results of this mass mind have been decidedly mixed. The internet dramatically collapses social distance, making it an essential tool for bottom-up struggles against social exclusion and oppression. Yet it has also supercharged the Orwellian tactics of an anti-science, anti-free-press plutocrat at the highest reaches of America’s democracy. The primary content offering of our man-computer symbiosis, however, is a massive surplus of consumer advertising and hardcore pornography. As futurists worry about the future of artificial intelligence, humanity keeps producing more and more natural stupidity.

Will our future society be the inheritance of our AI-based, self-replicating, superintelligent children? A safe bet is “yes,” although catastrophic climate chaos, ocean garbage patches, and a real possibility of nuclear warfare are clearly more pressing catastrophes. Will our superintelligent children be malevolent or friendly? Inspired or vapid? Probably all of the above, just as the internet has proven. A bit of historical perspective suggests that cortical prosthetics will become as taken for granted, and put to as all-too-human uses, as wearing form-fitting jogging shoes and prescription eyeglasses. My best guess is that several aspects of the current boundary between pure/prosthetic and real/simulation will seem rather arcane in the foreseeable future. Let’s go with anywhere from 15 to 25 years. But we are not there yet.

Meanwhile, AI continues to serve as an art and a science of human reflection that bounces around between self-congratulation and self-flagellation, between a fantastical future and existential threat, serving as a vector for our collective worries about who we are and what we fear to become.

References

[1] Armstrong, Stuart, Kaj Sotala, and Seán S. ÓhÉigeartaigh. 2014. "The errors, insights and lessons of famous AI predictions–and what they mean for the future." Journal of Experimental & Theoretical Artificial Intelligence 26:317-342.

[2] Cohen, John. 1966. Human Robots in Myth and Science. London: George Allen & Unwin Ltd.

[3] Crevier, Daniel. 1993. AI: The Tumultuous History of the Search for Artificial Intelligence. New York, NY: Basic Books.

[4] Hofstadter, Douglas R. 1999 [1979]. Gödel, Escher, Bach: An Eternal Golden Braid. New York, NY: Basic Books.

Steve G. Hoffman is an assistant professor of Sociology at the University of Toronto and author of the recent ST&HV article "Managing Ambiguities at the Edge of Knowledge: Research Strategy and Artificial Intelligence Labs in an Era of Academic Capitalism." His areas of interest include social theory, science and technology studies, cultural sociology, political sociology, and comparative ethnography.

