
Betwixt and Between Augmentation and Automation: Some Reflections after Watching “Sully”

Phaedra Daipha

19 December 2016

One need not have seen the recently released film “Sully” to know about the Miracle on the Hudson. The January 2009 emergency landing of US Airways Flight 1549 on New York’s Hudson River has become etched in the minds of most of us, and it only takes a photograph to bring back the rush of emotions as we watched the event unfold on our screens to its felicitous conclusion, all 155 people on board miraculously safe and sound. But the film delivers a thrilling behind-the-scenes look at what transpired, one that, if not quite offering a new perspective, certainly provides much food for thought for STS scholars.

It is on the analytically evocative side of “Sully”, not its shortcomings, that I would like to focus here. Much has already been written about the film’s unfair portrayal of the National Transportation Safety Board, which, as per protocol, investigated the accident. And some aviation experts have rightly pointed out that the Miracle on the Hudson cannot be explained until we recast aircraft piloting as a human-machine interface. I have touched upon both of those issues elsewhere. Here, I want to think with the film and concentrate on the topic that serves as its poignant backdrop: the vanishing role of the human operator in high-risk decision-making environments.

Few will dispute that the role of the human operator in high-risk decision-making environments has been changing, sometimes drastically, to keep pace with the big strides in computing power and machine learning. But, as I show in my book on weather forecasting operations at the National Weather Service, keeping or bringing humans in the loop is a constant organizational struggle, and certain institutions find themselves forever locked betwixt and between augmentation and automation.

Although frustratingly oblivious to the distributed cognition of cockpit life, “Sully” excels at fleshing out the human side of risk management. We get a front-and-center view of the dizzying intensity and variety of (cognitive, emotional, material, organizational, social) cues through which decision makers must navigate to successfully carry out their job. Watching Sully at work, I was very much reminded of the kind of “disciplined improvisation” I observed during my fieldwork with operational meteorologists. Once again, it is not strict adherence to protocol but drawing on (and trusting) one’s lived, embodied experience that saves the day. What Bourdieusian and other top-down explanations of decision-making practice miss is that skill acquisition is an ongoing, purposeful pursuit by human operators. To effectively confront the inevitable messiness of concrete (routine as well as non-routine) decision-making situations, they equip themselves over time with a more or less institutionalized set of heuristics and techniques that help transform the decision-making task into a meaningfully tractable problem. Consider the often-cited quote by the real-life Chesley “Sully” Sullenberger: “...for 42 years, I've been making small, regular deposits in this bank of experience, education, and training. And on January 15 the balance was sufficient so that I could make a very large withdrawal.”

This, then, is the question raised: What does the balance of today’s airplane pilots look like? With human rather than mechanical error now the primary cause of, or contributing factor to, aviation accidents, what heuristics and techniques does the aviation industry afford pilots so that they can effectively equip themselves to confront the complexity and uncertainty of their task? Indeed, are they kept in the loop?

Importantly, whereas operational meteorologists are culturally primed to be what I have called “weather observation omnivores,” airplane pilots are primed to be information univores. Operational meteorologists have been institutionally encouraged to develop an appetite for a veritable smorgasbord of cues about the weather, while pilots (echoed by Sully in the film) regard any cue external to the predefined task as a (non-essential or essential) “distraction”. Such is the magnitude of risk and error proneness associated with operating an aircraft that the aviation industry has instituted a “sterile cockpit rule” and incorporated increasing levels of automation in aircraft design and use. Yet, while highly effective, this approach to managing complexity is no less fraught with pitfalls. The tendency, as Sully laments in the film, to take “the humanity out of the cockpit” has also translated into inadequate and unrealistic air crew training. The majority of recent accident and incident reports identify pilot complacency and lack of situational awareness as primary culprits. In an effort to mitigate this automation paradox, the aviation industry has redoubled human-in-the-loop simulation studies and training. Meanwhile, over at the National Weather Service, the specter of automation and, with it, “meteorological cancer” is looming closer once again as the agency considers changes to its operational mission and structure.

The ever-increasing task and workflow automation of decision-making infrastructures has forced complex adaptive systems to constantly reinvent the role of their human operators – indeed, to question the need for any human operators at all. Both the weather forecasting and the aviation industries struggle with this dilemma, albeit currently from opposite ends of the information omnivore-univore spectrum. It bears keeping in mind, however, that it took a skilled human as well as a skilled machine for the Miracle on the Hudson to happen. If we cannot afford to remove humans from the hot seat, then it is time we designed knowledge infrastructures that treat human judgment and decision making as an asset rather than a liability, as a distinct skill set to be nurtured and empowered rather than subordinated to the powers of the machine. Or else we cannot expect human operators to actually keep themselves in the loop.


Phaedra Daipha is a cultural sociologist working at the intersection of science and technology studies, organizational analysis, and social theory. Her research agenda centers on the nature, practice, and institutions of knowledge production, with an eye toward understanding the development and transformation of systems of expertise and the emergence of new forms of coordinated action. Her recent book, Masters of Uncertainty: Weather Forecasters and the Quest for Ground Truth, draws on several years of immersive fieldwork at the U.S. National Weather Service to develop a new framework for the process of uncertainty management. She is currently working on a book that examines uncertainty management at longer time scales based on a comparative study of hospital cardiology.

