Automating Fantasies, Past and Present


While the rhetoric around today’s big data-driven biomedical endeavors – the Precision Medicine Initiative, the 21st Century Cures Act, the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative – posits that our new technologies put us on the cusp of something entirely new, a similar feeling suffused the post-World War II US health care professions. Psychiatrists, like professionals in so many other fields, looked in wonder upon computers and began to imagine a future in which computation’s fantasized efficiency would ease the postwar rise in demand for their services. In the late 1960s, multiple psychiatric journals ran special issues dedicated to automation and psychiatry. Bernard Glueck’s opening paper on the topic in Comprehensive Psychiatry, which he had delivered as his presidential address to that year’s American Psychopathological Association convention, began with a grand description of technological progress in human affairs – space travel facilitated by satellite transmissions, the wonders of television – and of explosive growth in information, population, and automation.1

Next, Glueck considered what automation might hold for psychiatry, a particularly significant concern because explosive population growth, he claimed, had led to an increase in schizophrenia. A novel class of pharmaceuticals had allowed psychiatric patients to be released into the general population, and the new fear, according to Glueck, was that they would have children, producing what he called “genetic poisoning.” By learning from other industries where computational technologies were already improving efficiency – he cited Wall Street, airline seating, and the insurance industries – psychiatry could surely mirror their successes. A key concern for Glueck was the possibility of nefarious uses of the massive datasets being accumulated throughout various government sectors, but he assured readers this could be mitigated by carefully screening candidates for data management positions for stability. This would happen, he said, by rating certain people according to their honesty, competence, and other measures of stability, then entering that data into a program that holds up to 100 variables: “We can develop models of both superior and inferior individuals” (449). Here, late 1960s psychiatry was already dreaming of machine learning, and grounding its dream in eugenic ideologies.

Fifty years later we are faced with big data analytics haunted by these past eugenic fantasies. With big data industries’ insistence that calculative methods can increase efficiencies, flag aberrations, create “choice architectures” by which to “nudge” people toward better behavior, and solve the genetic code, big data, while no longer explicitly concerned with “genetic poisoning,” is biopolitical in its aims, working toward “making live” a populace that can fuel late capitalism’s labor force needs.2 Critical humanistic perspectives on big data – already being explored in work by Jasbir Puar, Kelly Fritsch, Nadine Ehlers and Shiloh Krupar, and myself that follows the trail of data and data rhetoric as they construct debilitated populations – view big data as inimical to projects that seek to develop alternative models of care.3

One of the Western epistemologies driving big data is that of total transparency, something that Édouard Glissant argues against.4 To combat that goal, we might need, following Glissant, to aim for strategic opacity instead.5 This could mean critical making in which particular groups coalesce into data collectives, keeping their knowledge apart from the broader ecosystem of data brokering, or in which collectives use methods developed by feminist, Black, and disability health activists to enable communities to care for themselves. Gynepunk Collective might serve as one example, as well as transgender data collectives; another might be experiments in operating systems that address the intransigent fact that Western data technology is, down to its core, implicated in what Sylvia Wynter has called the coloniality of Truth.6


1. Bernard Glueck, “Automation and Social Change,” Comprehensive Psychiatry 8, no. 6 (1967): 441–49.

2. Jasbir K. Puar, The Right to Maim: Debility, Capacity, Disability (Duke University Press, 2017).

3. Puar; Kelly Fritsch, “Gradations of Debility and Capacity: Biocapitalism and the Neoliberalization of Disability Relations,” Canadian Journal of Disability Studies 4, no. 2 (2015): 12–48; Nadine Ehlers and Shiloh Krupar, “‘When Treating Patients Like Criminals Makes Sense’: Medical Hot Spotting, Race, and Debt,” in Subprime Health: Debt and Race in U.S. Medicine (University of Minnesota Press, 2017), 31–54; Olivia Banner, Communicative Biocapitalism: The Voice of the Patient in Digital Health and the Health Humanities (University of Michigan Press, 2017).

4. Édouard Glissant, Poetics of Relation (University of Michigan Press, 1997).

5. The term “strategic opacity” comes from Tyrone S. Palmer, “‘What Feels More Than Feeling?’: Theorizing the Unthinkability of Black Affect,” Critical Ethnic Studies 3, no. 2 (2017): 31–56.

6. Sylvia Wynter, “Unsettling the Coloniality of Being/Power/Truth/Freedom: Towards the Human, after Man, Its Overrepresentation–An Argument,” CR: The New Centennial Review 3, no. 3 (2003): 257–337.