Welcome to Ever Not Quite, a newsletter about technology and humanism. This is the second (and final, for now) part of my previous post, although there is no need to read them in order.
As always, thank you for reading.
Before I begin, I should again emphasize that, notwithstanding their decisive framing, these theses represent an experiment in thinking and remain unevenly and incompletely developed. They are offered less as a set of tenets than as a list of preliminary observations that have counted for too little in the ongoing conversation about AI.
It will also be observed that many of the following theses amount to little more than common sense. This is quite true. My aim in giving voice to what is perhaps already common is to encourage the task of thinking through some observations of which we are all capable, insofar as what we seek surpasses the narrow perspective of technical expertise and touches, if only glancingly, something of a broader human wisdom.
New technologies often fit into pre-existing narratives which situate them within a larger story or interpretive framework that carries cultural or even religious significance.
Our imaginations often run ahead of our technical competence and create images and stories which help to bring about the worldly appearance of long-foreseen and eagerly awaited technologies. When these technologies do eventually appear, they are designed to match the expectations established beforehand by the imagination. For a trivial example, think of the flip phones of fifteen years ago, which were designed to resemble the Star Trek “communicators” that first appeared on television in 1966. This observation is especially true of technologies that have been envisioned by philosophers and science fiction authors, and perhaps no technology has been more emblematic of this progression than artificial intelligence. Seldom has so much been written about a still-fictional technology as was written about artificial intelligence decades before it was capable of shaping actual human experiences.
In the twentieth century, AI found its place as a central feature of the much older project of achieving self-transcendence by means of technology. The word “transhuman”—which gave a name to the decidedly modern techno-scientific cultural movement in which artificial intelligence plays a central role—first entered the English lexicon in 1814 in a work by Henry Francis Cary: “Words may not tell of that transhuman change”, Cary wrote; the work was a translation of Dante’s Paradiso, and the word he had rendered was the Italian “trasumanar”, appearing in its first canto and signifying something like “passing beyond or exceeding human nature”. The modern transhumanist movement, which congratulates itself for having dispensed with so much theological simplicity while largely retaining the structure of the Christian narrative of redemption at the end of history, would do well to remind itself more often of the ultimately religious sources of its highest aspirations.
When we think about a new technology, we should remember to ask about its relationship to older narratives that have already informed our sense of it: What are its antecedents? What expectations does it fulfill, and what is the source of those expectations? What assumptions do those expectations encourage us to make?
New technologies make both “good” and “bad” uses equally possible, and the result is an escalatory spiral of dual-use.
The point about the moral ambiguity of technology is, I think, rather obvious, but less remarked upon is the tendency to try to prevent or ameliorate the “bad” uses of a technology with “good” uses of that same technology. This logic is perfectly captured by the credo: “The only thing that stops a bad guy with a gun is a good guy with a gun.” The argument implicit here, of course, is that the technology of the gun is fundamentally good or necessary, and that its misuse is best corrected by yet more use.
The open-ended capabilities of artificial intelligence make it an especially potent illustration of this point, because whatever it can be marshaled to do, it can also be used to undo or to prevent. In the AI discourse, this is sometimes called the “dual-use dilemma”: the output of LLMs like ChatGPT, when wielded dishonestly, might be recognizable only to another AI program trained to spot it; AI-generated deepfakes might be identified with an AI-powered deepfake detector; an AI-enabled cyberattack may be countered only by employing AI in cyberdefense. OpenAI—the research laboratory that gave the world ChatGPT—tightens the coil with its intention to rely on AI to solve problems related to AI’s own alignment with human values. Their reasoning seems to be that artificial intelligence is inevitable, that potential problems arising from its development are best solved by yet more AI, and that the escalatory spiral that results is somehow necessary: like the gun, artificial intelligence turns out to be the antidote to its own poison.
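To make the circularity concrete, here is a minimal sketch of the detection half of this loop, assuming Python with the Hugging Face transformers library; the classifier named here (an open detector trained on GPT-2 output) is purely illustrative, and detectors of this kind are notoriously unreliable against newer models:

```python
# A minimal sketch of "AI detecting AI": one model scores text that another
# model may have written. The model choice is illustrative, not an endorsement;
# detectors like this one (trained on GPT-2 output) lag behind newer LLMs,
# which is precisely the escalatory spiral described above.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="roberta-base-openai-detector",  # assumed available on the HF Hub
)

suspect_text = "Our commitment to excellence drives everything we do."
verdict = detector(suspect_text)[0]
print(f"{verdict['label']} (confidence {verdict['score']:.2f})")
```

The asymmetry is the point: each improvement in generation degrades the detector, which must then be retrained—yet more AI to counter AI.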
New technologies outlive the use for which they were designed, forming the next rung on a ladder of technological development.
Simply put, the purpose for which a technology is designed is usually of limited scope, while its power to initiate further change knows no natural limitations. It is precisely this open-ended intensification that makes the advance of technology such an effective means of improving the conditions of human life over the long term. But this mismatch is also at least partly responsible for the frenetic and accelerating pace of change in modernity.
The case of nuclear weapons exemplifies this point: developed by the Manhattan Project in order to help bring the urgent crisis of the Second World War to an end, they came at the cost of producing a world which would forever live with the risks that nuclear technologies make possible. This observation is neither an argument for nor against the decisions that were actually made under the immense pressures of war; it only serves to illustrate a sequence that has often played out in the advance of modern technology: those who develop a new technology (or a new application of an existing technology) usually have a particular purpose in mind. Most often that purpose is some sort of consumer product or service aimed at maximizing shareholder value, although the Silicon Valley startup model has tended to sidestep the imperative of short-term profits in favor of actualizing some vision of a world transformed.
As a result, few questions are asked about what a new product or service may be used to do after success has been achieved—or, insofar as they are asked, the attendant challenges are accepted as the inevitable price of progress. These questions may concern the unintended consequences that might accompany success, or what the technology (or a subsequent technology) will be used to do years, decades, or centuries afterwards. The answers are discovered only later, by those who must actually live with them, long after the champagne bottles that celebrated a successful research project or product launch have been emptied.
In the case of artificial intelligence, there seems to be wide agreement about one thing: that the shape of its future development is obscure, even to the experts who have made it their task to forge ahead with its progress. Moreover, while artificial intelligence researchers have, to their credit, shown themselves to be unusually circumspect among tech developers in asking many profound questions about the technology they are bringing into being, they still aren’t asking enough of the right questions—and that is at least partly because the public at large hasn’t forced them to.
We should think of artificial intelligence as media.1
Much of the conversation about artificial intelligence has tended to consider it as an object, becoming preoccupied with philosophical questions concerning its nature: Is consciousness substrate-independent? If so, can artificial intelligence be conscious? Could it ever be considered an artificial lifeform and demand something like the moral regard that is still the sole distinction of natural creatures? These are indeed very interesting questions, and people do and will continue to disagree about their answers. Much less prominent, however, is the conversation about how the human world is shaped by artificial intelligence in its many guises—and in particular the mediating role it plays between us and the institutions with which we deal, between us and other people, and between us and the world at large in our endeavors to know and understand it.
Of course, artificial intelligence is hardly the first media technology. Conceiving of it as media situates it in an ancient story that arguably began with the written word; indeed, we could say that one of the hallmarks of modernity is the escalation in the number of mediated as opposed to primary experiences we typically have in our daily life. The increasing use of AI by institutions—whether they are considering us for employment or for a loan, or assessing our criminality on the basis of the data points left in our wake as we move through the world—simply represents the hypermodern next step for what are by now centuries-old bureaucratic structures.
We have become accustomed to navigating the tortuous labyrinths of modern bureaucracies in search of a human before whom we might plead our case, and we also know the feeling of disappointment when, upon reaching them, we realize that they, too, are a part of the system they represent—that their interactions with us will be spoken in its language and will adhere to its protocols. We are perhaps less accustomed to encountering our familiars—colleagues, friends, family—mediated not just by means of communications technologies, but also by an artificial intelligence which actually supervises and steers the content of our correspondence. But we are coming to expect more and more that their messages to us have at least been augmented, if not entirely generated, by AI—if for no other reason than that we ourselves use AI in our interactions with them.
Large Language Models (LLMs) like ChatGPT also function in much the same way that more traditional news media does, mediating between us and the broader world and offering us an account of it—what is important in it, what is true about it, how to understand it. They can recount historical events, explain scientific theories, and summarize the arguments found in books, among many other things; in all of these cases, LLMs play the role of the expert, imposing an algorithmically polished and authoritative gloss upon a more ragged and chaotic reality.
Many have become suspicious of traditional media that interprets the world for them in ways they don’t trust—an attitude of frustration embodied by the exhortation to “do your own research”. During the Covid-19 pandemic, the competing slogan “follow the science” expressed a different, more trusting attitude towards media: given the fact that virtually nobody who used this phrase was a scientist with direct knowledge of all the research constituting “the science”, this expression endorsed the reliability—in the absence of a dependable alternative—of a mediated account of what scientific research was revealing.
As a new form of media, artificial intelligence is also another front in the media wars, and we should expect future battles to be fought over how AI represents and transmits its accounts of reality, and whose interests it is perceived to serve in doing so.
As an object of discourse and of speculation, artificial intelligence is best understood as a kind of “hyperobject”.
I am appropriating the term from Timothy Morton, for whom hyperobjects are so “massively distributed in time and space as to transcend spatiotemporal specificity”. Most relevant for my purposes, the hyperobject is nonlocal. That is, what we name with the two simple letters “AI” cannot easily be stated: it is a massive and sophisticated hybrid of manufactured materials—GPUs, fiber-optic cables, and the like—networked together to rearrange existing information into novel configurations, a hybrid that is becoming increasingly entangled with older economic, cultural, and political structures, and that presents new challenges to the religious and philosophical dimensions of human living, thinking, and knowing.
This is a way of saying that, when we talk about artificial intelligence, the conversation has no natural edges or boundaries, and there is no significant area of life into which it does not flow: it has implications for every institution and system that currently oversees the functioning of the human world, and its applications in education, medicine, science, governance, the military, and industry are as profound as they are uncertain; it is poised to intervene at every site of meaning in our lives, including how we relate to family, friends, and colleagues; it bestrides every future horizon we might wish to consider, ranging from the immediate term to the unimaginably remote, possibly altering the course of human affairs in ways that are beyond imagining; its reach extends from everyday concerns about economic pressures like job displacement to the monumental possibility of history brought to an end—either through the creation of a utopia of automated bounty, or by literally killing all humans. And although the emergence of artificial intelligence challenges us with newly urgent philosophical questions about the nature of consciousness, thought, and the soul, it ultimately remains grounded in and subject to the constraints imposed by its material substrates.
My intention here is not to overwhelm the reader by proclaiming in such an inflated register, but only to convey some sense of the immensity—in every sense of the word—of AI’s recent foray into the world. The point is that, as an object of thought and discourse, artificial intelligence humbles the mind’s reasoning capacity—our power to comprehend in the sense of the original Latin “comprehendere”, which means something like “grasping it all together”. This may sound like an overly academic or theoretical observation, but it matters because it highlights something that is increasingly true in our era of ever-more complex and massive technological systems: their magnitude (just one manifestation of which is the “interpretability” problem in artificial intelligence research) exceeds even our collective capacity to understand or control them.
More and more, the modern technological edifice acquires characteristics that once belonged to the natural or even the divine order: it appears to us as both given (when applied to modern technology, the word often used is “inevitable”) and inscrutable in its pronouncements, against which our faculties are relatively powerless, but to which we must at least partly reconcile ourselves if we wish to be at home in the world. As a result of this naturalization of what Lewis Mumford called the “Megamachine”, the technological projects of modernity are experienced less as human undertakings that contribute to a clearly articulated vision of the good life, and more as unruly forces of “nature” on which the whole human enterprise uneasily rides.
1. Thanks to David McDonald for making the importance of this point clear to me.