Welcome to Ever Not Quite, a newsletter about technology and humanism. This post grew out of my contribution to a panel on artificial intelligence at St. John’s College, Santa Fe earlier this month. Although my remarks were originally intended for that unique audience, I am posting them here with only minor modification and augmentation. A forthcoming post will expand on this list with several additional theses.
As always, thank you for reading.
This post is a diffuse list of five theses about artificial intelligence that I thought were important to gather into one place. It isn’t an exhaustive accounting of every significant consideration pertaining to the technology, but an attempt to place emphasis on features of the current conversation about AI that have garnered too little of our attention.
I should mention at the outset that I am uncomfortable with the very notion of presenting “theses”—second only to the manifesto as a format for the self-assured and presumptuous communication of ideas. But I have accepted this term because I do hope to capture some of the clarity of vision that the genre can, at its best, convey.
All “artificial intelligence” is originally human intelligence.
As the computer scientist and technologist Jaron Lanier has argued, artificial intelligence is best understood as “an innovative form of social collaboration”, not as an exotic species of nonhuman intelligence. What generative AI systems produce, though their end products may differ from any of the data that helped train them, is an elaborate coordination—itself the design of human software engineers—of a huge number of existing productions of the human mind. Ultimately, AI is human intelligence recombined to such an extent that we often no longer recognize it as human. The less we acknowledge this disguised humanity, and the more we persuade ourselves that human intelligence has managed to give life to something wholly alien, the more we will be inclined to misunderstand what is at stake in this conversation. If we keep our gaze fixed on the fundamental humanity of artificial intelligence, our conversations will tend to orbit humane considerations; if we persuade ourselves that artificial intelligence is a novel and alien lifeform, the human becomes a transitional figure, and we will soon find ourselves asking what we can do to accommodate it.
New technologies suggest metaphors that liken them to already familiar concepts, or alter our understanding of some part of our world. These metaphors shape the connections between concepts and facilitate certain conclusions.
Indeed, I would contend that the term “artificial intelligence”—coined in 1955 by the computer scientist John McCarthy in his proposal for the Dartmouth Summer Research Project—is itself best understood as a metaphor for the human intelligence which constitutes its training data and which it has been designed to mimic.
Perhaps the most important (and very much still living) example of a technological metaphor suffusing our language is the early modern interpretation of the totality of nature as a machine—something which is the product of intentional design and fabrication. The famous opening lines of Hobbes’ Leviathan of 1651 capture the mechanical metaphor as well as anything could:
Nature (the art whereby God hath made and governes the world) is by the art of man, as in many other things, so in this also imitated, that it can make an Artificial Animal. For seeing life is but a motion of Limbs, the begining whereof is in some principall part within; why may we not say, that all Automata (Engines that move themselves by springs and wheeles as doth a watch) have an artificiall life? For what is the Heart, but a Spring; and the Nerves, but so many Strings; and the Joynts, but so many Wheeles, giving motion to the whole Body, such as was intended by the Artificer?
Hobbes’ interpretation of the machine-like qualities of nature in turn facilitated a mechanical view of politics and society which contributed to the origins of the tradition of modern liberalism—a much-argued issue which I won’t pursue here. The point is that the metaphors we employ when we think about our technologies inevitably make certain conclusions easier to arrive at than others, and when we use a metaphor, we are not in control of the conclusions that others may reach by means of the language we ourselves have endorsed.
The metaphors implicit in the words used to describe what computing machines could be made to do—learning, memory, thinking—have been in use since the earliest phases of artificial intelligence research. By now, metaphors likening the human brain and mind to computer “hardware” and “software” have been a part even of the layperson’s vocabulary for decades—and not without consequence. The two metaphors that now compete to define the character of AI in our collective imagination are the creature and the tool. Our inability to decide between these two alternatives reflects a deeper indecision about whether, in building it, we have simply extended the ancient practice of toolmaking, or crossed some spiritual threshold and given birth to artificial life. I would suggest that we make a virtue of preserving our metaphors as metaphors, and resist the temptation to view them as straightforwardly descriptive of reality: let us not forget to question what we mean when we say that machines “learn”, “reason”, “understand”, or have “goals”, or when we talk about artificial “minds”, “brains”—or, yes, “intelligence”.
New technologies tend to suggest new inferences that inform how we understand them and the world in which they come to be employed.
The appearance of a new technology often suggests an argument that is not made explicitly, but which leads to a conclusion that comes to seem obvious. We don’t always follow these inferences blindly, but they do shape the conversation about the technology in question. Perhaps the most general inference that emerges along with almost every new technology is the normative assertion: “Now that we have this technology, we should use it.” The burden of persuasion now shifts to the skeptics, who previously had infeasibility on their side, but who can now only advise restraint towards a technology whose implications may still be poorly understood.
There are plenty of examples of inferences tied more closely to specific technologies. When it comes to artificial intelligence, we are often tempted by this one: “If the powers of the human mind can be surpassed by machines, then humans must be less significant—and machines more significant—than previously thought.” And another: “If artificial judgment proves more rational, more objective, and less biased, then it must be a superior substitute for human judgment.” Or yet another: “If artificial intelligence is capable of solving technical problems or even making new scientific discoveries that could not practically be made by humans working alone, then we should turn more of these processes over to AI, even when we don’t fully understand what it is doing, or how.”
We should understand the utopian scenarios made possible by artificial intelligence not as a gift, but as a trade.
Like any trade, accepting a utopian political or social arrangement made possible by artificial intelligence means giving up what it replaces or displaces. Such a forfeiture may be welcome; it may mean we can dispense with certain conditions of life as we have known it that are genuinely oppressive and diminishing. But what AI offers is so protean, and its trajectory so uncertain, that the exchange cannot be reckoned in any straightforward way. Given these ambiguities, it is far from clear that the utopian upside that artificial intelligence might make possible—a world in which human labor, or rather all human endeavor, becomes unnecessary and optional—would in fact create a world of human flourishing.
But more to the point, we are right to question the desirability of what might even look like successful outcomes of AI’s unfolding: a fully automated world of material abundance and algorithmic efficiency managed and delivered by artificial intelligence. There is no separating an imagined utopia from human disempowerment in one form or another. Our earthly condition, such as it has been given to human beings, is not an unmixed blessing: it consists of everything from our strenuous daily labor to maintain order in the world, to the never-ending political negotiations that determine the course of history—in short, everything that gives life structure and meaning. All this is the burden we bear as a condition of our agency, and we can have a utopia only to the degree that we are prepared to dispense with it. The more clearly we have this tradeoff in focus, the less seductive we will find the grandiose promises of a world automated by AI.
The task of liberal education—that is, the development of the intellect and imagination of students—is not advanced by the ability to outsource thought to artificial intelligence.
Whatever artificial intelligence can provide in the way of sheer intellectual power which may be harnessed to accomplish some task or other, it cannot be an aid to endeavors that, by their very nature, must originate and be sustained within the individual person. Liberal education has always rested on the premise that we are shaped by that to which we give our attention; its purpose is not simply to put us in possession of useful information to be called upon in the performance of some future duty or role. Liberal education, to quote the American educator Eva Brann, is “for learning to be a human being, whose powers of thought are well exercised, whose imagination is well stocked, whose will has conceived some large human purpose, and whose passions have found some fine object of love about which to crystallize.” LLMs mediate between students and the books they read, intervene in their conversations with their colleagues, and even disrupt their relationship to themselves by undermining the native power to seek truth and articulate thought by means of the written word. They assist us in none of this.
I’ll close with some words of Machiavelli, who is representative of European intellectuals of his time in looking to the tradition of classical learning for guidance in interpreting the political events that surrounded and implicated him. Here is an excerpt from his letter to Francesco Vettori of 10 December 1513. Do we recognize the state of absorption Machiavelli describes in this passage, or does it sound to us more like a vestige of an older technological configuration?
On the coming of evening, I return to my house and enter my study; and at the door I take off the day’s clothing, covered with mud and dust, and put on garments regal and courtly; and reclothed appropriately, I enter the ancient courts of ancient men, where, received by them with affection, I feed on that food which only is mine and which I was born for, where I am not ashamed to speak with them and to ask them the reason for their actions; and they in their kindness answer me; and for four hours of time I do not feel boredom, I forget every trouble, I do not dread poverty, I am not frightened by death; entirely I give myself over to them.