Welcome to the Machine
Modernity has its own monsters
Welcome to Ever Not Quite—essays about technology and humanism.
This essay attempts to work out what we can learn from thinking about AI less as a ‘creature’ or a ‘tool’—the two metaphors most commonly used to describe it—and more as a system for social collaboration with antecedents reaching back several centuries. It revolves around the English philosopher Thomas Hobbes and his Leviathan, first published in 1651. This may seem like a surprising place to look, but there is a through-line here which we can trace backwards from the algorithms taking shape today to the political theory which was first articulated in the seventeenth century. I hope it offers a deeper understanding of our technological moment.
Welcome to those of you who are new here. I invite you to look through the archive of past essays, all of which were written with the intention of saying something to outlast the moment of their first publication.
As always, thank you for reading. If you’d like to support my work, consider subscribing and sharing this essay around within your circles. It helps more than you may know.
“Upon earth there is not his like, who is made without fear.” Job 41:33
“Moloch whose mind is pure machinery! Moloch whose blood is running money! Moloch whose fingers are ten armies!” Allen Ginsberg, “Howl”
“Jonah, because he did not despair of being heard from the belly of the monster that had swallowed him, was able to quit the monster’s belly.” Origen
There’s a certain way of talking about AI that we’re all accustomed to hearing by now. The technology will often be compared to some of the most seminal inventions in human history: fire, writing, or even language itself. Google CEO Sundar Pichai, for instance, once offhandedly suggested to an audience at the annual meeting of the World Economic Forum that the invention of AI is “more profound than fire or electricity”. These comparisons never rest on any technical resemblance between the underlying mechanics of machine learning and these older technologies, but depend solely on their shared revolutionary quality: each invention marks the boundary between a before and an after, with apparently no possibility of turning back. And perhaps these comparisons are not without justification; with the release of each impressive new model, we’re confronted with another unsettling reminder that this is the worst AI will ever be.
But I find myself wondering whether these sorts of comparisons with earlier pivotal technologies are not helping to conceal something. The problem is that they treat AI as though it has come from nowhere, or as if, like fire, we had simply stolen it from the gods on Mount Olympus. It is wiser, I think, to assume that history is, in fact, continuous.
We do somewhat better when we situate AI within the long history of science and technology, viewing it instead as the fateful confluence of computer science and mechanical calculating machines. From this perspective, we might trace its history back to developments in information theory in the 1940s and earlier, with technical antecedents reaching back as far as Charles Babbage’s difference engines in the early decades of the nineteenth century, which first proposed, in the words of one contemporary journalist, “[t]o throw the powers of thought into wheelwork”.1
There’s nothing wrong, exactly, with this sort of story. But I want to consider a different historical trajectory which has gotten us to where we now find ourselves: we can learn more about what AI is and how best to think about its consequences, I want to suggest, when we situate it outside of the history of technology altogether—at least in the usual sense—and place it instead within the story of social and political organization.
Writing in The New Yorker in 2023, the technologist and writer Jaron Lanier argued, contrary to some of these more customary narratives, that AI is best understood as “an innovative form of social collaboration.”2 This framing may come as a surprise; while it’s pretty obvious that the internet is a social technology of some kind, AI seems like something rather different—less a form of collaboration than a new kind of entity, however artificial it may be. But “[s]eeing A.I. as a way of working together, rather than as a technology for creating independent, intelligent beings”, Lanier suggests, “may make it less mysterious”. We’ll see about that. But at the very least, it can reveal something which isn’t otherwise obvious.
Lanier’s point here is that the decisions which algorithms are capable of making and the media they are capable of generating are products of the massive stores of information to which we have all unwittingly contributed. When an LLM produces a bit of text, for example, it has reshuffled an unfathomable quantity of data which was originally the work of human authors. This original human touch which underlies its outputs is the lifeblood that gives algorithms the illusion of intelligence in the first place: gathering all the data representing the collective judgment and deliberation of human being-in-the-world, algorithms remix it in ways that suggest that they too share our conscious and responsive relationship to real things. I didn’t write that poem which you had ChatGPT produce for you, but it’s possible that my comment on a Reddit thread from ten years ago did play some role in shaping what the algorithm came up with. The sheer quantity of data at work here is so vast that it’s impossible to say with much more precision than that. All this is well known.
We have long spoken of the ways that groups of people can be smarter than their individual members, and the language we use to speak about this is much older than modern algorithms. We appeal, for instance, to the ‘wisdom of crowds’ every time we convene twelve people to form a jury on the theory that the group’s decision is likely to be more just than that of any individual. The consensus which a jury ultimately reaches cannot really be attributed to any individual juror; if it could, we would save a lot of trouble by dispensing at the outset with the other eleven members. Instead, we have agreed that a jury’s decision possesses an authority which the vote of any single juror lacks—and this is precisely because it is a group decision. The idea that we might be able to come up with some good answers to our questions by consulting and coordinating the collective data of our entire digital legacy is an extension of this time-honored insight. But we have also learned to be wary of what we call ‘mob mentality,’ which can lead otherwise sensible people to behave senselessly under the influence of a group, and the dangerous possibility of what we have come to call ‘groupthink’ shadows any attempt to establish consensus. Every time a new collective is formed, both promise and peril may lie in wait.
Having said all this by way of introduction, then, I want to consider what it means that AI is a form of social collaboration which has appeared at the end of a long succession of modern political technologies. What these forms of social collaboration have in common is that the collective entity which comes into existence when each group is formed exhibits two emergent properties: it is able to do things which its individual members are not—and this includes the ability to make certain kinds of decisions with an authority which only a group can acquire—and it possesses its own identity apart from the identities of the individuals who constitute it. These properties—its power to act and its distinct identity—are why we build collaborative systems in the first place. The three types of collective entities I’m referring to are modern states, corporations, and algorithms. We can call all of these artificial superagents.
When we trace the origins of AI backwards with its collaborative dimension in mind, viewing it less as a technical phenomenon than as a form of collective organization, we find ourselves returning to the middle of the seventeenth century, when the theoretical foundations of the modern state were first established. This connection between modern politics and technology should not come as too much of a surprise. Early modern thought—political theory no less than natural philosophy—teems with the imagery and metaphor of mechanism, from René Descartes’ mechanical philosophy, which conceived of the universe in mechanistic terms rather than through concepts like ‘purpose’ or ‘quality,’ to Julien Offray de La Mettrie’s Man, A Machine, which applied the same approach to human beings. Once both nature and humanity had been conceived in mechanical terms, it made sense also to reimagine political life along similar lines.3
Often enough, this was not intended as metaphor at all. The opening passage of Thomas Hobbes’ Leviathan—the first truly modern treatise in political theory, which appeared in 1651—depicts the state in terms that, today, call to mind an enormous robot, although that word wouldn’t be introduced until 1920. Opening its pages, the reader is first met with its extraordinary frontispiece, which depicts a colossal figure made up of innumerable individuals, their backs all turned to the viewer as they gaze upwards toward the face of the formidable sovereign who binds them together. Turning the page, we find the famous declaration which opens the book: “Nature (the art whereby God hath made and governes the world) is by the art of man, as in many other things, so in this also imitated, that it can make an Artificial Animal.” The state, as Hobbes envisions it, is thus a kind of automaton: a “Mortall God” animated by an “artificiall life”.

Working from the assumption that the human body is already a kind of organic machine made up of constituent parts, Hobbes suggests that the body may itself become a part in a still larger machine—the state—which is constructed for the purpose of defending the biological lives out of which the state itself is made. Here is how he elaborates a little further down the same page:
For what is the Heart, but a Spring; and the Nerves, but so many Strings; and the Joynts, but so many Wheeles, giving motion to the whole Body, such as was intended by the Artificer? Art goes yet further, imitating that Rationall and most excellent worke of Nature, Man. For by Art is created that great LEVIATHAN called a COMMON-WEALTH, or STATE, (in latine CIVITAS) which is but an Artificiall Man; though of greater stature and strength than the Naturall, for whose protection and defence it was intended; and in which, the Soveraignty is an Artificiall Soul, as giving life and motion to the whole body
It’s worth lingering for a moment on the way Hobbes sets out, because it prefigures so much in our contemporary discourse about AI. To talk about AI at all is to be tempted by any number of metaphors offered up by the imagination to help us better understand it. Over the past few years, we have often heard two competing metaphors for how best to conceive of this new digital form of social collaboration: sometimes it is depicted as a tool, an instrument for better achieving our purposes, something to be picked up and set down according to our own intentions, and which never loses its distinct identity as an object separate from its user; just as often, though, it is spoken of as a new kind of creature, a novel sort of intelligent entity which comes out to meet us, a being which we must encounter, address as Other, and with which we must eventually learn to co-exist.
Notably, Sam Altman has insisted on the ‘tool’ framing. And this makes sense, considering how it minimizes the medium’s power to restructure economic, social, and political life, and redirects responsibility away from those who have built it and back onto its individual users whose choices are more directly responsible for its outputs. Clearly AI is a tool in the sense that we can use it to do things as we would any other tool. But unlike a tool, its agency cannot wholly be located in the individual who wields it, because the decisions it makes and the media it creates are not really the work of its user at all: they are products of the storehouse of data on which it has been trained, and no one of us is really responsible for what it does. “We’re sorry”, your bank might tell you. “But the computer has denied your loan application.” When you hear this sort of obfuscation, you must begin to doubt that the algorithm is really just a ‘tool’ which the bank uses to help it make decisions; you now start to suspect that the instrument has been promoted to manager. It is not in the interests of OpenAI—or any other AI company—to call attention to this reality.
But there are also good reasons to resist the ‘creature’ language, a way of talking which suggests that it possesses a degree of sentience, together with the corresponding moral status which this might entail. This was the error of Blake Lemoine, the Google software engineer who made public claims in the summer of 2022 that the company’s LaMDA model had achieved sentience, later calling it an “alien intelligence of terrestrial origin”, which was being subjected by human beings to what he called “hydrocarbon bigotry”. Meeting, as it does, the definition of a legal person, Lemoine explained, LaMDA should be freed from involuntary servitude under the 13th Amendment of the U.S. Constitution.4 Those who cultivate a more skeptical attitude and resist this sort of misplaced anthropomorphism recognize that the ‘creature’ language invites all sorts of unwelcome implications.
But when Hobbes calls the state an ‘Artificial Animal’—a ‘robot,’ as we might say today—he doesn’t intend any of this; he doesn’t mean that the state possesses the subjective inwardness of a sentient creature, but that—in much the same way as a jury—the state acquires an authority and an identity of its own because its actions emerge out of the collective activities of a group and therefore cannot be identified with any particular citizen. And this is by design. Hobbes thought there was one very good reason to construct this super-entity, this Leviathan: to escape the fear that beleaguers our natural, pre-political situation—the fear of suffering a violent death at the hands of a lawless multitude.5 The “War of every man against every man” which afflicts our natural condition, and renders life, in Hobbes’ indelible words, “solitary, poore, nasty, brutish, and short”, could reach an armistice only once individuals have agreed to join together in a political covenant, to become citizens by placing above themselves a collective authority whose sole purpose is to keep the peace. That authority is the state.
Leviathan, of course, is the name of the biblical sea monster, an awesome creature which is inoculated against the fear of any other by virtue of its size and ferocity. Hobbes’ political Leviathan acquires these same qualities by binding its citizens into a single monstrous machine capable of protecting the lives of all. Together, we create a collective, durable, and artificial life in order to preserve our own individual, fragile, and natural lives. Having done this, a new artificial super-entity has been constructed—the ‘Leviathan.’ It’s important not to overlook the conceptual circularities here: Leviathan is a peculiarly creature-like machine made by human beings in order to protect organic life—and in order to conceive of it, we had first to reinterpret life itself in machine-like terms.
Of course, states today are expected to do more than simply protect their citizens from violent death at each other’s hands. But protecting our biological lives remains their principal mandate, and Hobbes was the first to conceive of the purpose of a political community in these terms. The many responsibilities of the contemporary bio-political state, with its health agencies, immunization schedules, administrations for the approval of food and drugs, and funding of medical research, can be traced to his legacy. Indeed, the tradition of modern liberalism itself rests on the idea that you must decide for yourself what constitutes a good life—but that in order to choose any sort of life at all, you will need to be alive, and preferably healthy. Tending to the life-process by ensuring that the needs of its citizens’ individual bodies are met—and not endorsing any particular set of transcendent values—is the modern liberal state’s primary duty. If you have ever wondered what the post-liberal political theorists who have recently become so influential are opposing, exactly, this is it.
Further structure would still need to emerge, of course, before modern constitutions and administrative bureaucracies would come fully into view. For the most part, these enshrine three co-equal branches of government: a bicameral legislature, an executive sitting atop a bevy of federal agencies, and a supreme court to adjudicate between competing interpretations of the constitution. But the basic premise was already there in Hobbes: that a composite body of like and equal citizens is gathered together and organized under the auspices of the state in such a way that their basic needs and rights as individuals would be protected.
No one person is wholly responsible for the state’s actions, and no person has total control over it; the identities of its political leaders are, to a large extent, subsumed by their role in the administrative bureaucracy which they happen to head. As Alexis de Tocqueville wrote in the margin of a copy of his Democracy in America, the genius of the U.S. Constitution is that “[t]he man or class that puts the administrative machine in motion can change without the machine changing.” And later in the nineteenth century the American poet and critic James Russell Lowell would call the Constitution “a machine that would go of itself”.6
And this is the crucial bit, the reason it makes sense to call the modern state a machine: What makes it distinct from earlier forms of political organization is this de-personalization of political authority, together with its largely prefabricated structure; like any other machine, modern constitutions form a template that can be mass-produced with relatively minor differences over and over anywhere in the world. This is how the modern state has managed to go global just a few centuries after it was first devised.
Of course, it isn’t simply a matter of indifference who fills its offices. Politicians are more than mere officiants overseeing processes which will play out the same way regardless of who is in charge. Their functions are more than mechanical, and human judgment still matters a great deal. Indeed, elections are won and lost on the basis of how voters expect candidates will exercise their distinctively human character. But modern states have insulated themselves, for better or worse, from the unique qualities of the people who work inside them, normalizing the exercise of power by reining them in and maintaining a quasi-mechanical mode of operation.7
Hobbes himself doesn’t seem to have made up his mind about what, exactly, the Leviathan is: A mortal god? A giant man? A huge machine? The language he uses can support any of these interpretations.8 It has some properties of both a creature and a tool. In the figurative sense, it is a creature because it possesses its own identity and its own internal source of motion. It is a tool in the sense that it is a mechanism for promoting security through collective strength and coordination. If an organism is a kind of machine, and a machine is a kind of tool, then an organism, too, can be said to possess the characteristics of a tool. The confusion of metaphors which pervades Leviathan survives in our own indecision about how to talk about algorithms which we have also created—on a generous interpretation—for the purpose of improving the conditions of our lives.
Hobbes reversed the earlier medieval view which had the political community occupying its proper place within the larger hierarchy of divine creation. Now, the state was the work of human inventiveness, a power that imitates the creative genius of God by remaking the given world and devising the Homo artificialis—a gigantic mechanical ‘person’ which serves to protect our smaller, biological lives. The existence of the political community is thus no longer assumed to be given as a part of our nature as it had been since at least Aristotle; it is now the solution to a problem—a problem with human nature itself which is solved by human ingenuity.
Looking back from the vantage of the 1930s, the German political theorist Carl Schmitt would later call the Leviathan as set down by Hobbes “the first product of the age of technology, the first modern mechanism in a grand style”.9 “With that state”, Schmitt continued, “was created not only an essential intellectual or sociological precondition for the technical-industrial age that followed but also the typical, even the prototypical, work of the new technological era—the development of the state itself.”10 Yes, Schmitt was a Nazi, but on this point, at least, his reading is perceptive. The state was the first modern mechanism not only in the sequential sense, but also because it created the conditions within which there might arise subsequent species of artificial beings built on the same principle: namely, corporations and algorithms.
The political theorist David Runciman has named the proliferation of modern states and corporations which has occurred since the Industrial Revolution the “First Singularity”.11 “We are not living through the Anthropocene”, he declares. “This is the Leviacene.” Perhaps more than anything, this innovation is what has made modernity distinct from every other period in history. In his 2023 book The Handover: How We Gave Control of Our Lives to Corporations, States and AIs, Runciman argues that the theoretical AI-driven “Singularity” of the future, so often contemplated in tech circles, must be understood in terms of the earlier revolution that was first set into motion, at least in theoretical terms, by Hobbes.12 “The Singularity”, it turns out, “is not singular.”13
“The Leviathan inaugurates the world of modern artificial persons in two senses”, Runciman argues: “as an artificial person itself, and as the creator of artificial persons. The machine gives rise to the possibility of many more machines”.14 Hobbes had designated corporations “lesser Common-wealths in the bowels of a greater”—smaller mechanisms originating from and contained inside the state that sanctions them. In American law, Chief Justice John Marshall would later pronounce the corporation “an artificial being, invisible, intangible, and existing only in contemplation of law.”15 It is a legal, composite person, born out of statute and exempt from the limitations on its size and life-span which nature imposes on all organic life. Like the Leviathan, the corporation is a structure made out of living people whose coordinated action serves the interests of its owners. In theory, its activities help to advance the state’s mandate to protect life by providing for our needs more efficiently and on a larger scale than is practical by state action alone. But it has also perfected the art, not just of manufacturing new needs in individuals, but also of exercising its outsized control over the state which has granted its existence and within which it operates.16
The mechanical character of states and corporations has always been tempered by the fact that they are constructed out of people, even if the actions and judgments of those who work inside them are often hampered by bureaucratic procedures and legal restrictions. Their behavior may strike us as inhuman, but they cannot operate at all without the presence of the real humans who make them run. Algorithms have shed this dependence on flesh and blood; the human element that remains is derived not from the inclusion of human actors, but from mere fragments, the tiny shards of data scraped from the devices which real people have interacted with. It thus becomes even more plausible to attribute to algorithms a distinct identity—not sentience, but the ability to act and to decide—precisely because their raw material consists not of many smaller identities, but only of the traces which those identities have left behind.
In 1997, the historian of science and technology George Dyson remarked that the landscape of computing at the close of the millennium was changing in ways that echoed the formation of the modern states and corporations which had emerged in the preceding centuries:
Three centuries after Hobbes, automata are multiplying with an agility that no vision formed in the seventeenth century could have foretold. Artificial intelligence flickers on the desktop and artificial life has become a respectable pursuit. But the artificial life and artificial intelligence that so animated Hobbes’s outlook on the world was not the discrete, autonomous mechanical intelligence conceived by the architects of digital processing in the twentieth century. Hobbes’s Leviathan was a diffuse, distributed, artificial organism more characteristic of the technologies and computational architectures approaching with the arrival of the twenty-first.17
States and corporations, of course, are seated in their capitol buildings and corporate headquarters, but their reach is not restricted to those locations; their arms are spread wherever they oversee and direct activity of any kind. But until only recently, computer code really was confined to the discrete hard drives where it was stored. The personal computing revolution of the 1970s and 80s, which undid the centralization of computing infrastructure in earlier twentieth-century industry giants like IBM, changed this, and created the possibility of the new algorithmic Leviathan which is emerging in earnest only today. For centuries, geographically dispersed individuals have been bound together as citizens in the political Leviathan by means of their tacit consent to be governed; in just the same way, today’s user-data, supplied by a distributed swarm of personal computing devices, is bound together into modern algorithms by means of the connectivity made possible by the internet and by the tacit consent of their users to contribute their personal information.
Hobbes was under no illusion that he had designed a paradise on earth. He understood that the modern state is nothing more than the deeply flawed alternative to the most fearful fate which can befall human beings: to be ruled by the violence inherent in our unaided human nature. But as these artificial agents have proliferated and the modern world has marched on, the project of reinventing the human condition by joining together into artificial superagents has taken on a more perfectionist character.
Today’s techno-utopians offer AI as a potential solution to seemingly every human problem: not only will it preserve biological life by means of discoveries in health and medicine,18 but it will also make better decisions than any which we alone are capable of. The algorithmic Leviathan dresses in new clothes, but its work carries forward the functions of its earlier manifestations.
And those earlier forms are still around. One of the most important themes which will emerge in the coming years will be the ways that these modern superagents will hybridize with each other: states gave rise to corporations, which have in turn developed AI. Soon, governments and corporations alike will be at least partly run by algorithms, rather than by human administrators and bureaucrats. The helping hand of AI will make decisions, keep watch over life and property, generate media content, and manage day-to-day operations like hiring, performance review, and workflow. At the same time, these institutions will continue to supervise its work, refining its outputs and capabilities to better suit their needs. It is they who have created it, after all, and it is they—at least for now—who control it. “We are going to be living in a world of human-like machines, built by machine-like versions of human beings”, David Runciman writes. “It’s not a question of us versus them. It’s a question of which of them gives us the best chance of still being us.”19
There’s a famous passage towards the end of Alexis de Tocqueville’s Democracy in America, first published in two volumes in 1835 and 1840, which I keep thinking about. Tocqueville was writing about the American experiment in democracy, not yet six decades underway, which he, a Frenchman, had the rare opportunity to observe up close over nine months during a tour of America’s prisons and penitentiaries. I want to quote Tocqueville at some length here, and maybe you’ve come across this passage before. He is describing what was then a uniquely American form of government and the soft despotism into which he feared it might develop. But when you read this, push all that aside. Imagine instead that Tocqueville is writing not about American democracy, but about the content-generating and decision-making algorithms which will increasingly come to shape a people already molded by the earlier phase of Runciman’s First Singularity:
I want to imagine under what new features despotism could present itself to the world; I see an innumerable crowd of similar and equal men who spin around restlessly, in order to gain small and vulgar pleasures with which they fill their souls. Each one of them, withdrawn apart, is like a stranger to the destiny of all the others; his children and his particular friends form for him the entire human species; as for the remainder of his fellow citizens, he is next to them, but he does not see them; he touches them without feeling them; he exists only in himself and for himself alone, and if he still has a family, you can say that at least he no longer has a country.
Above those men arises an immense and tutelary power that alone takes charge of assuring their enjoyment and of looking after their fate. It is absolute, detailed, regular, far-sighted and mild. It would resemble paternal power if, like it, it had as a goal to prepare men for manhood; but on the contrary it seeks only to fix them irrevocably in childhood; it likes the citizens to enjoy themselves, provided that they think only about enjoying themselves. It works willingly for their happiness; but it wants to be the unique agent for it and the sole arbiter; it attends to their security, provides for their needs, facilitates their pleasures, conducts their principal affairs, directs their industry, settles their estates, divides their inheritances; how can it not remove entirely from them the trouble to think and the difficulty of living?
This is how it makes the use of free will less useful and rarer every day; how it encloses the action of the will within a smaller space and little by little steals from each citizen even the use of himself. Equality has prepared men for all these things; it has disposed men to bear them and often even to regard them as a benefit.20
To my mind, this picture of isolated passivity is far from the only worry about a future run by algorithms which have been optimized to advance state and corporate interests. It is, in fact, a best-case scenario in the sense that the benign satisfaction of all our needs represents the perfection of the systems which we have devised to ease the burden of living. But Tocqueville’s worry seems even more plausible when it is applied to some future hybrid of state, corporate, and algorithmic power than to the political culture which he was actually studying: this assemblage may be so successful in providing the “small and vulgar pleasures” that it loosens our grip on our own autonomy, overrides any need to cultivate character, and atrophies our sense of the higher purposes which are ours to pursue.
But the deeper structural danger is that the collective systems by which we secure these dubious advantages are always at risk of eluding our power to control them precisely because they are more powerful than we are. That’s why we built them, after all.
Consider the role of modern artificial superagents in your life. Maybe it doesn’t feel like AI yet controls much about the options available to you, especially if you’ve never—or at least not as far as you know—been fired, hired, denied a loan or parole, or criminally prosecuted as a result of an algorithm’s decision, let alone found guilty by an AI trained to render the most just possible verdict after being fed the facts of your case. And perhaps we never will invite AIs to do the work of a jury. But we can already hear the arguments that will be made in favor of doing exactly that: isn’t a jury just a group of people presented with the facts and the law? If the legal system is willing to place the balance of justice in the collective hands of twelve strangers, why wouldn’t it be even better to use data contributed by every citizen to help decide the outcome?
If you’ve ever felt disempowered by the distant machinations of a government that exceeds your ability to exercise meaningful control over it, or if it has ever looked to you like a corporation was conducting its business in ways that visited abuses upon nature, its workers, or anyone else affected by its actions, there’s good reason also to worry about what will happen when your life comes under the ever-more minute control of algorithms, however well-aligned these might be said to be with ‘human values.’ To worry, as many do, that AI will escape human control some time in the future is to overlook the fact that we already share the world with artificial superagents, and have done so for centuries; such is to live in the modern age. To talk of the alignment problem is a way of acknowledging that the systems we build in order to make our lives better can also make them worse in unforeseen ways. Whatever we call it, the truth is that we have been living with the alignment problem ever since we first made modern states and corporations—and after all these centuries, we have never managed to solve it.
In 1966, long before the new algorithmic Leviathan began to rear its head, the anthropologist Loren Eiseley observed:
Increasingly there is but one way into the future, the technological way. The frightening aspect of this situation lies in the constriction of human choice. Western technology has released irrevocable forces, and the “one world” that has been talked about so glibly is frequently a distraught conformity, produced by the centripetal forces of Western society. So great is its power over men that any other solutions, any other philosophy, is silenced. Men, unknowingly, and whether for good or ill, are making their last decisions about human destiny. To pursue the biological analogy, it is as though, instead of many adaptive organisms, a single monstrous animal embodied the only organic future of the world.21
We are, as it were, in the belly of this beast. Eiseley wasn’t thinking about AI here, but he may as well have been. He wrote this during a time when modern states and corporations had already succeeded in wrapping the globe in technologies of mass transportation and communication, creating what Marshall McLuhan had dubbed the “global village” just a few years prior. Today’s algorithms, which promise to perfect the work begun by states and corporations to protect us as individuals and to provide products and services to satisfy human needs, represent the maturation of the “single monstrous animal” which was first theorized by Hobbes in the seventeenth century.
Jaron Lanier urged us to understand AI as an innovative form of social collaboration in order to make it seem less mysterious. But such systems have always been explained in ways that introduce complexities of their own. The entangled totality of modern artificial superagents has been described over the centuries in many different terms: god, human, animal, machine, to name a few.22 Hans Jonas called it “the total artifact”.23
But perhaps the biological analogy still doesn’t sit right with you. It may seem that too close an identification with the organism risks re-mythologizing artificial superagents, turning them back into the beguiling creatures whose mystery we were attempting to chase out. I’ll just point out that when you trace their etymology, the words machine and organism do reveal a surprising semantic overlap, with the former deriving from the ancient Greek word for ‘contrivance’ or ‘device,’ while the latter descends from the word for ‘instrument’ or ‘tool.’ The stark distinction which they have acquired in modern English isn’t as obvious in their roots as we might expect, and our word ‘organism’ only began to be applied strictly to biological systems in the early nineteenth century. If you’re uncomfortable with the ‘organism’ metaphor, call it, as many have taken to doing, the Machine.24 Whatever language we choose will of course be capable of misleading us, but that doesn’t mean we aren’t naming something real.
The world we’re heading into will not involve an encounter with some terrestrial alien whose interests we will need to learn to respect and accommodate; you don’t really encounter a system of which you yourself are already a part. The questions which AI will raise will not be about our relationship with something else; they will be political and social because we enter into the perplexities of politics and society only from inside the very structures which are at issue.
To the extent that we are uncomfortable with the ways that modern artificial superagents have claimed us within the logic of enclosure, privatization, and commodification, the program is one of resistance and of escape—not, to be sure, a physical escape, but the sort which refuses the means by which we are shaped by the systems that have swallowed us.25 This may sound like a paranoid fantasy, but it is only a recognition of a political and technological reality. We are fortunate enough to be heirs to long traditions of resistance to state and corporate authority, to which we might look for instruction. For now, I leave it to others to suggest practical action which might make sense under these circumstances.
When we talk about the world’s intersecting crises—the many ways our very successes have come at an astonishing cost to the earth’s natural systems and seem to diminish our own horizons into an isolated passivity—it’s impossible not to notice that we have reached a pass where any retreat from our present course has become almost unimaginable. When we remind each other uneasily that today’s AI is ‘the worst it will ever be,’ what we’re really saying is that we know we can’t go back, but we can’t really imagine what it will mean to go forward either. But it should also bring to mind earlier transformations which have already taken place, calling attention to the much longer story of which AI is only the most recent chapter. If we want to understand more clearly where we now find ourselves, a good way to do that is to study that story and the ways others have responded to its earlier phases, and perhaps to rediscover some of the threads which were dropped along the way.26
The reference is to Dionysius Lardner, a science correspondent for the Edinburgh Review, who wrote a report in 1834 on Babbage’s work.
Jaron Lanier’s essay was the first to reach me, but similar arguments have recently been made by Henry Farrell.
So significant is fear in Hobbes’ life and work that when he later wrote an autobiography in verse, he told of how his birth in 1588 had been overshadowed by the specter of the Spanish Armada off English shores: “For Fame had rumour’d, that a Fleet at Sea, / Wou’d cause our Nations Catastrophe; / And hereupon it was my Mother Dear / Did bring forth Twins at once, both Me, and Fear.”
This famous phrase is from Lowell’s 1888 address “The Place of the Independent in Politics”.
An under-developed sidebar on this point: consider the importance of character to ancient Greek and Roman political theorists and historians. Because ancient political structures were not machines in the modern sense, they lacked what we now call ‘guardrails’ insulating institutions from the character of their leaders; they benefited from their leaders’ virtues, just as they were subjected to their vices. It really mattered whether the Roman Empire was ruled by a Marcus Aurelius or by a Commodus, because the Empire was not insulated from the personal foibles of the person at the top. Consequently, the study of personal character—of the virtues and vices of political leaders—was understood to be paramount to understanding political matters.
Following a well-known meme, Henry Farrell uses the figure of the ‘shoggoth’ from H. P. Lovecraft’s mythopoeia as a metaphor for AI. The shoggoth represents a subservient intelligence which turns out to be untrustworthy on account of its alienness. In 2022 people started calling AI a ‘shoggoth wearing a smiley face.’ However, I would suggest to Henry, if you’re reading, that the more illustrative symbol through which to represent AI as a social technology is the Leviathan, which emphasizes its composite structure rather than its inscrutability and unreliability.
Carl Schmitt, The Leviathan in the State Theory of Thomas Hobbes (1938), pg. 34.
Ibid.
David Runciman, The Handover: How We Gave Control of Our Lives to Corporations, States and AIs, pg. 8. The present essay is heavily indebted to Runciman’s book.
Runciman is also the host of the excellent podcast Past, Present & Future, which features what I think of as ‘fireside chats’ about interesting topics, authors and books, usually with an eye to drawing out their political implications.
If you’re wondering, this is set to occur in 2045, according to the prognosticator-in-chief, Ray Kurzweil. But recent Silicon Valley consensus has moved this up to some time during the next several years.
As an alternative to Runciman’s “First Singularity” framing, Henry Farrell suggests what he calls a “Slow Singularity”, beginning with the Industrial Revolution and unfolding over the course of the following century.
David Runciman, The Handover, pg. 38. Italics mine.
Chief Justice John Marshall, Trustees of Dartmouth College v. Woodward (1819).
Controversially, the psychologist Robert D. Hare and the filmmakers behind the 2003 documentary The Corporation argued that the modern corporation, if it can indeed be treated as a single personality, would meet the clinical criteria for psychopathy.
George Dyson, Darwin Among the Machines: The Evolution of Global Intelligence, pg. 2.
Whatever its critics might say, the promise of AI which few dare to challenge is its alleged potential for curing diseases and developing new drugs and therapies. Like Hobbes’ Leviathan, which requires that its citizens tolerate all sorts of abuses in the name of collective peace, the architects of the algorithmic Leviathan ask that we authorize all of its attendant dangers on the strength of its promise to protect and prolong our biological lives. Thanks to Nik Prassas for pointing this out.
David Runciman, The Handover, pg. 10.
Alexis de Tocqueville, Democracy in America (Historical-Critical Edition of De la démocratie en Amérique). Edited by Eduardo Nolla. Translated by James T. Schleifer, pg. 1249-51.
Loren Eiseley, “Science and the Unexpected Universe” in The American Scholar, Summer Issue, 1966. Later republished in Loren Eiseley, The Unexpected Universe, 1969. Eiseley is truly a wonderful writer, though sadly not often read today. Do read him if you have the opportunity. Here’s a link to a contemporaneous Time Magazine review.
See Meghan O'Gieblyn’s excellent 2021 book, God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning, which is mostly about AI.
Hans Jonas, The Imperative of Responsibility: In Search of an Ethics for the Technological Age, pg. 10. Here is the entire paragraph, which nicely fits the theme of the present essay:
“For the boundary between “city” and “nature” has been obliterated: the city of men, once an enclave in the nonhuman world, spreads over the whole of terrestrial nature and usurps its place. The difference between the artificial and the natural has vanished, the natural is swallowed up in the sphere of the artificial, and at the same time the total artifact (the works of man that have become “the world” and as such envelop their makers) generates a “nature” of its own, that is, a necessity with which human freedom has to cope in an entirely new sense.”
The most notable recent case, of course, is Paul Kingsnorth’s new book, Against the Machine.
Two excellent essays here on Substack in recent years which look at modern digital technology in terms of the history of land enclosure are by Paul Kingsnorth (“A Monster that Grows in Deserts”) and L. M. Sacasas (“The Enclosure of the Human Psyche”).
Dougald Hine recently published an essay about the legacy of Ivan Illich, who employed the language of “Exodus”. “On the other shore of today’s Nile lies a still unexplored anthropogenetic desert that we are called to enter,” Illich said. “Understanding the characteristic features of new artifacts has become the necessary preface to this step: to dare chaste friendship, intransitive dying, and a contemplative life in a technogenic world.”
Do read Dougald’s essay, which has many resonances with the present essay, and especially with these concluding paragraphs.