Technology Is Already ‘Out of Control’
Yudkowsky’s Demon and the autonomy of technique
Welcome to Ever Not Quite—essays about technology and humanism.
This one is about the warnings which we now hear regularly that AI might soon exceed human control, to catastrophic effect. Looking to the twentieth century sociologist and philosopher of technology Jacques Ellul, I consider the possibility that technology is, in some important sense, already out of control.
As always, thank you for reading. If you’d like to support my work, consider subscribing and sharing this essay around within your circles. It helps more than you may know.
A few weeks ago, I finished a book with an extraordinary title. If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares has received a lot of attention this fall after its release in September, due not least to its title’s incendiary prediction. The “It”, as you’ve likely guessed, refers to superintelligent AI created “using anything remotely like current techniques”, and the “Everyone” refers to, well, everyone. The book is a darker and more succinct update to Nick Bostrom’s influential Superintelligence: Paths, Dangers, Strategies, which made waves upon its release in 2014 and was itself already a product of Yudkowsky’s influence.
It would be absurd to praise this book as we might praise just about any other volume pulled from the nonfiction shelf, commending it as, say, an interesting or illuminating read, an insightful analysis of an important issue of our time. This is not because the book isn’t all of those things, but because its purpose isn’t to inform, much less to entertain, so much as it is to alarm. Indeed, that title is not intended to be read as a provocative metaphor or as a dramatic bit of hyperbole wielded to drive sales, but is meant to be taken quite literally; its co-authors are shaking you by the shoulders and shouting “This is an emergency!”
And yet, the title provides a better description of the book’s intention than of its tone. Strange as it may sound for a book of such apocalyptic premonitions, it reads less like a work of frantic doomsaying than a patient and sober-minded argument in favor of the view that everyone on earth is likely to soon be killed, directly or indirectly, by a rogue machine superintelligence. Simply put, its co-authors believe that we will soon pass a point, if we haven’t done so already, at which an AI system begins to devise and carry out a plan in pursuit of some inscrutable goal of which we will likely never know, and which is incompatible with the continuation of human life on earth—and by the time we see what is happening, it will already be much too late to stop it.
Before I proceed, I should acknowledge that these worries will seem implausible to many of you reading this. Perhaps you have already dismissed the book and its co-authors as “useful idiots of AI doomsaying”, as the title of a piece in The Atlantic by the physicist-turned-journalist Adam Becker put it back in September.1 But let me take a moment to dispel some common misunderstandings before coming to the wider lens through which I want to look at these warnings. I should confess up front that I am a good bit more concerned about these kinds of dangers than many other people who nurture a healthy incredulity towards the ambitions of the modern tech industry.
Skeptics of the very notion of artificial intelligence sometimes assume that it cannot possibly pose these sorts of dangers without possessing an authentic consciousness—and because consciousness is not the sort of quality that can abide in mere silicon, these warnings are nothing but fantasies in the overactive techno-fundamentalist imagination. Extraordinary catastrophes, in other words, require extraordinary sentience. But this, I’m sorry to say, is untrue, and it reflects a critical misunderstanding of what Cassandras like Yudkowsky and Soares are actually claiming. In fact, you don’t need to believe much of anything about artificial intelligence or consciousness—and still less about what it might really want—to acknowledge the possibility of some pretty dire outcomes.
Yudkowsky and Soares aren’t trying to tell us what machine intelligence is, exactly, only what it could do. “Once AIs get sufficiently smart, they’ll start acting like they have preferences, like they want things”, they write in an early chapter. “We’re not saying that AIs will be filled with human-like passions; we’re saying they’ll behave like they want things. They’ll tenaciously steer the world toward their destinations, defeating any obstacles in their way.” Whether or not this behavior warrants a term like ‘wanting’ in the robust sense of the word, they maintain, “[t]hat’s between you and your dictionary”; or, perhaps, it’s between you and whatever theory of mind you happen to be working with. The point is that the co-authors are agnostic about philosophical questions of this kind, at least as far as their predictions are concerned.
But this essay isn’t meant to be a summary of the book or any of its specific arguments. There are plenty of misunderstandings floating around about what, exactly, Yudkowsky and Soares are saying, and I would suggest reading it before making up your mind.2
Critics of AI might proceed from one of two opposite perspectives. One highlights its failures—the hallucinations, the embarrassing lack of common sense (what Gary Marcus calls “boneheaded errors”), and all the other ways it falls short of delivering what its architects promise. The other points out the unforeseen dangers brought about by its very successes—how the unique capabilities of AI could go wrong in ways that a less sophisticated technology couldn’t. Even nuclear weapons don’t decide to launch themselves; if the world is to end in a nuclear inferno, human error is required somewhere along the way. But critics who believe in the efficacy of AI argue that the speed of its computations (which vastly outpaces human time-scales) and the complexity of its internal structure (which vastly surpasses human comprehension) introduce dangers of a novel kind. Despite their diverging forecasts for its ‘successful’ construction, then, AI-optimism and AI-doomerism are just the upward and downward factions of an attitude that takes the capacities of the technology most seriously—and it is precisely their belief in its power that has Yudkowsky and Soares standing athwart the AI industry, yelling Stop.
If I can be permitted the rhetorical flourish, I’ll call the artificial superintelligence that they predict will emerge at the end of this bleak trajectory ‘Yudkowsky’s Demon’: it is an entity of pure, machinic rationality, and the menacing visage of the current machine learning paradigm extended to its ultimate conclusion.3 Unlike its Laplacean namesake, Yudkowsky’s Demon is not expected to remain a thought experiment, an apparition of theoretical conjecture, for long; it is a real threat laid down in silicon, capable of doing actual harm to everything we might care about. And according to the co-authors, it is the predictable and inescapable culmination of the path down which we are quickly sliding, and its arrival will mark the moment that the fate of the world will have definitively broken free of human hands.

This brings me to the broader observation I want to make about the message of this book. When I consider these sorts of warnings about our impending loss of control over AI systems, I can’t help but think of Jacques Ellul.
The concept for which Ellul is perhaps best known is what he calls ‘technique’—borrowed directly from ‘la technique’ in his native French—which he develops at length in his 1954 classic The Technological Society. Technique is a broad abstraction which can be hard to pin down precisely, but Ellul’s own definition in the opening pages of the English translation, published a decade after the French original, is a good place to start: “technique”, Ellul explains, “is the totality of methods rationally arrived at and having absolute efficiency (for a given stage of development) in every field of human activity.”4 It is not an idea so much as an evolving set of technical procedures for achieving results with maximum efficiency. The influence of technique is not restricted to the design and operation of actual machines; it is a “collective sociological reality” giving rise to a technological system which pre-exists the choices available to individuals and which largely determines their outcomes.5 It answers the question how shall we proceed? everywhere in the same way: act in such a way as to maximize efficiency.
But this account is too academic. In more colloquial terms, technique is at the bottom of the most determinative features of modern life: it is the pressure, in every area of life, always to adopt the newer, more efficient method, even if you prefer the old way; it is needing to buy a new laptop because your old one can no longer accommodate the latest software update now required to continue performing the same functions;6 it is the fact that the young are rewarded for pursuing fields like STEM and finance and punished for their interest in the arts and humanities; it is the meritocratic achievement trap into which we throw our children, which assigns the credentials taken to be the primary condition for social respectability; it is an approach to psychology which seeks to maximize ‘mental health’, defined in terms of the optimal balance of neurotransmitters in the brain and best achieved through pharmacological interventions; it is the enclosure of the human capacity for contemplation by an attention economy which captures and extracts value from the activity of the nervous system; it is the insatiable appetite for automation, surveillance, and planned obsolescence; and it is the rhetoric of technological inevitability.
We pursue these things, Ellul maintains, not because they are good, but because the system within which we live and which has trained us how to think dictates that we must. Going against the current might be tolerated, but it won’t be celebrated—at least not by the culture at large—because modern technique must deny any meaningful freedom to choose otherwise.
In Ellul’s day, technique was the Space Race between Cold War rivals; today, it is the AI-race between the U.S. and China. It motivates with promises of economic growth, technological progress, and geopolitical influence, and it disciplines through the fear of being left behind. And for AI-doomers, whether or not they recognize the role of this broader sociological milieu, it is the system of methods and incentives which culminates in the creation of Yudkowsky’s Demon, the ultimate endpoint to which the triumph of technique is set to bring us.
Of course, technique as a method for accomplishing tasks is nothing new; efficiency has always had a place in human affairs. But modern technique exhibits several peculiar features which distinguish it from its earlier, pre-modern manifestations. “Herein lies the inversion we are witnessing”, Ellul observes. “Without exception in the course of history, technique belonged to a civilization and was merely a single element among a host of nontechnical activities. Today technique has taken over the whole of civilization.”7 This is Ellul’s chief historical contention—and recall that he was already articulating this conclusion in the early 1950s.
The feature of modern technique which is most relevant in the present context is what Ellul calls autonomy. Having liberated itself from its former subservience to culture and human judgment, technique proceeds without reference to anything outside itself. Its activity is guided by the furtherance of its own aims, unchecked by the authority of any other principle which might impose some limitation on pure technical ambition. “The madman”, G. K. Chesterton wrote, “is not the man who has lost his reason. The madman is the man who has lost everything except his reason.”8 For Ellul, the mad society is the society which has lost everything except its drive to technical efficiency; specifically, it has forsaken any other horizon against which these pursuits might be judged. This is the technological society.
This is all to say that, if Jacques Ellul had the opportunity to read If Anyone Builds It, Everyone Dies, I suspect he would be quick to point out that the sociological reality which pre-exists the development of AI and is responsible for building it in the first place is itself already out of our control. I’m borrowing this observation from my friend Bryan Heystee, who has argued as much here. “The prospect of a technological creation getting out-of-control is not a looming possibility which we should be careful to avoid or mitigate,” Bryan writes, “but a present reality which presses in upon us at this very moment.” Yudkowsky’s Demon is just the monstrous expression—and, if the warnings are to be believed, the dramatic culmination—of the indomitable technical phenomenon which has long enthralled the modern world.
Technique, then, is the means by which a dangerous artificial superintelligence might be created in the first place, but its autonomy is what makes us powerless to restrain it. AI is perhaps the clearest contemporary example we have of the relentless pursuit of technical efficiency, unhindered by common sense or sound social and political discernment. But if not for the autonomy of technique, preventing its most destructive consequences would be a straightforward matter; simply recognizing the pathological drift of its current development or the risks associated with artificial superintelligence would be sufficient for other sources of guidance to prevail. The fact that these periodic open letters signed by various influential people—notably, the “Pause” letter from 2023 as well as the Statement on Superintelligence from earlier this month—predictably lead to no change in the industry’s velocity suggests the degree to which the technical phenomenon has extricated itself from any recognizably human purpose. The existential danger of superintelligence and our collective inability to actually do anything about it thus share a common source: autonomous technique, which is both the means of its creation and also the technical and economic milieu which renders it practically impossible to change course.
Of course, Yudkowsky and Soares do not pretend to offer an account of the sociological forces responsible for leading us to this pass; their goal is to freak you out so you’ll demand that artificial superintelligence not be built by anyone, on the remote possibility that these forces could somehow be resisted. But if you want to understand what has brought us to the point where we risk so perilous an encounter with Yudkowsky’s Demon, you could do worse than to look to Ellul, who demonstrates how warnings of this kind pick up a much longer drama only as the curtain is coming down on the technological civilization which has staged the entire play.
From this point of view, the weakness of the book’s analysis has less to do with its arguments than with the narrowness of its perspective. The story which Yudkowsky and Soares are telling begins in a world where the development of superintelligence is already nearing completion. But if it ever does escape our control, that will only be because it is the offspring of a much larger system which has long since thrown off any reference to other sources of meaning or legitimacy. As valuable as a book like If Anyone Builds It, Everyone Dies can be in calling attention to the tangible risks associated with AI, it can offer no real insight into the broader sociological dilemma. If Ellul does not offer up to the imagination the vivid scenes of cinematic peril of which the AI-doomers are capable, this is because the technological system as a whole does not lend itself to these sorts of dramatic illustrations.
We need not go in unreservedly either for the scenario predicted by Yudkowsky and Soares or for the diagnosis offered by Ellul to recognize in the former an acceleration and intensification of the principle which the latter had identified decades earlier. Both of their books have been criticized as overly gloomy and deterministic, and let’s hope that future hindsight will vindicate that charge. But whatever we make of their specific predictions for the near future, it would be a mistake to overlook the role of the technological society which has rendered such a cataclysmic conclusion thinkable at all.
Adam Becker’s excellent book More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity, which came out last spring, is a sweeping rebuttal of tech-boosterism of all kinds, effectively taking the wind out of the sails of irrational exuberance and doomsaying alike.
Using this term will be more economical than repeatedly referring to “machine superintelligence as conceived by Yudkowsky and Soares.”
The appellation may seem a disservice to the book’s other co-author, Nate Soares. But it is widely acknowledged that Eliezer Yudkowsky is the original author of these predictions going back decades. Besides, the book also appears to be mostly Yudkowsky’s work: “He wrote 300 percent of the book,” his co-author, Mr. Soares, quipped. “I wrote another negative 200 percent.”
Jacques Ellul, The Technological Society, pg. xxv.
Ibid., pg. xxviii.
This happened to me last year, and the laptop on which I type was purchased for exactly this reason.
Jacques Ellul, The Technological Society, pg. 128.
G. K. Chesterton, Orthodoxy, ch. 2. Thanks to L. M. Sacasas for surfacing this line in a post on a similar topic from several years ago.


Your article made me think of the "slipperiness" of the behavior of LLMs, where one cannot ever exactly predict what they will do, or explain why they did it.
Here is an example: "We study subliminal learning, a surprising phenomenon where language models learn traits from model-generated data that is semantically unrelated to those traits. For example, a 'student' model learns to prefer owls when trained on sequences of numbers generated by a 'teacher' model that prefers owls."
https://alignment.anthropic.com/2025/subliminal-learning/
Thanks for sharing. I had heard of Ellul from Paul Kingsnorth but now I want to read Yudkowsky and Becker!