Your article made me think of the "slipperiness" of the behavior of LLMs, where one can never exactly predict what they will do, or explain why they did it.
Here is an example: "We study subliminal learning, a surprising phenomenon where language models learn traits from model-generated data that is semantically unrelated to those traits. For example, a 'student' model learns to prefer owls when trained on sequences of numbers generated by a 'teacher' model that prefers owls."
https://alignment.anthropic.com/2025/subliminal-learning/
Exactly - this is at the root of the Yudkowsky/Soares argument: AI systems are 'grown,' not designed, meaning that they will have qualities and will do things that their makers didn't intend and don't know about. The claim is that this, in combination with sufficient computational power, will reveal catastrophic misalignment which had remained hidden in every prior phase of the development process.
Thanks for sharing. I had heard of Ellul from Paul Kingsnorth but now I want to read Yudkowsky and Becker!
There's a lot of writing out there about technology (and especially AI) these days. You'll find that all of these authors are coming from very different places and often have quite diverging views. Enjoy!
Thanks
Interesting contextual read, but it's important to note the two clear reasons that the Yud thesis is fundamentally flawed: 1) there is no reason to believe that an AI would be self-improving, and 2) the kind of nanotechnology their ideas hinge on is pure science fiction. For deeper explanations of these, see "More Everything Forever."
Yes, fair enough. It's mainly the contextual framing I'm after here, rather than trying to defend every feature of the Yudkowsky and Soares argument.
I thought Adam Becker's book was very good (I've even mentioned it in a footnote here). I'm going to go back and re-read his chapter covering this stuff now that I've spent some time with If Anyone Builds It.
Immense credit for the balance. I'll admit I'm so hostile to the broader and deeper ideological context of Rationalism that I am waiting for a moment when reading "If Anyone" won't ruin my whole week.
Thank you for this: you’ve provided the best explanation and example of Ellul’s “technique” that I’ve seen thus far, as well as a very insightful way of looking at the AI issue within this broader context.
Glad you found it helpful, Linda!
Wow! I keep thinking that old books are filled with so much more wisdom. They anticipate what is happening now through the lens of what was happening before. One of my favorites is The Phenomenon of Man by Teilhard de Chardin. He was a French Catholic priest who tried to give religion an evolutionary angle. He talked about an Omega Point, the highest level of human evolution.
The 20th century is filled with all of the elements for us to think about how humans were changing. The AI chapter is just the most recent in a very long book.
Great Essay!
Thanks for reading, Arturo! The premise behind much of what I do is just as you said: old books offer so much of what we need to think about what is happening today.
I read The Phenomenon of Man a long time ago, and should probably pick it up again. One more recent book that deals with some of these ideas, and which I really enjoyed, is God, Human, Animal, Machine by Meghan O'Gieblyn (2021). Teilhard de Chardin makes a few appearances. Highly recommended!
I'm appreciative of this article; I didn't know about this idea of Technique. At first I was confused by the use of the word "technique," because etymologically it means "skill, craft, art," and in contemporary use we use it more like skill -- an artist, orator, or musician can indeed have great technique. After a bit of research (I have *not* read the book), I learned the English version wasn't capitalized properly: when Ellul spoke of technique as the totalizing system oriented toward maximum efficiency, he actually capitalized it.
In that way, Technique was distinguished from technique, which seems very important to me, as technique, both etymologically and contemporarily, is kind of the antidote to Technique. Valuing the *human* craft, skill, and art of a process (which is far more than the sum of what is learned rationally) is vital to orienting toward beauty (not to say there isn't something efficient about great technique or beautiful about efficiency). It seems important to distinguish between *kinds* of technology as well, since a stretched canvas and paintbrush are technologies, toilets are a technology... Having not read the book, is there some point at which Ellul delineates modern Technique from the technique of the past?
Thank you for reading, Mari. You're right, the terminology here can be confusing. I had been tempted either to capitalize or italicize the term 'technique' as used in Ellul's distinctive sense. I decided against it because the English translation doesn't do this, nor do most commentators. Perhaps that's not the best way to proceed, and it invites confusion.
Ellul spends some time early in the book discussing pre-modern technique (Ch. 1: Techniques: Historical Development) before moving to modern technique (Ch. 2: Characteristics of Modern Technique). Ellul thinks that modern technique has several distinct features, but perhaps the most important is that, while pre-modern technique existed as one of many modes of human activity within a given culture, modern technique has taken over the whole of culture, and has become the sole determinant of what should be done.
Thanks, I'm sparked to dig in more. I'm curious when pre-modern shifted to modern. Much appreciated. Capital-T Technique is indeed an interesting thing to ponder.
Thought-provoking piece, and I take your point about the danger not being that "AI" might become self-aware (it definitely won't). I also share your enthusiasm for the work of Ellul (and alongside him, Merchant, Illich, Duden, and Virilio). But I'd point out that this supercomputing would only present a generalized physical threat if machines were given the capacity to, say, enter the nuclear launch codes. Apart from that, it is constrained by plain material limits in its production and application. The more immediate threat--in the material sense--is economic and ecological, where in each case it may "break the bank." It certainly isn't living up to its hype beyond, e.g., China's employment of it for manufacturing (which other countries can't match).
https://stanleyabner1951gmailcom.substack.com/p/the-real-danger-of-ai
Thanks for reading!
The objection that these systems are limited to immaterial cyberspace is a common one. This is, to my mind, the most difficult feature of the book’s argument to accept: if AI is just a bunch of circuitry, how could it possibly do anything that would directly affect people in the real world—let alone anything so devastating? Yudkowsky and Soares don’t think this would be as difficult as it sounds, considering the way everything is—or potentially is—connected by the internet. I won’t rehearse their answer here because anything I write might be distorted or misleading. I’m not really competent to articulate and defend the specifics of their explanation, but this video, which runs through one possible scenario offered in the book, does touch on this question:
https://www.youtube.com/watch?v=D8RtMHuFsUw
I’m with you on the ecological and economic risks here, too. I’d rather this not be the case, but it seems to me that all of these potential dangers of AI are all too real.
Right, but computers are used to help make the determination of whether to use nuclear weapons. So even if a human being makes the ultimate decision using some totally discrete electronic architecture, the influence of the machine is still there. Apparently it is already common for human decision makers to default to the calculations of a computer rather than trust their own judgment. Imagine how much more difficult it would be to trust your own judgment in a situation in which mass death is on the line and a superintelligent machine is telling you to pull the trigger.
You’re absolutely right. The whole concept of "humans in the loop" assumes that the human won't themselves be reduced to yet another mechanism. It's hard to imagine that not happening, though, when the rest of the system is understood to be a 'superintelligence,' which exerts a great deal of pressure on the human to agree with its decisions.
Commented first thing in the morning. Later thought: wait, in what decade is the internet discrete from our physical machinery, from critical infrastructure even? Like, is Stan's comment from 1995? Yeah man, an alien electronic hyper-intelligence could make literal mincemeat out of us in the current setup. Have yet to hear a cogent argument for the other position. "It definitely won't" ain't it.
Excellent piece, thank you