I’ve planned an essay which I think should be a good complement to this - thanks for writing it Patrick, much to chew over.
Thank you for reading, Nik. Looking forward to seeing what you come up with.
This is an astonishing essay. The point that resonated the most with me is the notion that whenever we interact with and produce some kind of output with an LLM, we are essentially drawing on the sum total of whatever data the model was trained on, which means human thoughts and human writing. It's no wonder the ultimate goal of these companies is to eventually get every single thing ever written by humans into them - scaling still seems to be the way some believe they will reach further breakthroughs, while others are recognizing that they are already starting to hit some walls. Regardless, most believe the technical obstacles to AGI will be overcome - how soon is the subject of much dispute - but the disconnect between what the public understands about AI and what Silicon Valley and the actual people building these models believe is widening more and more. I just started More Everything Forever by Adam Becker, and I think he captures the utterly alien worldview of so many people in tech that most are unaware of. I am going to hear Ray Kurzweil (The Singularity Is Nearer) speak on Thursday, and he is a perfect example of the messianic vision of AI held by some very, very smart people whose predictions can seem downright loony to the rest of us. I enjoyed your piece, but I tend to see AI in much narrower terms at the moment. Its ability to manipulate and rearrange text in meaningful and astonishing ways still shocks me on a daily basis, but I'm not sure chatbots are the way we will ultimately interact with it in the long run.
Thank you for reading, Stephen! I really enjoyed Adam Becker's book. I found it to be quite effective in throwing some much-needed cold water over the more outlandish aspirations of the techno-optimists—Ray Kurzweil perhaps chief among them. (Apologies for the slow reply. I've been on the road traveling for the last several days.)
Those passages from Hobbes and de Tocqueville are weirdly resonant! I also kept thinking of E. M. Forster's The Machine Stops.
Great connection! It's been a long time since I read The Machine Stops, and it didn't really occur to me while I was writing this. I should return to it now in light of this essay.
I pulled Leviathan off the shelf a few months ago, with some dim feeling that I should think about the artificial soul again, now. But I haven’t opened it, and this is more carefully woven than whatever I would have knitted together! It’s also chilling enough to remind me why it is that political theology always makes me go looking for something mystical. I suppose that’s where I go, to look for the escape hatch.
Understandably so. I'm tempted that way myself. I felt like I needed to stay in the social/political dimension towards the end here. I didn't wind up using it, but that Hölderlin line (which Heidegger loves to quote) kept coming to mind: "Where the danger is, grows the saving power also;" problems created by systems of social collaboration may, perhaps, be addressed by social collaboration of a different sort. Not an escape hatch, but a treatment which stays close to the trouble.
Love this
Thanks for reading!
Would love your thoughts on things I've written?
Hi Patrick,
This was a wonderful essay and I thoroughly enjoyed reading it. Thinking about AI in the context of Hobbes and the state is a provocative idea and one which was unexpected for me. As with most unexpected ideas, I found myself initially skeptical and resistant, and then increasingly persuaded by your argument. I especially like the idea of an emergent superagent which is neither creature nor artifice -- this seems like a very useful lens for understanding much of our social organization. Nevertheless, there are still a few parts of your essay that make me uneasy. I am not confident about any of these questions/objections, and I suspect you would have good answers in hand. I also suspect I am assuming you are making stronger claims than you really are, and that I am ignoring important qualifications and details of your essay. Still, I raise these thoughts anyway for your consideration and my enlightenment.
1. First, I wonder about the very idea of framing AI as part of a continuous development that started with the Hobbesian Leviathan. This development, obviously, would be historical, i.e. a change that is taking place in time so that the future is different from the past. From the perspective of the 21st century and AI, this seems plausible enough, but I think Hobbes would find such an idea unthinkable or at least implausible. Hobbes, in a certain sense, is not an historical thinker: the state of nature which is the basis of his political philosophy fundamentally does not change, nor does human nature in particular change. That is, the basis of politics is in a certain sense eternal. I suspect the significance of this is that the nature of the state too fundamentally does not change. I wonder if Hobbes could have imagined that states would look significantly different several hundred years into the future from the way they did in his day. That is all to say, I am not sure Hobbes would agree that his state could develop into something like an algorithm because, in principle, it couldn't develop into anything.
Of course, I am implicitly assuming that Hobbes is the ultimate authority on the significance of his own thought, an assumption which many smart people (e.g. Kant, Hegel) would disagree with. But I think it is worth acknowledging that your interpretation of Hobbes does not itself agree with Hobbes' own thought.
2. I find it intuitively implausible that your list of superagents includes only "modern states, corporations, and algorithms." Was it your intention to suggest that there are only three (or only three principal) kinds of superagents? It seems strange to me that two kinds of superagents (the state, corporations) would exist in Hobbes' day, and then the only other kind would not come into existence for another 300 years. Surely there was something in between besides the development and maturation of states and corporations?
3. You remarked that "Leviathan is a peculiarly creature-like machine made by human beings in order to protect organic life--and in order to conceive of it, we had first to reinterpret life itself in machine-like terms." i.e. the very idea of the modern state has renovated the way human beings understand themselves and what they are. Do you think algorithms and AI have had (or will have) an analogous effect? Do you see the rise of AI as significantly changing the way human beings understand what they themselves are?
4. Although there are certainly common features between states/corporations and algorithms/AI, I am not yet convinced they are similar enough to group them together or to frame them as part of a continuous development. The crux of my reluctance is the idea of consent. The Hobbesian state is founded with the consent of its members, who recognize that living within the power of the state, however imperfect, is preferable to the state of nature. The State continues to be legitimate because, at some level, its members continue to consent to its power: they continue to recognize that the State remains in their overall interests (i.e. not dying a brutal death). Corporations exist in a similar way. Although a corporation may have goals alien to those of any of its members (employees, shareholders, etc.), each of its members recognizes that participation in the corporation remains in their overall interests and thereby consents to the corporation's activities. In effect, as you and Hobbes point out, states and corporations are both commonwealths: artificial persons which contain natural persons and depend on them for their activities.
AI, as you point out, contains only "mere fragments" of natural persons. It does not contain human beings or depend on them for its continued activities. (Obviously some computer programmers are still necessary, but the AI which uses my "fragments" no longer depends on my active membership or activity.) It therefore seems a stretch to call AI a commonwealth like states or corporations. (I know you called it a superagent, not a commonwealth, but I want to be clear that I think commonwealth would be an inappropriate word.) It also doesn't feel like I have "consented" to AI in the same way that I have consented to the state. AI came into existence without my knowledge, and it took up my "fragments" while I was unaware. I was forced to become a part of various AIs and have no option of leaving them. Even if I stopped using them and did my level best to stop providing them training data, perhaps by going off-grid and becoming a hermit, I would still be participating in AI through my "fragments," and those fragments cannot meaningfully be removed from an AI model.
It seems to me, then, that AI is categorically different from states and corporations. Even if it too is a superagent, it is a superagent of a significantly different kind. It does not contain anyone, and to the extent that it contains us metaphorically through our fragments, we can't consent to being a part of it.
Loved reading this. Haven’t seen Masters… but think you are harsh on Pacific (except for the R&R episode). What I liked about Pacific was following characters I could get to know - contrary to your experience, I found BoB too “bitty” and never really got to know anyone well enough to care (apart from Winters, obviously!).
Thank you for reading! You're right - the way we define technology is perhaps the most fundamental place to start, but any definition inevitably courts controversy. The definition has always been problematic because of the way technology, which is inherently 'artificial' in some sense, folds itself back into human nature; what you wind up with is a strange mixture of nature and artifice which can't really be pulled apart.
I see what you mean - there's definitely something important in the transformation of nature into technology by way of a process of extraction. This characterizes the way the modern world has gone about pursuing technological development, but I don't think it's the only way.
It can be a bit technical, but you might look at The Nature of Technology: What It Is and How It Evolves by W. Brian Arthur. It goes quite deeply into a theory and definition of technology.