Great essay!
Thank you for reading, Vincent!
Excellent essay, thank you. Some things spring to mind (which your essay covers) in the debate concerning technology in the workplace.
(1) People were starved off the land in order to get them into the factories, where they lost a great deal of agency, self-determination, and community.
(2) The capitalist system is one in which the people who work in a business don't own it, and those who own it (mostly) don't work in it, thereby setting up an eternal conflict of interest between management (representing the owners) and workers.
(3) The real issue is meaningful lives, of which meaningful 'work' (how you spend your day) is a significant part. The idea of 'making a living' is a fairly novel (roughly 200-year-old) concept, and a rather distasteful one given the fullness and richness of what humans are.
(4) I'm all for ending boring, soul-destroying jobs; robotics has been doing this for 30+ years. But there needs to be a new economic model so that people who have been 'set free' from the 9-to-5 can live well and develop their 'higher interests'. Isn't that always the promise of the latest technology? Yet I can't see this happening: in the current model, business would consider it an "added cost", and an unnecessary one at that. Given how costs are off-loaded onto the environment, off-loading people onto the scrap-heap can be well expected.
Techno-optimists are hyping the same old line. Remember in the 1950s how nuclear energy was going to provide "electricity so cheap it won't be worth billing you for it"?
Thank you for reading, Joshua, and well put. Yes, I think it’s important that we remember how earlier predictions have aged (i.e., not well!).
As an aside, I was amused by you querying google whether "we" should anthropomorphize AI. It is indeed curious who the "we" is in the answer.
I might note, however, that the very acts of "asking" (and "answering") seem themselves to be anthropomorphic. We "ask" other persons, so to "ask" google anything is an implicit (if partial) anthropomorphization. Consider that we would never "ask" an encyclopedia anything. The tendency to anthropomorphize computers and software of all kinds is very difficult to resist.
Patrick,
Thank you for this timely account of AI in the workplace. It is quite striking how explicit the managerial class is about their intentions with AI -- how it is inevitable, how it is employees that must adapt, etc. etc.
Throughout your article, I could not help but think of Jacques Ellul's analysis of technique. It seems to me that Ellul's account of technique is particularly helpful in this coming age of AI and (allegedly) hyper-automation. For Ellul, the chief determining characteristic of modern society is the drive to absolute efficiency. The drive is "absolute" because there is no end beyond efficiency for the sake of which we pursue efficiency. This drive--"technique"--organizes and subordinates all other features of modern society. In his book, The Technological Society, Ellul argues that technique has become "autonomous" insofar as it has made itself the chief determining criterion according to which things are evaluated. In general, we do not ask whether technique is good, but whether a given activity is sufficiently technical (i.e. efficient). If an activity is not technical, it is rejected or branded as immoral.
It seems to me that the current commitment to AI, exemplified so well by the Harvard Business Review and its contributors, can be understood within the broader context of technique. AI is yet another manifestation of the overall commitment to efficiency. AI is inevitable because it promises to be more efficient. Employees must adapt themselves to AI because to do otherwise would be immoral in the overall context of technique.
Another way of putting Ellul's argument is that humans don't use technique, but rather technique uses humans. Humans do not freely choose their ends and then put various techniques to work to achieve those ends. Rather, technique molds and shapes human beings to fit the overall drive to efficiency. Whatever "ends" we think we normally have, these are most often given to us or trained into us by technique. Humans may be ill-suited to life on the factory floor, but the factory floor is efficient; therefore technique adjusts, molds, and trains humans to tolerate the factory, etc. The Harvard Business Review would seem to be unwittingly rehashing this point in arguing that employees must be adapted to the new fact of AI. AI doesn't serve human ends; humans serve AI's ends.
You touched on a number of these points in your essay, in some cases saying exactly the same thing in slightly different words. I only rehash them to suggest that you may find reading Ellul a fruitful experience. He gives a realistic and practical account of technological determinism, but without recourse to the esotericism in Heidegger that so many dislike. (In fact, I think Ellul refused to read anything of Heidegger at all, as a matter of principle.)
One implication of Ellul's argument, however, is that efficiency (and/or AI) will not give way to leisure once it reaches a sufficiently advanced stage. The point of technique is not to produce leisure but efficiency. Technique continually produces new problems that need to be solved, which invites more technique, which produces more problems, and so on. There will always be something to do, so we will never be free to turn to leisure (leisure broadly understood to include the jobless work you discuss at the end). What is more, because technique shapes what we take to be our ends, it becomes increasingly difficult to imagine what we might do with our leisure if we were ever freed from the drive to efficiency. One imagines there would be a profound loss of purpose without that drive, and leisure would be intolerable.
As a closing aside, I would also invite you to take a look at an article I wrote about Jacques Ellul, technique and AI. The force of it is that we shouldn't be so worried about AI getting "out of control," because technique as a whole is already "out of control." It might be a helpful (and shorter) introduction to technique vs. AI instead of reading Ellul's Technological Society.
https://www.iih-hermeneutics.org/_files/ugd/f67e0f_dbb5892899f341d19563ab40ec4f8d28.pdf