Discussion about this post

Bryan:

As an aside, I was amused by you querying Google whether "we" should anthropomorphize AI. It is indeed curious who the "we" is in the answer.

I might note, however, that the very act of "asking" (and "answering") seems itself to be anthropomorphic. We "ask" other persons, so to "ask" Google anything is an implicit (if partial) anthropomorphization. Consider that we would never "ask" an encyclopedia anything. The tendency to anthropomorphize computers/software of all kinds is very difficult to resist.

Bryan:

Patrick,

Thank you for this timely account of AI in the workplace. It is quite striking how explicit the managerial class is about its intentions with AI -- how it is inevitable, how it is employees who must adapt, etc. etc.

Throughout your article, I could not help but think of Jacques Ellul's analysis of technique. It seems to me that Ellul's account of technique is particularly helpful in this coming age of AI and (allegedly) hyper-automation. For Ellul, the chief determining characteristic of modern society is the drive to absolute efficiency. The drive is "absolute" because there is no end beyond efficiency for the sake of which we pursue efficiency. This drive--"technique"--organizes and subordinates all other features of modern society. In his book, The Technological Society, Ellul argues that technique has become "autonomous" insofar as it has made itself the chief determining criterion according to which things are evaluated. In general, we do not ask whether technique is good, but whether a given activity is sufficiently technical (i.e. efficient). If an activity is not technical, it is rejected or branded as immoral.

It seems to me that the current commitment to AI, exemplified so well by the Harvard Business Review and its contributors, can be understood within the broader context of technique. AI is yet another manifestation of the overall commitment to efficiency. AI is inevitable because it promises to be more efficient. Employees must adapt themselves to AI because to do otherwise would be immoral in the overall context of technique.

Another way of putting Ellul's argument is that humans don't use technique, but rather technique uses humans. Humans do not freely choose their ends and then put various techniques to work to achieve those ends. Rather, technique molds and shapes human beings to fit the overall drive to efficiency. Whatever "ends" we think we have, these are most often given to us or trained into us by technique. Humans may be ill-suited to life on the factory floor, but the factory floor is efficient; therefore technique adjusts, molds, and trains humans to tolerate the factory, and so on. The Harvard Business Review would seem to be unwittingly rehashing this point in arguing that employees must be adapted to the new fact of AI. AI doesn't serve human ends; humans serve AI's ends.

You touched on a number of these points in your essay, in some cases saying exactly the same thing in slightly different words. I only rehash them to illustrate that you may find reading Ellul a fruitful experience. He gives a realistic and practical account of technological determinism, but without recourse to the esotericism in Heidegger that so many dislike. (In fact, I think Ellul refused to read anything of Heidegger at all, as a matter of principle.)

One implication of Ellul's argument, however, is that efficiency (and/or AI) will not give way to leisure once it reaches a sufficiently advanced stage. The point of technique is not to produce leisure but efficiency. Technique continually produces new problems that need to be solved, which invites more technique, which produces more problems, and so on. There will always be something to do, so we will never be free to turn to leisure (leisure broadly understood to include the jobless work you discuss at the end). What is more, because technique shapes what we take to be our ends, it becomes increasingly difficult to imagine what we might do with our leisure if we were ever freed from the drive to efficiency. One imagines there would be a profound loss of purpose without that drive, and leisure would be intolerable.

As a closing aside, I would also invite you to take a look at an article I wrote about Jacques Ellul, technique, and AI. The force of it is that we shouldn't be so worried about AI getting "out of control," because technique as a whole is already "out of control." It might be a helpful (and shorter) introduction to technique and AI, as an alternative to reading Ellul's Technological Society.

https://www.iih-hermeneutics.org/_files/ugd/f67e0f_dbb5892899f341d19563ab40ec4f8d28.pdf
