Welcome to Ever Not Quite—essays about technology and humanism. This post is my attempt to take stock of the effective altruism movement (EA), which has made major inroads into the popular mind over the past several years, and with which most readers are likely already familiar. The twentieth-century Buddhist philosopher Keiji Nishitani once wrote that a consequence of the modern scientific interpretation of nature solely as matter and extension was that “[t]o the self-centered ego of man, the world came to look like so much raw material.” Within the domain of ethics, we could imagine that this perspective might express itself in a totally amoral—or perhaps even openly immoral—form: when all former horizons of moral evaluation cease to make a claim upon our reasoning, we might conclude, no good remains but that of the selfish individual. But something less straightforward—and, to me, much more interesting—has occurred in the case of effective altruism (and this is closer to what Nishitani was getting at): Rather than defending a kind of moral egoism, the movement draws deeply from what the Canadian philosopher Charles Taylor has called a “rich moral background” which lies behind many modern deployments of instrumental reason, and, far from rejecting ethics as a mere hangover from a more naïve epoch, EA understands itself as a supremely selfless and humane initiative that is possible precisely because human-centered values have finally been able to claim their rightful centrality within the field of ethics.
What first drew my interest in effective altruism was what seemed to be its unique influence on the modern tech industry, particularly in Silicon Valley. I wondered about how ethical questions came to be understood so comfortably in the technical and scientific terms in which both EA and Silicon Valley see the world, and to what extent the movement’s close relationship with modern technology was a consequence of their shared intellectual sources. In this essay, I try to provide readers with a sense of the landscape of the effective altruism movement: these specifics are important because a major part of my argument is that the actual words and actions of modern-day effective altruists are remarkably clear expressions of the theoretical sources which the movement shares with the general orientation of modern technology. In both cases, the common theme is the conviction that the world can and must be set right by means of human control.
As I plan to continue to do in future installments, I have included an audio version of the essay to give weary eyes a break from the screen. If you find it interesting, please share this essay with others who might also like to read it; this really helps me to reach new readers.
“This is the bitterest pain for human beings: to know much and control nothing.” Herodotus, Histories, 9.16
I. The Philosopher With Silicon Valley’s Ear
In the fall of 2022, I started to notice articles and podcast episodes beginning to appear with titles like “The Philosopher with Silicon Valley’s Ear”. Their subject, the philosopher and moral entrepreneur William MacAskill, was on a promotional tour for his newest book, What We Owe the Future, which had been released in August of that year. What was unusual about the angle of much of the book’s coverage was that it focused at least as much on those whose decisions it was influencing as it did on the content of MacAskill’s ideas: it turned out that the Scottish MacAskill had found an unlikely following among a small group of tech elite and Silicon Valley executives, and, in them, some of the wealthiest and most powerful people on the planet. Stories in the media were often quick to note that none other than Elon Musk himself had, with characteristic self-regard, tweeted shortly after the book’s release that What We Owe the Future was both “[w]orth reading” and “a close match for my philosophy”.
Much of the coverage seemed to suggest that what made the book compelling stemmed less from the persuasiveness of its arguments or from its forceful case for the inclusion of future people in our moral reckoning than from the mere fact that Musk—then the richest person in the world—accepted them. Beneath Musk’s characterological foibles, his sympathy for ideas like those defended by MacAskill has been said to represent an unseen order that is often concealed by his reckless and inscrutable antics: in Musk’s view, the damage done and the risks tolerated today will be redeemed once they have become contextualized within the larger, long-term good that technological progress can ensure for people living in the future.1 In 2018, after a teenager was killed in a 116-mile-per-hour collision in his Tesla Model S, Musk resisted the father’s request for newer models to include safety devices that would limit their maximum speed, on the grounds that too many options would make the vehicles’ operation overly complicated for many users. “I want to make sure that we get this right”, Musk wrote to the young man’s father, James Riley, in an email. “Most good for most number of people.”2 Of his purchase of Twitter in 2022, Musk declared more publicly the laudable intentions behind all the apparent chaos that had come to plague the negotiations, writing in an open letter posted to the platform that “I didn’t do it for the money; I did it for humanity, whom I love.” The implication of MacAskill’s media blitz and Musk’s endorsement of his ideas was clear: If you wanted some insight into the worldview guiding Musk’s many decisions that were exerting an outsized impact on the world—to say nothing of MacAskill’s untold other followers in Silicon Valley—you could do worse than to read What We Owe the Future.
MacAskill’s work, which is in many ways representative of the writings of other philosophers who share his outlook, is among the clearest and most audacious statements of the modern ethic of control I have ever read. What MacAskill’s ideas have in common with the technological ethos whose expression is uniquely centralized in Silicon Valley is that both reflect a deeply modern and historically recent confidence in the ability—perhaps even the obligation—of human endeavor to reshape the world for the better. As the German sociologist and social theorist Hartmut Rosa has written, although the entire history of technology may be interpreted as an attempt to increase “the radius of what is visible, accessible, and attainable to us”, this campaign has in recent decades become newly radicalized.3 Borrowing from the centerpiece of Kantian ethics, Rosa proposes that “the categorical imperative of late-modernity” is intoned in the words: “Always act in such a way that your share of the world is increased”.4 This imperative, according to Rosa, has become the dominant principle guiding not just our own personal lives, in which we have become acquisitive of things and experiences to an unprecedented degree, but also the standard that drives technological change on a global scale, and which motivates the actions of Silicon Valley executives and the numberless tech workers who strive to realize their goals. I’ll come back to Rosa’s argument later, but my point for now is that the imperative to extend the sphere of our reach and control has found expression not only in the technological domain, where it has perhaps always been at play, but also in modern moral philosophy.
II. The Expanding Circle
It is unusual for a collection of ideas spawned in departments of philosophy and on brainy internet forums to burst onto the stage of popular consciousness to the degree that effective altruism—the movement within which MacAskill has been a central figure—has in the last decade or so. Even more unlikely is that so many of its stalwarts should find themselves managing the disbursement of funds valued at around 30 billion dollars as of 2022.5 Effective altruism, or EA, has the superlative aim of marrying reason and compassion, directing action to do the most possible good by means of the best available evidence and the soundest possible reasoning. The online community coalesced in Oxford in 2012 when MacAskill, along with the Australian philosopher Toby Ord, founded the Centre for Effective Altruism. The group’s official handbook, a document crowdsourced from among the movement’s many online contributors, warns of the inadequate and partial judgments that result from “moral intuitions which are highly influenced by non-rational mechanisms and emotions”. In short, by insisting upon hard evidence and dispassionate calculations in our decision-making, its methods observe the admonition of Captain Vere in Herman Melville’s novella Billy Budd, a meditation of sorts about the contradictions that accompany the humane meting-out of justice: “Let not warm hearts betray heads that should be cool”.
The painstakingly calculated beneficence of EA’s moral strategy is what sets it apart from other progressive programs for the amelioration of human welfare. In recent years, its guiding light has led to the performance of countless almost superhuman feats of selflessness. As they have appeared in the media, these stories range in tone and focus from the deeply personal to the more objective posture of philosophical argumentation amply fortified with data.6 Many publications promulgated by the Centre for Effective Altruism prominently feature charts and graphs that provide visual testament to the soundness and urgency of its inducements. In 2017, the Vox journalist and EA champion Dylan Matthews wrote a moving article about his experience donating one of his kidneys to a complete stranger, an act motivated—and, according to Matthews’ views, necessitated—by the inexorable logic of effective altruism. Whether these efforts are narrated by a conscience-driven first-person or arrayed coolly on actuarial spreadsheets, the intention is always the same: to use the world’s limited resources to do the most good for the most people.
But just which course of action, among the countless branching possibilities in the would-be altruist’s decision-tree, can be expected to yield the greatest good? This question sets into motion a cascade of sub-questions which need to be answered first: on account of what quality or capacity do beings possess moral standing in the first place? How do we weigh their competing interests? How would we even define and measure wellbeing? How do we best account for our uncertainty about what will follow from any given action? And what are the limits, if any, to the time-horizon within which we should make our calculations? These meta-ethical questions, which would not be out of place in any introductory college ethics course, are subject to careful consideration and fierce online debate within the effective altruism community and have also received extensive academic treatment by effective altruists, most notably at Oxford University, MacAskill’s home institution. The fact that the answers given to these questions tend to speak mathematically, in terms of quantity, formula, and computation—often turning on assigned probabilities, or “P-values”, for uncertain events—reveals a cast of mind that is comfortable with a quantitative interpretation of reality and with the abstract and non-narrative language of the computer code that rules the tech world within which EA has been so influential.
Effective altruism—the name was solidified in characteristically deliberative fashion by a vote taken by the online community in 2011—had its beginnings in the internet forums of the early 2000s on sites like the rationalist community forum LessWrong. But its philosophical roots reach further back to the work of Peter Singer and other apostles of the utilitarian tradition of the last half-century. Utilitarianism: a form of ethical consequentialism that holds that an action or policy is good to the extent that its consequences maximize, or can be expected to maximize, a given utility—typically defined as some form of happiness or wellbeing, be it human, animal, or ecological. Along with his 1975 book Animal Liberation, which has by now motivated and shaped the animal welfare movement for generations, Singer is perhaps best known to the popular imagination for conjuring a particular scene of moral urgency, first in his 1972 article “Famine, Affluence, and Morality”, as well as in many writings and lectures since: You would not hesitate, Singer expects, to save a child drowning in a pond right in front of you—even at the cost of ruining the expensive new suit you happen to be wearing. Singer argues that the situation for the citizens of rich countries relative to the global poor, despite the greater distances involved, is identical in all morally relevant respects; there is, therefore, no moral justification for treating those in need who happen to be nearby with greater favor than those who are far away, nor for treating those who are distant as worthy of any less moral regard.
Many within the EA community, including MacAskill himself, have rejected the idea that effective altruism is nothing more than applied utilitarianism—the word barely appears in What We Owe the Future—emphasizing the influence of other moral outlooks on the movement. Rather, they are quick to point out that EA often assumes a more compromising stance in circumstances in which the demands of an unalloyed utilitarianism would involve objectionable actions. Writing with fellow EA Benjamin Todd, MacAskill rules out, for example, “[c]ommitting fraud or stealing money” in order to donate it to charity (effectively distancing themselves from the chicanery of Sam Bankman-Fried, the self-described EA and cryptocurrency entrepreneur whose unscrupulous financial dealings tarnished the movement’s public reputation after his arrest on fraud charges in December of 2022). These disavowals rest on the fact that many members of the broader EA community also recognize goods like respect for individual rights and more general constraints against doing harm, in addition to the sheer sum of wellbeing that a given action produces. But while its sharpest edges have been smoothed by the humanizing influence of moral intuition, it must be said that EA’s approach to ethics remains overwhelmingly calculative and consequentialist.
Singer’s other famous image, and one which lent a title to another of his influential books, is what he calls the “expanding circle”, a figure which captures the centuries-long advance of a more universal and inclusive moral domain. The increasing capacity of the circle, its circumference marking the boundary between those who are extended moral consideration and those who are not, has grown over the course of history as societies have begun to recognize the moral standing of groups they had previously ignored, and Singer argues that this process should be extended to the point that the circle embraces most animals.7 What is crucial in this development, apart from the obvious generosity of its moral scope, is the way the moral landscape is increasingly conceived from a disembedded and universal standpoint which tends to de-emphasize moral actors and their distinctive perspective on the choices they face, foregrounding instead a more homogeneous and universal definition of moral value.
III. We Can Make Their Lives Go Better
This past heady decade within the EA movement saw the dramatic rise of numerous organizations focusing on optimizing charitable giving and career selection—among them GiveWell, Open Philanthropy, Giving What We Can, and 80,000 Hours—which were first founded and subsequently staffed by EA’s often still-youthful movement elders. During that time, and sporting the credibility of an Oxford philosophy professor, MacAskill, still only in his mid-30s, emerged among the movement’s logic-chopping think-fluencers as perhaps its most influential and articulate voice. What We Owe the Future is penned in simple prose that rejects the rarefied jargon of off-puttingly abstruse theory, conveying its gospel in an idiom that seems calculated to achieve most effectively its purpose of communicating to a popular audience. Speaking to the young with an eye toward ensuring the greatest impact on the future, MacAskill’s practically-oriented closing chapter offers career advice that employs the same moral triage that EA itself applies to the world’s moral questions.
From the existential risks that most threaten the world’s enduring wellbeing—which, because we are still here to fret over them, endanger us by definition from the direction of the future—a new moral dimension, and a new word, is born out of effective altruism: longtermism. “[I]n the long run”, John Maynard Keynes once famously quipped, “we are all dead”. But for MacAskill and the votaries of longtermism, Peter Singer’s expanding moral circle discovers a temporal element, extending ethical consideration to what MacAskill argues is the most morally overlooked—and also the largest—group of all: the denizens of future societies, whose numbers could be vast and who, not yet alive to lobby for their interests, have no voice in the deliberations of today that will have outsized impacts on the quality and conditions of their lives. What We Owe the Future is a comprehensive statement—too detailed and restrained to be called a manifesto—of the aims and methods of the philosophy of longtermism. More methodical than Toby Ord’s The Precipice (2020), a book which also considers ethical questions from the standpoint of humanity’s long-term prospects, What We Owe the Future captures its central argument in the opening lines of the book’s first chapter, which articulate the quintessential longtermist credo: “Future people count. There could be a lot of them. We can make their lives go better.”8
Implicit in MacAskill’s appeal to consider the lives of future people is a recognition of the augmented scope of human action by means of modern technology, which has provided would-be do-gooders with the power to intervene in natural and historical processes to an unprecedented degree, and given us the power to bestow extraordinary good—but also to inflict extraordinary harm—upon future generations. Existential risks to humanity like climate catastrophes and rogue AI threaten to close off a future in which, by MacAskill’s calculations, trillions of happy human lives might be lived as our species proliferates across the universe, migrating when necessary to new solar systems in order to stay ahead of the eventual collapse of each star. As Alexander Zaitchik wryly put it in The New Republic, again riffing on Kant, the philosophy of longtermism “posits that one’s highest ethical duty in the present is to increase the odds, however slightly, of humanity’s long-term survival and the colonization of the Virgo supercluster of galaxies by our distant descendants.” And this is not far off; but although MacAskill and the broader EA community worry about the dangers presented by certain technologies, throughout the book MacAskill reminds the reader of the preeminent danger of scientific and technological stagnation, a threat which would represent a relinquishment of the mastery over our fate which modern science and technology have allocated to human hands. Despite the dangers we court in doing so, MacAskill repeatedly insists, “we need to ensure that the engine of technological progress keeps running.”9
While What We Owe the Future might be described as an articulation of the moral source code that steers much of Silicon Valley as it gazes into the future, MacAskill’s philosophy, of course, did not create the modern tech industry for which he is often said to speak. Their affinity stems, rather, from their partaking of common sources. But the often fawning media coverage that MacAskill and his book have received has never seriously explored the roots of the relationship between effective altruism and the modern tech industry, whose own media coverage in recent years has turned decidedly less sunny than it was a decade ago. Yet the naturally cozy intertwining of the vanguard of modern technology and these descendants of the utilitarian tradition invites deeper probing about how it came to be that even ethics—the family of philosophical questions that broadly inquire about how we should live and act in the world—came to appear to so many influential technologists and prominent philosophers as a technical problem that could be solved, or even conceived, in terms of quantitative abstraction. Simply put, how did ethics come to speak in the language of science and technology? While EA’s quantitative and evidence-based approach to ethics might strike us as an inevitable or even necessary development in a milieu in which STEM disciplines are held in such high regard, the essential strangeness of the movement’s core premises is attested by the fact that it took more than two millennia of serious moral reasoning before anything resembling modern utilitarianism first took shape in Europe. If its conclusions are as sound as they might seem, why did it take so long for this way of thinking to begin to appear so obvious to so many? Beneath EA’s distinctly intuitive idea that the moral life should be one that brings the most possible good into the world is another assumption, far less palatable: that ethics is, at bottom, a technical and calculable problem.
IV. A Homogeneous Universe of Rational Calculation
I won’t be able to answer all of these questions in full, but a few summary observations will throw light upon the movement’s lofty aspirations by noting some of the most important philosophical upheavals that occurred at the beginning of the modern era. As Hartmut Rosa has argued, “[t]he cultural achievement of modernity is that it has nearly perfected human beings’ ability to establish a certain distance from the world while at the same time bringing it within our manipulative reach”.10 That is, we “remove” ourselves theoretically from the world into a position of speculative detachment, but in a coordinated maneuver, we also reach back into the world and manipulate it for our own purposes. Although both of these developments are indeed enduring possibilities for the way we conceive and relate to the world, one of the unique departures of the modern age is the intensification of this “aggressive-distancing relationship” with things.11 This dual tendency—both elements of which were crucial for the establishment of the modern ethic of control I’m interested in here—began to be formulated in Europe in the seventeenth century.
The foundations for EA’s combination of a proactive concern for human welfare and the calculative mindset of modern science were laid early: it was in 1605, just as the spirit of modern science was beginning to stir, that its great prophet, Francis Bacon, famously wrote that “[s]cientific discovery should be driven not just by the quest for intellectual enlightenment, but also for the relief of man’s estate”.12 This maxim, which sounded a tune for so much of what would follow, was an early articulation of what the Canadian philosopher Charles Taylor has more recently called the “rich moral background” of modernity’s many applications of instrumental rationality, which have guided the fashioning of any number of modern technologies designed to make life easier, better, and longer. The ethical mission that has driven their development has, according to Taylor, not always been appreciated by critics of modern technology who, often overlooking the genuine good it has done, have instead tended to bemoan its most baleful consequences.13 Contrary to its detractors, Bacon urged that instrumental reason could be a faithful steward of human wellbeing, and it is this characteristically British optimism about the promises of technology that has now fed and sustained the utilitarian tradition for centuries. Bacon’s vision for the humanizing influence of scientific and technological progress has ever since remained a feature of every debate about the direction of scientific research and technological innovation, and even its most indefensible manifestations have often argued that their ultimate aim had been to bring about human wellbeing in one form or another.
From the start, then, there was more at work in the development of modern technology than the basest libido dominandi—the lust for domination—which it is sometimes accused of embracing as its supreme mandate.14 Both the utilitarian tradition and its effective altruist offspring reject the idea that a scientific understanding of the world can only serve the self-interested schemes of the powerful. It is, in fact, just the other way around: as MacAskill stresses throughout What We Owe the Future, our capacity to do the most possible good rests squarely on continued scientific and technological progress being put to work for the benefit of all. The consequence of this conviction is perhaps the grand bargain of modernity itself: a more humane world in exchange for a less human one. Technology, along with the benevolent number-crunching that guides its deployment, helps us to diminish life’s pain and toil, but in order to do so, its centralizing and administrative tendencies will arrange us into a large-scale techno-bureaucratic system that is governed by algorithms which view us not as human beings, but as variables within a formula devised to maximize some utility.
The words Elon Musk used to explain himself to James Riley, whose son was killed in his Tesla—“most good for most number of people”—paraphrased, probably unwittingly, the words of the eighteenth-century Irish proto-utilitarian philosopher Francis Hutcheson, who, in a 1725 essay, coined what would become the famed utilitarian dictum: “that Action is best, which procures the greatest Happiness for the greatest Numbers”.15 In a section of that essay entitled “How we compute the Morality of Actions in our Sense of them”, Hutcheson writes: “To find a universal Canon to compute the Morality of any Actions, with all their Circumstances, when we judge of the Actions done by our selves, or by others, we must observe the following Propositions, or Axioms”.16 Hutcheson proceeds to set out a series of daunting calculations which, allegedly, follow logically from these axioms to quantify happiness mathematically. Here, the variable B represents Benevolence, A is Ability, S stands for Self-love, I for Interest, and M for the Moment of Good (and before quoting Hutcheson at any greater length, I should mention that my point here does not rest on having the reader parse these calculations in any detail):
[T]he entire Motive to good Actions is not always Benevolence alone; or Motive to Evil, Malice alone; […] but in most Actions we must look upon Self-Love as another Force, sometimes conspiring with Benevolence, and assisting it, when we are excited by Views of private Interest, as well as publick Good; and sometimes opposing Benevolence, when the good Action is any way difficult or painful in the Performance, or detrimental in its Consequences to the Agent. In the former Case, M = (B + S) × A = BA + SA; and therefore BA = M − SA = M − I, and B = (M − I)/A. In the latter Case, M = (B − S) × A = BA − SA; therefore BA = M + SA = M + I, and B = (M + I)/A.
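For readers who would like to follow the algebra without the eighteenth-century typography, the two cases in the quoted passage reduce to the following restatement in modern notation (my own paraphrase, relying on Hutcheson’s identification of private Interest with Self-love exercised through Ability, I = SA):

```latex
% Case 1: self-love conspires with benevolence.
% M = (B + S)A = BA + SA, hence BA = M - SA = M - I, so:
B = \frac{M - I}{A}

% Case 2: self-love opposes benevolence.
% M = (B - S)A = BA - SA, hence BA = M + SA = M + I, so:
B = \frac{M + I}{A}
```

In both cases, the Benevolence B attributed to an agent is simply the Moment of Good M produced, corrected for private Interest I and scaled by the agent’s Ability A.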
The audacity of Hutcheson’s crude attempt to calculate plausible solutions to ethical problems does indeed outstrip MacAskill’s own quantification of happiness over an individual’s lifetime on a scale of −100 to +100, with “0” representing a neutral life, “a life”, MacAskill remarks, “that is neither good nor bad from the perspective of the person living it”.17 Perhaps the most analytically demanding chapter of What We Owe the Future is devoted to the subject of population ethics, a subdiscipline of ethics which employs a set of calculations not entirely unlike Hutcheson’s to compute the long-term effects of actions that affect future lives and the level of their anticipated wellbeing. While MacAskill is not naïve enough to follow Hutcheson’s zeal for computation down to its last detail, he does display an extraordinary confidence in the possibility that we can step outside of any given moral situation and assess it externally and objectively—a maneuver which allows us to measure and compare totally unique situations across infinite variety and nuance. (Indeed, elsewhere in the book, we hear that hunter-gatherer societies often enjoy, in the estimation of some anthropologists, “high levels of community”—an incoherent metric that does less to capture a genuine truth about human togetherness than it does to contort it into nonsense by compulsory measurement.18) In any case, my point here is simply that this kind of calculation, as well as the broader ethic of humanitarian control that it rests upon, is possible only to the degree that we first hold the world before us as an object of detached contemplation.
In December of 2019, replying on Twitter to another user’s post about how the international community might deal most effectively with greenhouse gas emissions, Musk wrote, true to formula: “We should take the set of actions that maximize total public happiness!” And this made sense: Musk had inherited the Enlightenment conviction that outcomes, even of the most complex processes, are essentially manageable, and that policymakers must do so with the aim of bringing about the most possible human happiness. Hutcheson’s quantitative appraisals of moral good anticipated by a generation Jeremy Bentham’s more famous “felicific calculus”, which likewise purports to evaluate scientifically the amount of pleasure and pain—synonyms, in Bentham’s vocabulary, for good and evil—any given action could be expected to cause. Apart from his role as one of the nineteenth century’s chief framers of classical utilitarianism, a status which he shares with John Stuart Mill, who wrote later in that century, Bentham is perhaps most remembered today for developing the techniques of panopticism which are so often invoked as a metaphor for our lives in digital spaces. But in contrast to his part in theorizing a regime of optical surveillance, Bentham is still also celebrated as a progressive social reformer and meliorist of human suffering by means of social and legal reforms. Bentham and other nineteenth-century utilitarians were motivated in this by the many obvious evils accompanying the technologies that were then driving the industrial revolution in the British manufacturing cities and towns of the period. But, rather than decrying the “dark Satanic Mills” of his contemporary and fellow Londoner, William Blake, Bentham largely embraced industrialism and the technologies making it possible for their contributions to the sum of total public happiness.
Indeed, it is no coincidence that utilitarianism partakes etymologically of the very principle which it shares with all technology: the Latin utilitas, which names the usefulness or advantage which a given technique supplies its user, is also crowned the highest moral good by utilitarianism, and it was Bentham who probably first used the word with its moral inflection.
Throughout the historically still-recent development of a technologically-oriented ethical consequentialism, the modern attitude of “aggressive-distancing” to which Rosa referred has remained an abiding theme within the field of ethics, from its earlier formulations in Hutcheson and later in Bentham and Mill all the way to the modern exponents of effective altruism in MacAskill and Musk. All of these contributors cast morality in terms that can be described objectively and accessed technologically through the appropriate combination of theory and technique. Theoretically speaking, this was first made possible, perhaps even inescapable, by a prior conception of moral space as a continuous and undifferentiated expanse of potential utility—what Charles Taylor has elsewhere called a “homogeneous universe of rational calculation”.19 And the prevailing uniformity that characterizes this moral universe, which of course later provided the metaphysical background for Peter Singer’s expanding circle, had itself been preceded by groundwork laid in the seventeenth century—not this time by Bacon, but by Descartes, for whom physical space was understood to extend infinitely and uniformly in all directions.
And here it becomes crucial to again stress just how peculiar—indeed, how utterly confounding—this all looks from the perspective of other moral cosmologies. As intuitive as much of this inevitably sounds to the twenty-first-century ear, ancient moral thought—whether we look for examples to classical Confucianism, or, within a European context, to ancient Greek and Roman virtue ethics, or even to the hedonic tilt of ancient Epicureanism—generally does not recognize the premises, much less the conclusions, of a moral theory which takes for granted a distant yet aggressive anthropology that stands out against a homogeneous cosmic backdrop. Rather, because these traditions assume an entirely different metaphysics, their notion of ethics—and, by extension, the role of technology in furthering its aims—looks quite different. The concept I’m reaching for here has thankfully already been developed by the philosopher Yuk Hui, who has introduced the term “cosmotechnics” to name “[t]he unification between the cosmic order and the moral order through technical activities”.20 Every moral framework, Hui argues, presupposes some metaphysical background which directs and makes sense of its normative vision for human action and the technologies which serve it.21 The modern cosmology, which places the human in a standpoint that is aloof with respect to what has become from its point of view a kind of theater of moral significance, renders the world both visible and objective to a disembedded cognition, and provides the theoretical basis for our reach and control over events occurring within that moral space by enlisting technological procedures to achieve its ends. More simply put, once we have assumed for ourselves a distanced relationship with the world, it then turns out that the world can be set right, corrected, and controlled by reaching back into it technologically.
We should appreciate the selflessness of EA’s charitable programs, and consider seriously many of its proposals that would help charities to do more good in the world. But we cannot allow it to claim the final word in ethical matters, precisely because of the inadequacy of its underlying cosmo-technical premises: namely, the way in which it locates us simultaneously at the center of things, but also, oddly, nowhere. That is, the attitude of aggressive distancing places the things of the world, as it were, before us: uniform, prostrate, and available to no other authority than our own technical administration. Even the capacity of our own bodies to discern pleasure from pain is thus separated from our calculating gaze, which is no longer situated within the world, but abstracted out of it.22 Place has been exchanged for position, and no particular position within the homogeneous moral universe any longer makes a real claim upon us—no planet, not even any particular solar system is ever quite felt to be home—notwithstanding the formidable control we have achieved over this universe to which we no longer really belong.
V. The Uncontrollability of the World
To many readers, this will be a familiar set of charges directed against the modern world, and effective altruism, which epitomizes so many of modernity’s distinctive characteristics, has been no stranger to criticism. Since its emergence, it has fended off detractors from diverse quarters: some are troubled by its insistence on giving the far and the near equal priority in our ethical considerations; others are unpersuaded by its allegedly objective standards for measuring morality, which seem to overlook the cultivation of the virtue and character of the moral agent; it has also been observed that, in the pursuit of a program that posits an ostensibly universal stance on moral issues, the movement has instead perversely adopted the narrow perspective of a tiny group of highly educated Western elites;23 and in the twentieth century, the political theorist Hannah Arendt pointed out that the premise underlying utilitarianism—and at least some of this rubs off onto effective altruism—holds that what we have in common is not a shared world per se, but merely a common inner capacity to experience pleasure and pain.24 That is, what we share is not the world that exists between us, separating and situating us relative to one another, but rather something decidedly unworldly, hidden deep within each of us. But to my mind, the most profound oversight of the emerging elite moral consensus25 which EA can be said to represent is precisely what underlies its enthusiasm for modern technology: namely, as Rosa has pointed out, its aloofness from the world, which it views as an empty vessel for manipulation, control, and ultimately for perfection by means of its own moral designs and imperatives.
In The Uncontrollability of the World, which first appeared in English translation in 2020, Rosa argues that, to the extent that we are successful in rendering the world visible, reachable, manageable, and useful to us, something quite unexpected happens: what he calls the “paradoxical flipside” of our very success is that the world “mysteriously seems to elude us or to close itself off from us. It withdraws from us, becoming mute and unreadable”. To precisely the extent that we are successful in securing control over things, we discover that we ourselves cease to be reachable by them, and that our relationship with the world is reduced to a commerce that moves only in a single direction. The quintessentially modern pathology of alienation and existential anxiety emerges, not despite, but because of our very success in making the universe scientifically visible and technologically controllable. “Everything out there is lifeless and dead, we tell ourselves, and everything in me is mute and cold, and no expansion of our share of the world can liberate us from this condition”.26
Rosa is not offering an alternative ethical theory, exactly, to compete with that proffered by philosophies like effective altruism and longtermism. But he does develop a theory about the phenomenology of the moral universe which precedes the moral reasoning that consumes the pages of What We Owe the Future and countless similar books and internet forums. Rosa’s work points in the direction of what has been called a “eudaimonistic turn” within the field of sociology: departing from the purely descriptive standards to which the discipline has traditionally hewed, he opts instead to explore the features of what he calls a successful relationship with the world.27 The German word “Unverfügbarkeit”, which is not in common use even in its native tongue, is here uncomfortably translated as “uncontrollability”, and it communicates a sense of “non-engineerability”, or perhaps even “elusiveness”. In short, the word is intended to evoke the way in which our connection to the world is paradoxically undermined by the very success of our attempts to reach and to control it. It does not suggest that things cannot be controlled, or that engineering as such is impossible. Quite the opposite: we plainly possess an extraordinary power over things, but this power harbors within itself the seeds of its own futility: the mastery it confers comes at the cost of a flatter, more distant, and one-directional experience of the world. In Rosa’s interpretation, we have progressively replaced reachability—the condition of being in dialogue with things, able both to affect and to be affected by them—with controllability.28 And this is what he calls modernity’s “essential contradiction”: the control that scientific and technological proficiency has helped us to procure is accompanied by a corresponding loss of what he calls resonance.29
“The basic mode of vibrant human existence”, Rosa writes, “consists not in exerting control over things but in resonating with them, making them respond to us […] and responding to them in turn”. A book, poem, or a piece of music speaks to us precisely to the extent that we have not yet fully assimilated it, that something about it eludes us and continues to be a source of fascination for us; similarly, a relationship with another person will continue to harbor new possibilities only to the extent that we have not become convinced that we have fully grasped or understood everything about them—so long, that is, as we perceive hidden depths within the other person that have not yet revealed themselves to us.30 When we lose our awareness of ourselves as sensitive beings with the capacity to be affected by experiences and encounters that are not fundamentally at our beck and call, the world ceases to have anything left to say to us, and we are ultimately left feeling cold and unfulfilled. This is why, in Rosa’s view, resonance is at odds with the relentless escalation and optimization that characterizes modernity: our pursuit of mastery tends to cut directly against any possibility of experiencing the world as something that acts upon us, and to which we respond.31
Crucially, just as resonating with things which we do not completely control is a necessary feature of flourishing life as individuals, a culture at large, to be truly dynamic and alive, and not simply sterile and dead, also needs to negotiate perpetually both with itself and with forces that exceed its comprehensive grasp. At the level of individual experience, we might notice some similarity between Rosa’s account of resonance and what MacAskill seems to have in mind when he writes of individual happiness or wellbeing, which he says might involve “joyful experiences, or meaningful accomplishments, or the pursuit of knowledge and beauty, or the satisfaction of one’s preferences, or all of these things combined”.32 But here is the contradiction at the heart of effective altruism and longtermism: these movements hope to maximize the number of future lives that are marked by freedom and flourishing—features that distinctly resemble Rosa’s “resonance”—yet they propose to do so within a context of total scientific and technological oversight of human choices and civilizational destiny.
MacAskill warns at one point of the value lock-in that would result if an “AI-assisted perpetual global totalitarianism” manages to take hold sometime in the future.33 His next sentence is one I have already quoted: “[A]t the same time, we need to ensure that the engine of technological progress keeps running.”34 But MacAskill continues:
If we are to meet these challenges and ensure that civilisation at the end of this century is pointed in a positive direction, then a movement of morally motivated people, concerned about the whole scope of the future, is a necessity, not an optional extra.
He seems to think, in other words, that unlike a global totalitarianism perpetually overseen by artificial intelligence, longtermist values alone can in the long run preserve human freedom and keep open the possibilities it requires. But he seems untroubled by the thought that the need to supervise and control every factor that may conceivably affect our prospects for maximizing these values in the long term is itself another, albeit softer, form of authoritarianism, one that warrants resistance no less than the hegemonic decrees of a totalitarian AI. In the book’s first chapter, MacAskill articulates what he thinks is the pressing issue of our time, and which will determine the shape of human social organization for the long-term future, with striking candor: “society has not yet settled down into a stable state, and we are able to influence which stable state we end up in.”35
There is no way around the fact that the dictates of longtermism, which MacAskill hopes will set the terms for civilization in the long run, are in direct conflict with a vision of human wellbeing that is defined by a receptive, and not solely assertive, relationship with the world. Such a relationship requires something that effective altruists forsake at the outset: an open and dialogical exchange between human beings and the world as we encounter it, rather than a regime designed only to further scientific visibility and technological control over things. On the level of individual life, MacAskill can justly deny this charge: he is happy to accept that the wellbeing of each of us rests to some degree on the same responsive exchange that is central to Rosa’s concept of resonance. But against the broadest temporal frame, which is the signature horizon of longtermism, humanity at large must foreclose the possibility of receptivity, and trade it for the aggressive stance which it has deemed necessary for long-term value-maximization.
The ancient Greek tragedians understood that, as Robert D. Kaplan has recently written, there is something “irremediably wrong with the world”—namely that, in human affairs, one good must often triumph over another good.36 They recognized, in other words, that the world is constituted in such a way that it does not admit of whatever absolution can be devised by the calculative and objectifying capacities of the human mind; we must often choose—to the extent that we have a choice at all—which deeply flawed path is preferable to its alternatives. And here, it seems, we must choose between, on the one hand, our power to direct the course of future history in order to maximize the wellbeing of the greatest number of our most distant descendants, and, on the other, the possibility of the kind of resonant relationship with the world that we forfeit when we insist upon an outlook of aggressive distance. Only in the latter can we cease to experience ourselves as simply at odds with a sterile, homogeneous, and fundamentally recalcitrant universe, and discover a genuine stance of receptivity and openness that could perhaps begin to reconstitute our lost sense of place. As is so often the case, whether we notice it or not, these two competing goods have come to exist in direct conflict with each other; What We Owe the Future is an unusually articulate statement of confidence in our own ability not to have to make such choices, but rather to dictate the terms on which history will unfold, and human life will be experienced. This cannot work.
To some readers, this will sound like a fatalistic abdication of the responsibility which we long ago accepted when modern science began its campaign for the relief of the human estate. To be clear, I am not questioning the importance of effective altruism’s priorities like global health and pandemic prevention, climate change, or AI safety; I admire the work many good people have done, and are doing, to advance the cause of charity and human wellbeing, and I think these global problems warrant the attention the movement says they do. This places those of the movement’s critics who, like me, worry about the theoretical assumptions that have allowed its most number-crunching aspects to appear so plausible in something of a bind: we can point out the contingency and futility of the modern scientific world-picture only at the risk of seeming to dismiss the movement’s priorities—or, worse, to diminish the genuine good the community has done in the pursuit of worthy goals. But in spite of these theoretical reservations, there is a sense in which—to adapt a once-famous line uttered reluctantly by Milton Friedman of the then-ascendant Keynesian economic theory—we are all longtermists now. By this I mean that the technologies which we already possess have placed into our hands certain responsibilities that require the very long-term thinking which the longtermist movement urges. Perhaps the extraordinary self-confidence and urgency that animates longtermism is a justifiable, or even inevitable, response to the technological realities of our moment. But we should also remember that an epoch in which a philosophy like longtermism can present itself as the only credible approach to moral questions is on a dubious trajectory indeed.
During the Industrial Revolution, William Blake famously wrote of the “mind-forg’d manacles” that confine us when we cease to be able to conceive of alternative ways of being in the world, and a single vision has alone come to lay claim to the imagination.37
The implicit optimism of a movement like longtermism sits uncomfortably within the rather downcast and foreboding cultural mood which has descended upon much of the world in the past several years. Many of the triumphal pieties of more secure post-Cold War decades, which once seemed to illuminate a future in which the development of a certain consensus could bring an end to the ideological struggles that constitute history itself, have come under attack from many sides. When What We Owe the Future first appeared in 2022, the international order had already been made to shudder in a way it hadn’t in decades. And while the social and political fallout of social media and the fragmentation that is abetted by digital media of all kinds were by then well-known, the bewildering sally of artificial intelligence into the economy, education, and, before long, politics would soon unsettle many of our lingering assumptions about the liberating and enlightening promises of techno-rational progressivism. Longtermism doesn’t claim to hold the solution to every problem—after all, we don’t yet have all the best data with which to tame the world’s innumerable stubborn contingencies. But the confidence that human life, society, and history not only can be navigated by formulas that seek to maximize utility, but that doing so amounts to a kind of moral ideal, is nothing less than an astonishing development, and we should pause to remember the momentous trade that is making it possible.
This view was explored last year in a piece by Christopher Cox in The New York Times Magazine, which is a major source for much of this paragraph.
Hartmut Rosa, The Uncontrollability of the World, pg. 11
Ibid.
In the introduction to What We Owe the Future, MacAskill claims to have spent two solid years fact-checking the book, so committed is he to the rigorous deployment of facts and data. See MacAskill, What We Owe the Future, pg. 7.
Singer writes in his 1981 book The Expanding Circle: “The only justifiable stopping place for the expansion of altruism is the point at which all whose welfare can be affected by our actions are included within the circle of altruism. This means that all beings with the capacity to feel pleasure or pain should be included; we can improve their welfare by increasing their pleasures and diminishing their pains. The expansion of the moral circle should therefore be pushed out until it includes most animals.” Peter Singer, The Expanding Circle, pg. 120.
MacAskill, What We Owe the Future, pg. 9. In an extraordinary passage, MacAskill later speculates—before stopping himself—about the possibility of sustaining technological progress by means of a regime of cloning and social conditioning. He ultimately sets the idea aside, though largely on the grounds that regressive social attitudes would constitute an unfortunate barrier to realizing it. MacAskill writes: “If scientists with Einstein-level research abilities were cloned and trained from an early age, or if human beings were genetically engineered to have greater research abilities, this could compensate for having fewer people overall and thereby sustain technological progress. But in addition to questions of technological feasibility, there will likely be regulatory prohibitions and strong social norms against the use of this technology—especially against the most radical forms, which would be necessary to multiply effective research efforts manyfold.” MacAskill, What We Owe the Future, pg. 156.
Ibid., pg. 244
Rosa, The Uncontrollability of the World, pg. 30 (italics mine)
Ibid. This point touches on a theme I wrote about in quite a different context back in December in a post which can be found here.
Bacon, The Advancement of Learning. Indeed, effective altruism shares not just the goals but also the methods of science. As the “EA Handbook” puts it: “Effective altruism can be compared to the scientific method. Science is the use of evidence and reason in search of truth—even if the results are unintuitive or run counter to tradition. Effective altruism is the use of evidence and reason in search of the best ways of doing good.”
Charles Taylor, The Malaise of Modernity, pg. 105
The term “libido dominandi” comes from the original Latin of Augustine’s short preface to The City of God, where, referring to the Roman empire, he remarks that “I cannot refrain from speaking about the city of this world, a city which aims at domination, which holds nations in enslavement, but is itself dominated by that very lust for domination [libido dominandi]”.
Francis Hutcheson, “An Inquiry Concerning the Original of Our Ideas of Virtue or Moral Good”, pg. 125
Ibid., pg. 128
MacAskill, What We Owe the Future, pg. 170
Ibid., pg. 206
Charles Taylor, Sources of the Self, pg. 83
Yuk Hui, The Question Concerning Technology in China, pg. 19
Ibid., 19. Hui writes that “[s]cientific and technical thinking emerges under cosmological conditions that are expressed in the relations between humans and their milieus, which are never static” (pg. 18). In the cosmo-technical arrangement I am describing here—and in any such arrangement—there is a profound connection between the way nature has come to be conceived in the modern era and the shape that ethics has assumed. And the role of technology is to facilitate our ability to take charge of the contingencies of nature and transform them into purposeful actions that concretize moral ideals.
This idea played a major part in my last post about scientific and technical expertise. You can find a link to the most relevant portion of that essay here.
Much as I have tried to argue here, Arendt wrote: “Moreover, Bentham’s basic assumption that what all men have in common is not the world but the sameness of their own nature, which manifests itself in the sameness of calculation and the sameness of being affected by pain and pleasure, is directly derived from the earlier philosophers of the modern age.” Hannah Arendt, The Human Condition, pg. 309
In addition to Elon Musk’s endorsement, a veritable cavalcade of contemporary thought-leaders have blurbed the book, including Tyler Cowen, Sam Harris, Ezra Klein, Tim Urban, and more.
Rosa, The Uncontrollability of the World, pg. 34
See Rosa, Social Acceleration: A New Theory of Modernity, “Translator’s Introduction”, pgs. xxv-xxx. Also Rosa, Resonance: A Sociology of Our Relationship to the World, pgs. 17-31.
Rosa, The Uncontrollability of the World, pgs. 57-8
Ibid., pg. 39
Ibid., pg. 31
Ibid., pg. 37
MacAskill, What We Owe the Future, pg. 170
Ibid., pg. 244
Ibid.
Ibid., pg. 28 (italics mine)
Robert D. Kaplan, The Tragic Mind, pg. 7. As Kaplan writes: “Tragedy is about bravely trying to fix the world, but only within limits, while knowing that many struggles are poignant and tragic precisely because they are futile.” (Kaplan, The Tragic Mind, pg. 8)
Cf. William Barrett, The Illusion of Technique: A Search for Meaning in a Technological Civilization, pg. xv-xvi
The “utilitarian” impulse motivating these world-improvers (like their predecessors) depends on the assumption of a single dimension of pre-determined ends, which can express all human goals.
There’s no possibility of true disagreement or conflict between values, provided you’ve got the cognitive horsepower to collect the data and crunch the numbers. That’s deeply implausible.
Worse, while some of their ideals and aspirations are worth thinking about, the path that the “Effective Altruist” world-improvers follow will lead to a colorless, sterile future empty of the values that make life worth caring about.
Hi Jordan, I'm repeating here the comment you'll have seen me post on another discussion site we're both on.
Having now read it, I liked it and (to paraphrase my favourite comedian, Stewart Lee, I agreed the fuck out of it). However I hanker for someone to take me as concisely and as powerfully as possible through the process whereby EA instrumentalises — or perhaps the right term is ‘objectifies’ or ‘managerialises’ ethics. I’d love to do it myself of course, but I haven’t and I’m sure it could be done better than I could do it.
I was interested to read the German philosopher you quoted, but somehow it made what you were writing a little more derivative. There’s something else that occurs to me as I write this which is that with EA — and positivism generally — their philosophical underpinnings are obsessively oriented around the obvious. In this case that
1) If you’re trying to do good, you should do as much as you can as effectively and efficiently as you can.
2) that whether or not one posits everyone weighing exactly equally in our moral thinking, moral questions can be expressed in equations. Total G = xG + yG. (Total good equals everyone’s good all added up.)
Broadly speaking, these conclusions are unarguable. I accept them, not just intellectually, but in my practical life. To borrow from an old economist, Alfred Marshall, I try to have a warm heart and a cool head.
Yet I have to say that this kind of thing being paraded as deep thought both bores me and makes me doubt that I’m really dealing with serious thinkers — for all their academic credentials. Like interminable books on leadership, particularly business leadership, there are a few precepts, and the rest is repetition.
And there’s an irony because, as EA demonstrates so amply, in pursuing obvious premises, they end up in some very strange and unobvious places — that in our widening circle of concern, we should turn our hearts, minds and scarce resources to the gazillion beings in centuries to come that will be affected by human extinction (even if our understanding of the future world we’re trying to improve is infinitesimal).