Does the FTX debacle hold lessons for moral philosophers? For those interested in public philosophy?

1. What’s the connection?

The head of FTX, billionaire Sam Bankman-Fried, has been an active part of the “effective altruism” movement, which was inspired by Peter Singer’s application of utilitarianism to problems of world hunger (especially the classic 1972 essay “Famine, Affluence, and Morality”) and, more recently, shaped by the philosophers Toby Ord and William MacAskill (both at Oxford). Further, it seems MacAskill was an influence on the general direction of Bankman-Fried’s career. As Sigal Samuel puts it in an article for Vox:

When Bankman-Fried was in college, he had a meal that changed the course of his life. His lunch companion was Will MacAskill, the Scottish moral philosopher who’s the closest thing EA has to a leader. Bankman-Fried told MacAskill that he was interested in devoting his career to animal welfare. But MacAskill convinced him he could make a greater impact by pursuing a high-earning career and then donating huge gobs of money: “earning to give,” as EA calls it… So the young acolyte pursued a career in finance and, later, crypto. To all appearances, he remained a committed effective altruist, pouring funding into neglected causes like pandemic prevention. 

Since FTX tanked last week (see the Bloomberg columns here and here; also this blog post), philosophers have weighed in.

2. What are the lessons for Effective Altruism?

Among those who commented was MacAskill, who wrote in a series of tweets:

If there was deception and misuse of funds, I am outraged, and I don’t know which emotion is stronger: my utter rage at Sam (and others?) for causing such harm to so many people, or my sadness and self-hatred for falling for this deception. I want to make it utterly clear: if those involved deceived others and engaged in fraud (whether illegal or not) that may cost many thousands of people their savings, they entirely abandoned the principles of the effective altruism community.

If this is what happened, then I cannot in words convey how strongly I condemn what they did. I had put my trust in Sam, and if he lied and misused customer funds he betrayed me, just as he betrayed his customers, his employees, his investors, & the communities he was a part of.

Some might argue that effective altruism provided moral cover for poor business practices in an environmentally unfriendly sector, a possibility MacAskill entertains here:

If FTX misused customer funds, then I personally will have much to reflect on. Sam and FTX had a lot of goodwill – and some of that goodwill was the result of association with ideas I have spent my career promoting. If that goodwill laundered fraud, I am ashamed.

I initially thought charges like this were easy to make but hard to substantiate, since they depend so much on mind-reading, but Bankman-Fried helpfully admitted that EA was “mostly a front” in an interview this week:

Kelsey Piper: So the ethics stuff- mostly a front? People will like you if you win and hate you if you lose and that’s how it all really works?

Sam Bankman-Fried: Yeah. I mean that’s not *all* of it but it’s a lot…

Kelsey Piper: You were really good at talking about ethics, for someone who kind of saw it all as a game with winners and losers

Sam Bankman-Fried: Ya. Hehe. I had to be. It’s what reputations are made of, to some extent…

Additionally, to the extent that not “all” of the ethics was a front, some might worry that the connection to EA contributed to FTX making inadvisable or unethical moves via “moral licensing.”

So, one of the lessons for philosophy’s advocates of EA, who appear very sincere in their commitment to doing good, might be to be more cynical about people. Other lessons?

3. What are the lessons for utilitarianism? For moral philosophy? For public philosophy?

One of the things MacAskill says is that the kind of behavior FTX executives engaged in goes against the tenets of EA, presumably even when such behavior would help achieve EA’s end goals:

For years, the EA community has emphasised the importance of integrity, honesty, and the respect of common-sense moral constraints. If customer funds were misused, then Sam did not listen; he must have thought he was above such considerations. A clear-thinking EA should strongly oppose “ends justify the means” reasoning. 

One might be curious how a movement whose origins are in utilitarianism can accurately be said to emphasize “common-sense moral constraints” and “strongly oppose ‘ends justify the means’ reasoning.” But since Bentham, utilitarian moral thinking has often been preoccupied with minimizing the apparent gap between its prescriptions and those of common-sense morality, and since Mill we’ve seen increasingly sophisticated versions of utilitarian (and more broadly, consequentialist) moral theories that try to do just that. Furthermore, even if effective altruism was inspired by Singer’s utilitarianism, there’s nothing incoherent about a non-consequentialist approach to it that places constraints on how one may permissibly benefit others.

MacAskill provides some passages from his recent book in which he appears to take seriously such constraints. However, I can’t tell if he or other leaders of EA are on board with them. For example, he says “plausibly, it’s wrong to do harm even when doing so will bring about the best outcome.” Yet anyone as smart as MacAskill (and as familiar as he is with contemporary moral philosophy) could come up with a variety of compelling counterexamples to “it’s wrong to do harm even when doing so will bring about the best outcome” in a heartbeat. So I doubt he believes it—at least not in that simple form. And so, despite EA’s “in principle” compatibility with deontological constraints, when we are talking about EA in practice, I think we are talking about something utilitarian-like, and we shouldn’t be surprised when that practice ends up deviating from common-sense morality or endorsing “ends justify the means” reasoning. (For a defense of the idea that at the level of “practice”, utilitarianism, on instrumental grounds, coherently endorses familiar deontological constraints, see this recent post by Richard Chappell.)

That said, “fraud, illiquidity, and sloppy bookkeeping in the name of maximizing utility” does not appear to be the explanation for FTX’s demise. So what could it have to do with moral philosophy? Why judge a moral theory by the behavior of people who fail to live up to it?

But what if that moral theory is too often associated with immorality in practice? That’s a question Eric Schliesser (Amsterdam) raises about utilitarianism:

Within utilitarianism there is a curious, organic forgetting built into the way it’s practiced, especially by the leading lights who shape it as an intellectual movement within philosophy (and economics, of course), and as a social movement. And this is remarkable because utilitarianism for all its nobility and good effects has been involved in significant moral and political disasters involving not just, say, coercive negative eugenics and—while Bentham rejected this—imperialism (based on civilizational superiority commitments in Mill and others), but a whole range of bread and butter social debacles that are the effect of once popular economics or well-meaning government policy gone awry. But in so far as autopsies are done by insiders they never question that it is something about the character of utilitarian thought, when applied outside the study, that may be the cause of the trouble (it’s always misguided practitioners, the circulation of false beliefs, the wrong sort of utilitarianism, etc.). 

In my view there is no serious study within the utilitarian mainstream that takes the inductive risk of itself seriously and—and this is the key part—has figured out how to make it endogenous to the practice. This is actually peculiar because tracking inductive risk just is tracking consequences and (if you wish) utils. 

Schliesser says that there’s “something about the character of utilitarian thought, when applied outside the study” that’s problematic. What is that something?

Despite it being over two centuries since Bentham espoused utilitarianism (not to mention over two millennia since Mozi), and—especially—despite the storm of developments in consequentialist thinking over the past 50 years, the version of consequentialism that tends to get carried “outside the study” is really, well, simplistic. It is not, say, the sophisticated consequentialism of Railton, nor the rights-respecting consequentialism of Pettit, nor the rule-consequentialism of Hooker, nor the responsibility-constrained consequentialism of Mason, nor the moderate aggregationism of Voorhoeve, nor the commonsense consequentialism of Portmore, etc., etc., etc. Those are too complicated or too subtle or too informed by engagement with the problems of moral philosophy or moral life, or too challenging to readily explain and apply. Instead, the version of utilitarianism that tends to get taken on board “outside the study” is over-the-counter basic utilitarianism, the common side effects of which are bullet-biting, shoulder-shrugging, hand-waving, wishful thinking, and an increased temptation to push people off of footbridges. Perhaps there are concerns about “the character of utilitarian thought” because it’s this basic version that people usually have in mind.

Yet, the utilitarians might reply, isn’t it normally the case that it’s the unsophisticated versions of moral philosophy that tend to get popular attention and application, and wouldn’t it be reasonable to ask how that has turned out for other moral theories? It’s not as if, say, divine command theory fares less disastrously “outside the study.” And isn’t the closest thing to a popular Kantian movement… Ayn Rand? (Oh the irony.) Should we expect the Kantians to give up Kant because some people think taxation for public schools violates autonomy? Should we expect divine command theorists to give up God because some people thought they were sent by him on a crusade to murder non-believers? So maybe the fact that only a crude version of utilitarianism gets taken up outside the study isn’t a reason to be especially concerned about utilitarianism.

But maybe it’s a reason to be concerned more generally about moral philosophy? Or about how it’s taught? Or about how it is packaged for the public?

Discussion welcome.

Originally appeared on Daily Nous

Justin Weinberg
