Local Utilitarianism

(I’m ripping off Sasha Chapin and writing a “post” every “day” for 30 days. Kindly adjust quality expectations.)

Do you like choosing where to donate based on randomized controlled trials, but smell something fishy every time someone brings up utility-maximizing computronium Dyson spheres?

If so, local utilitarianism is for you.

Local utilitarians use utilitarian reasoning to make decisions within the scope of current human concerns, but they refuse to make any claims about morality beyond that scope.1 They choose to save 10 lives rather than 1 for the same amount of money, but they don’t make decisions to maximize the expected well-being of whatever species humanity will have evolved into in 1 million years. They worry about whether Siri can suffer and how we’d know if it did, but they don’t think it’s possible to reason about how good or bad a post-Singularity world might be.

[Figure: “Local Utilitarianism” (graphic gratefully adapted from Prof. Keenan Crane.)]

Local utilitarianism isn’t moral cluelessness, the concern that we can’t predict the results of our actions. It’s closer to utilitarianism with moral uncertainty that grows the farther we get from the current state of affairs.

If you’re concerned that local utilitarianism isn’t totalizing the way most moral systems are, don’t be. We’re comfortable using Newtonian mechanics to describe the physics of our everyday world despite knowing it breaks down when things get really big, really fast, or really far away. Sure, ethics isn’t science, and you can’t port every scientific intuition to it. But utilitarianism tries hard enough to be scientific that such locality considerations make sense.

As the above graphic suggests, there might be other types of utilitarianism (or better-adapted ethical frameworks) that apply in situations wildly different from our own. But you can’t translate between them in any obvious way - they’re off the map of the version of utilitarianism we can comprehend and use today.

What counts as “local”?

It’s clearer when Newtonian mechanics stops working than when our ethical considerations are outside the “scope of current human concerns”. Local utilitarians are happy to change their scope as we learn more about the universe. Also, local utilitarians aren’t against thinking about “weird” considerations like intelligence explosions or wild animal suffering, provided we can reasonably apply our current intuitions or understanding. But we can safely label a few things out-of-scope, at least for now.

One family of situations that local utilitarianism politely declines to consider is any in which general relativity comes into play. Time dilation, the fate of beings inside black holes, the lack of a clear notion of “the future”, observer-dependence: these aren’t handled by any utilitarian analysis I’m aware of, despite them all being things humanity’s galaxy-colonizing descendants will have to contend with in the long-term future. (Should we move Earth near a black hole so we can stretch out the fun even longer? “Not my department,” says the local utilitarian.)

Situations involving minds extremely different from ours are also out-of-scope. The line is trickier to draw here: we have a lot to learn that’s relevant to animal suffering or to particular AI systems we might build in the near future. But the space of possible minds – considered as generally as possible – is vast, and by the time we’re talking about panpsychism or AIs that are to humans as we are to ants, the local utilitarian is happy to admit we have no idea what we’re talking about, and probably never will.

Another out-of-scope family is situations in which humanity goes extinct, which leads to the next question.

How’s local utilitarianism different from longtermism?

The practical conclusions of local utilitarianism and longtermism are mostly identical. Local utilitarians just have better reasons for them.

Longtermists acknowledge that the long-term future is going to be so different from the present that we can’t really say anything about it. But they still think they know enough to know we want it to be “big” in some way, and especially that we don’t want humanity to go extinct before we get there. This opens longtermism up to a broadside of speculative philosophizing against which it can’t really defend itself. What if humanity needs to go extinct to clear the way for a better descendant? What if it’s really better for there to be only 1 billion humans? Nevertheless, mitigating existential risks remains a top longtermist priority.

Local utilitarians refuse to make claims about such drastically different long-term futures. But they still care deeply about mitigating existential risks for the simple reason that most routes to extinction would be quite unenjoyable, thanks very much! Nuclear war: unenjoyable. Super-ebola: unenjoyable.

There is a difference in how longtermists and local utilitarians think about extinction, though. Most longtermists would consider it bad if one evening the earth instantly disappeared with a “poof” at the caprice of an almighty being or alien species. Local utilitarians don’t.2 They’re anti-extinction, but not anti-being-extinct, since a universe without humans is definitely outside the scope of current human concerns.

How’s local utilitarianism different from neartermism?

Local utilitarianism doesn’t have a discount rate. If humanity enters a 10,000-year stagnation under a stultifying authoritarian regime, the same utilitarianism applies the whole time. The narrow scope in which utilitarianism applies for a local utilitarian isn’t delimited by time, but by fidelity to our current world.


Have feedback? Found a mistake? Please let me know!


  1. This idea must already have a name in philosophy. There aren’t any new ideas in philosophy. I’ll gladly change the name if someone points it out to me. The only prior art I’ve been able to find is this. ↩︎

  2. This is true even of local preference utilitarians, since the personal interest a person can have in the universe being devoid of humans is - by the axioms of local utilitarianism - undefined. ↩︎