Competitive Ethics

If antinatalists are right that having children is wrong, does it matter once the antinatalists go extinct?

If you build an “ethical” AI that keeps getting deleted by its “unethical” AI peers, have you accomplished your mission of building ethical AI?

Is religious tolerance a fatal flaw in liberal democracy if religions with illiberal doctrines can always become a majority?

If we’re going to think hard about what’s right, shouldn’t we also think hard about what wins?


Competitive ethics (I’d be happy to find a better term) is the study of ethical ideas as strategies or phenotypes that are competing for mindshare. This is opposed to the usual study of ethics,1 which is concerned with what’s right and what’s wrong.

Competitive ethics is to morality as FiveThirtyEight is to politics.

FiveThirtyEight doesn’t tell us which candidate’s positions are correct, and we don’t expect them to. We expect them to tell us who will win.

Unlike applied ethics (“How should I act in this specific situation?”), normative ethics (“What criteria should I use to do applied ethics?”), or meta-ethics (“How should I think about normative ethics?”), competitive ethics is amoral. Not immoral: amoral. It’s not concerned with right and wrong, just with prediction and understanding.

How ethics compete

There are many lines of thinking relevant to this question, but I can’t find any that address it directly.

The most relevant are cultural selection theory, memetics, and neoevolutionism, though these are too tied up with evolutionary theory. The subfields of evolutionary ethics and game-theoretic ethics stick to normative or occasionally meta-ethical questions, and don’t seem to have studied what happens when ethical systems go toe-to-toe.

Other work studies the relationships among the ethics people espouse, the ethics they consciously believe, and how they actually behave; all three can, of course, be quite distinct. Preference falsification, social contagion theory, and behavioral economics are the relevant disciplines here, and even the legal profession has touched on this. Professed ethics are the fastest of the three to change, à la preference falsification. It’s an open question whether ethical beliefs change faster than behavior.

Another important issue is the fuzzy line between biologically determined preferences and ethics. The former clearly influence the latter within a single individual, and the latter influence the former across generations. And the more technology lets us intervene on biology, the fuzzier the line gets. Wibren van der Burg’s Dynamic Ethics comes closest to addressing this, though it’s a work of normative ethics. He writes, for example: “Our dynamic society requires a dynamic morality and thus a form of ethical reflection which can be responsive to change.” A few others have touched this question, but not many.

One interesting framing is ethics as a distributed or hierarchical controller, in the control theory sense. This brings a host of ideas to the discussion of what might make ethical systems more or less stable, including the good regulator theorem (“every good regulator of a system must be a model of that system”), the potential optimality of false beliefs, and the advantages of certain types of internal variability.

Case studies

Natalism and heritability

The most straightforward way ethical systems compete is by the degree of natalism and heritability they entail: how many offspring do their believers produce, and how effectively are they passed from parents to children?
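As a toy illustration of this dynamic (every number below is invented, and the asymmetry — defectors flow only from the high-fertility group to the majority, never the reverse — is a deliberate simplification):

```python
# Toy model of two communities competing via fertility and retention.
# All numbers are invented for illustration. Each generation, the
# fundamentalist group has more children but loses a fraction of them
# to the secular majority; the secular group is below replacement.

def step(fund, secular, fert_fund=1.6, fert_sec=0.9, retention=0.9):
    """Advance one generation; defecting children join the secular group."""
    children_fund = fund * fert_fund
    children_sec = secular * fert_sec
    kept = children_fund * retention
    defected = children_fund - kept
    return kept, children_sec + defected

fund, secular = 1.0, 100.0   # secular group starts with a 100x head start
for _ in range(10):
    fund, secular = step(fund, secular)

share = fund / (fund + secular)
print(f"fundamentalist share after 10 generations: {share:.0%}")
```

Even starting at 1% of the population, the compounding of fertility × retention erases a hundredfold head start within about ten generations.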

The best recent work on this topic is from demographers like Eric Kaufmann. In his book Shall the Religious Inherit the Earth?, Kaufmann lays out the remarkable growth trends of religious fundamentalist groups in the modern world. Fundamentalist religious groups whose ethics encourage high fertility and strict adherence to the faith are contrasted with modern Western cultures whose ethics deride (or at least don’t encourage) fertility and encourage freedom of thought. Norms against homosexuality are also relevant to this question, at least in a world where homosexual couples have no or low fertility.

Most fundamentalist groups rely on the generosity of their host society to flourish as they do — e.g. the ultra-Orthodox in Israel, who generally don’t have jobs — so it’s not clear when this will hit a breaking point. Additionally, different natalist groups have differing success at retaining members: the ultra-Orthodox seem good at it; movements like Quiverfull seem less so. While I’m biased to think ethics of free thought are more attractive than ultra-Orthodox ethics, ethics of free thought combined with low fertility may not be sustainable. After all, nothing reproduces better than reproduction.

More broadly than religion, there is a correlation between female power in a society — especially control over reproduction — and lower natality. This is a bit worrying for the future of women’s rights, especially if male power is correlated with both natalism and warlike or proselytizing behavior. Then again, weapons technology and fertility technology may completely change these dynamics.

Euthanasia and suicide

Norms against euthanasia and suicide are a counterpart to natalism. One would expect such beliefs to be excellent at propagating themselves, yet many cultures have practices of euthanasia or ritual suicide, so the competitiveness of these norms is not clear-cut.

Relatedly, anti-suicide ideas — Camus’s absurdism, perhaps — may have an interesting niche: if you’re the only idea keeping someone alive, you’ve got an (at least temporary) monopoly on their life.

Would a society that fully embraced euthanasia and destigmatized suicide suffer the same fate as an antinatalist society? I suspect it would be composed mostly of people who want to be alive, which could work in its favor. But in the face of a changing world that might quickly become not-fun-to-live-in, perhaps anti-death norms are more competitive in the long run. Then again, a minority of the population maintaining these norms might be enough to capture most of the benefit.

Nihilism and motivation

I know of no work studying the comparative effects of ethical belief systems on motivation. In fact, I don’t know whether it’s even demonstrable that motivated individuals are more successful. But assuming they are, and assuming ethics like moral nihilism demotivate people (or at least fail to motivate them), the long-term viability of such ethical systems is questionable. Going further, it may be that selfish ethical systems (e.g. Ayn Rand’s, Gordon Gekko’s) are more associated with motivation and success than egalitarian ones.

Causality and correlation are hard to tease apart here, but teasing them apart isn’t necessary: an ethical system can win either by granting success to its holders or by being adopted by successful individuals.

Exclusivity and Conversion Rates

Much like a sales team’s, the success of an ethical belief is determined by its conversion rate and its retention rate. The two are sometimes at odds: exclusive ideologies often have higher retention rates, but inclusive ideologies are easier to join.

Take the far-left vs. the far-right in the US. Social justice movements with ethics of “it’s not my job to educate you” probably repel many potential converts, but they provide their adherents with a feeling of being in an exclusive club. On the other hand, I’ve heard that far-right groups are much more welcoming to newcomers — or at least willing to explain their doctrine and answer questions — than far-left groups.
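The conversion–retention trade-off can be sketched with a toy flow model (all rates invented): each period, an ideology converts some fraction of outsiders and retains some fraction of its members, and the long-run market share falls out of the fixed point.

```python
# Toy flow model of ideological market share (all rates invented).
# Each period an ideology converts a fraction c of outsiders and retains
# a fraction r of members: x' = r*x + c*(1 - x). Setting x' = x gives the
# long-run fixed point x* = c / (c + (1 - r)).

def equilibrium_share(c, r):
    """Long-run membership share for conversion rate c, retention rate r."""
    return c / (c + (1 - r))

exclusive = equilibrium_share(c=0.01, r=0.99)  # hard to join, hard to leave
inclusive = equilibrium_share(c=0.05, r=0.90)  # easy to join, easy to leave

print(f"exclusive ideology's long-run share: {exclusive:.0%}")  # → 50%
print(f"inclusive ideology's long-run share: {inclusive:.0%}")  # → 33%
```

In this sketch the exclusive ideology ends up with the larger share despite converting five times more slowly, because losing members slowly matters just as much as gaining them quickly.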

The old question “Why aren’t there any libertarian states?” also comes to mind.

AI alignment

Eliezer Yudkowsky is purported to have said “You are personally responsible for becoming more ethical than the society you grew up in.” This quotation is interesting in that (1) it’s a normative claim about normative claims, and (2) it assumes that ethics has a direction.

While I like the sentiment, it’s reminiscent of when people make biologists cringe by saying things like “humans are more evolved than snails.” Evolution doesn’t have a partial ordering by which some species can be more or less evolved than others. From the competitive ethics perspective, neither do ethics.

Most people who work in AI alignment treat human values the way scientists treat complex systems they can’t fully model: there exist some true, foundational human ethics, and while we can’t articulate them, we can still try to hew to them. I’m far from convinced that these true, foundational human ethics exist. And even if you think you’ve found them, if the AI you build according to them keeps getting deleted by its “unethical” AI peers, have you accomplished your mission of building ethical AI?

I have trouble engaging with AI alignment research that doesn’t put competitive ethical questions front and center.

When you can truly change your mind

The entire AI alignment section applies to human beings, too, in a future where people can change their beliefs with neurotechnology.

Extensions of competitive ethics

Competitive ethics on its own is amoral. But it can be a building block for other ideas.

Consider a meta-ethics — call it ethical consistentism, maybe — in which the probability that a moral statement is correct is proportional to its survival. To be clear: this isn’t a creepy social Darwinism or might-makes-right idea, since it’s a meta-ethics, not a normative claim. One could also propose a weaker version: an ethical system shouldn’t directly or indirectly lead to itself not being believed. This is analogous to logical consistency in mathematics. Of course, if we’re going to treat ethical systems as competitive phenotypes, it seems only fair to treat meta-ethical systems (ethical consistentism included) as phenotypes too. So the recursion begins…

Competitive ethics is also sort of nihilism 2.0, or at least relativism 2.0. Of course right and wrong are ridiculous concepts: so what? That’s the start of the conversation, not the end.


Have feedback? Find a mistake? Please let me know!

Thanks to Elizabeth Van Nostrand, Mason McGill, Cienna, and Anish Sarma for their thoughts.


  1. Used herein to mean “conscious, articulable beliefs about right and wrong,” not some broader definition like “how people feel or act.” This can include beliefs derived from religion, culture, norms, or anywhere else. The extent to which these beliefs influence how people actually behave is an open question. ↩︎