Ethical egoism is harmful

by vivin

**Note**: *This is a rambling argument against ethical egoism. I’m saying that it doesn’t make sense to reject altruism, especially using evolution as an argument; in fact, evolution can explain how altruism could have evolved. I’m not talking about possible solutions or theories of socio-economic organization. Obviously, philosophy informs those ideologies, and rejecting ethical egoism has implications in that regard. But that’s another topic.*

I can’t believe that ethical egoism is even considered to be a valid moral framework. It essentially legitimizes being a selfish asshole who has no regard for the feelings of others; a narcissistic psychopath would be the supreme ethical egoist. This is absolute nonsense. It is argued that altruism doesn’t “make sense” because looking out for another person when it’s not even in your own interest doesn’t jibe with “natural selection” (something right libertarians are in _love_ with). It’s true that evolution is, in general, ruthless competition: the amoral laws of nature, the “law of the jungle”. But there is also emergent complexity.

When you start dealing with agents that are not just aware of themselves, but also of _other_ agents like them, then they must necessarily be aware of the consequences of their actions on those agents. This was a foregone conclusion, I think, once sexual reproduction evolved. At some point, a creature would evolve that needs to be aware of the opposite sex, and needs to be able to maximize its chances of reproducing with a member of that sex. That requires an internal cognitive framework (or at least an embedding, so that we don’t have to quibble about sentience) that can represent an “other”. Naturally, this gives rise to cooperation, because even by chance, two agents can discover that their chances of success at gathering resources (food; i.e., energy) are maximized if they work together. From here it’s not too difficult to see how packs, herds, and flocks evolved. Once the brain has a sense of the opposite sex, it’s not hard to extend that representation to other members of the species; I would argue that this necessarily evolved at the same time as awareness of the opposite sex, because agents usually have to compete with members of their own sex for access to the opposite. Hence again, even by chance, it is possible for agents to discover that by working collectively, their chances of success are maximized. We see this in pack animals like wolves, and in primates, especially in us.
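This “cooperation can be stumbled into by chance” claim is easy to make concrete with a toy model. Here’s a minimal stag-hunt-style sketch in Python; the payoff numbers and the pairing rule are all invented for illustration, not taken from any real study:

```python
import random

# Toy stag-hunt-style model. Each round, an agent either forages alone
# (a small guaranteed payoff) or joins a hunt (a big payoff, but only
# if its randomly drawn partner also joins). All numbers are invented.
SOLO_PAYOFF = 1.0   # foraging alone always yields a little food
HUNT_PAYOFF = 3.0   # a successful joint hunt yields much more
ROUNDS = 100_000

def avg_payoff(cooperate: bool, p_partner: float) -> float:
    """Average payoff of always cooperating (or never cooperating)
    against partners who cooperate with probability p_partner."""
    total = 0.0
    for _ in range(ROUNDS):
        partner_cooperates = random.random() < p_partner
        if cooperate:
            total += HUNT_PAYOFF if partner_cooperates else 0.0
        else:
            total += SOLO_PAYOFF
    return total / ROUNDS

for p in (0.1, 0.5, 0.9):
    coop, solo = avg_payoff(True, p), avg_payoff(False, p)
    print(f"partners cooperate {p:.0%} of the time: "
          f"hunting pays {coop:.2f}, going alone pays {solo:.2f}")
```

Against mostly-solitary partners, trying to cooperate is a losing bet; past a certain frequency of cooperators (one third, with these particular payoffs), it strictly beats going it alone. So a handful of chance encounters between cooperators is enough to make the strategy self-reinforcing, which is all the argument above needs.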

If early humans had only looked out for themselves, we would have gone extinct. By himself or herself, a single human being is not a formidable predator; we don’t have big, sharp teeth or claws. We aren’t especially hardy either; we don’t have fur, and we are frail compared to the other predators that occupy the same niche. This is true of many primates as well. But what maximized the chance of not just group, but _individual_, success and survival was working together _as_ a group. Doing that necessarily _requires_ altruism, since each agent _must_ be able to balance its individual needs against the overall well-being of the group. For example, when a group is attacked, healthier individuals will protect the injured, old, and young; this puts them at greater risk, but they do it regardless, because group survival is only guaranteed by protecting those who cannot protect themselves. With humans this reaches a different stage. No longer guided by blind evolution, our sentience lets us explore the solution space of social organization even further. Our sense of _self_, our metacognition, lets us _question_ norms and wonder about _other_ social arrangements.

But being able to explore this solution space means that one will encounter both _bad_ and _good_ solutions. Ethical egoism is a bad solution. That it has been conceived of doesn’t give it any validity. Individuals purely acting in their own self-interest may lead to a functional society, but not necessarily an equal one (it is highly improbable+ that it would be), nor one where the rights of some percentage of the population aren’t continually trampled; disregard for the rights or feelings of others necessarily leads to that. As human beings, our chances of survival are maximized by having concern _for_ our fellow humans. In these times, we’re talking about the survival of human civilization itself.

Ethical egoism as a moral framework should be rejected.

+*I have an intuition that a game-theoretic, agent-based framework could provide evidence for this. I would need an inequality measure of some sort (the Gini coefficient, say) and would then run thousands of simulations to get a distribution of values for that coefficient, along with the parameters (types of cost functions, basically) that produce those values. I haven’t fully thought it through because I have other stuff to work on, but it’s an experiment I’d like to try one day.*
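*For what it’s worth, here is roughly what the skeleton of that experiment might look like in Python. Everything here is an assumption: the pairwise split rule is a crude stand-in for the cost functions, the Gini coefficient is one guess at the inequality measure, and the parameter values are arbitrary.*

```python
import random
import statistics

def gini(wealths):
    """Gini coefficient: 0 is perfect equality, 1 is maximal inequality."""
    w = sorted(wealths)
    n = len(w)
    weighted_sum = sum((i + 1) * x for i, x in enumerate(w))
    return (2 * weighted_sum) / (n * sum(w)) - (n + 1) / n

def run_once(selfishness, n_agents=100, n_rounds=100, pot=10.0):
    """One simulated society. Each round, agents pair off and split a
    freshly produced pot: the 'grabbing' agent in each pair takes a
    `selfishness` share (0.5 is an even split). This split rule is a
    placeholder for the cost functions mentioned above."""
    wealth = [1.0] * n_agents          # everyone starts out equal
    agents = list(range(n_agents))     # n_agents must be even to pair off
    for _ in range(n_rounds):
        random.shuffle(agents)         # random pairings, random roles
        for i in range(0, n_agents, 2):
            grabber, other = agents[i], agents[i + 1]
            wealth[grabber] += pot * selfishness
            wealth[other] += pot * (1.0 - selfishness)
    return gini(wealth)

# A few hundred runs per regime to get a rough distribution of the
# coefficient (the real experiment would use thousands).
for s in (0.5, 0.75, 0.95):
    ginis = [run_once(s) for _ in range(200)]
    print(f"selfishness={s:.2f}: mean Gini={statistics.mean(ginis):.3f}, "
          f"stdev={statistics.stdev(ginis):.4f}")
```

*Even this crude rule behaves the way I’d expect: perfectly even splits keep the Gini at zero, and the more the split favors the grabber, the more unequal the resulting wealth distribution. Richer cost functions (compounding advantages, inheritance, rents) would presumably amplify the effect, which is what the actual experiment would need to establish.*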