Moral thoughts as moral thinkers
The predominant view in moral psychology has long been that morality is an evolved ability we naturally acquire as we develop, because it is useful for solving problems of social cooperation, particularly game-theoretic problems of commitment, fairness, and trust. On this view, for example, we evolved anger at unfair deals so that we would punish people for making them, as seen in the ultimatum game, and we evolved notions of reciprocity and care for others to promote long-term prosocial behavior, so that we could spend our time caring for our families and extended families instead of foraging alone and then sitting around waiting to be hungry again.
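To make the game-theoretic logic concrete, here is a toy sketch (my own illustration, not from any particular study) of why anger at unfair offers can pay off in a repeated ultimatum game: a responder who rejects lowball splits forgoes small gains now, but a proposer who adapts to rejections is trained to offer more later.

```python
def play(rounds, pot=10, reject_below=0):
    """Responder rejects any offer under `reject_below`; the proposer
    (assumed, for illustration, to adapt) raises the offer after each
    rejection. Returns the responder's total winnings."""
    offer, responder_total = 1, 0
    for _ in range(rounds):
        if offer >= reject_below:
            responder_total += offer        # deal accepted as-is
        else:
            offer = min(pot, offer + 1)     # rejection: proposer offers more next time

    return responder_total

doormat = play(20, reject_below=0)   # accepts anything, offers stay at 1
punisher = play(20, reject_below=5)  # "angrily" rejects offers under 5
print(doormat, punisher)             # 20 80: the punisher ends up far ahead
```

The proposer's adaptive rule here is deliberately simplistic; the point is only that a disposition to refuse unfair splits, costly in any single round, can be profitable across repeated interactions.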
This view is popular, but it lacks evidence. For example, people perform poorly on the Wason selection task but do better when the task is reframed as checking for underage drinking, which has led to the hypothesis that people have a specialized mental module for detecting social cheating. It's much more likely, though, that people do poorly on the standard Wason task because the challenge it presents is abstract, and do better on the underage-drinking version because the elements of that situation intuitively fit together, while the colors and numbers of the standard task do not. Many observations also contradict the theory of evolved morality for social cooperation, because morality often interferes with cooperation, as when pride makes it difficult to apologize and end a conflict.
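For readers unfamiliar with the task, here is a small sketch (my own illustration, using a colors-and-numbers version) of the logic people find so hard in the abstract: given the rule "if one side of a card is red, the other side shows an even number," the only cards worth turning over are the ones that could falsify the rule.

```python
def must_flip(face):
    """Could turning the card showing this face reveal a violation of
    'if one side is red, the other side shows an even number'?"""
    if not face.isdigit():
        # A red card could hide an odd number -> potential violation.
        # A non-red card can hide anything without breaking the rule.
        return face == "red"
    # An odd number could hide a red face -> potential violation.
    # An even number can hide anything without breaking the rule.
    return int(face) % 2 == 1

cards = ["red", "brown", "4", "7"]
print([c for c in cards if must_flip(c)])  # ['red', '7']
```

Most people correctly pick the red card but wrongly pick the even number instead of the odd one; in the drinking-age framing (check the beer drinker and the 16-year-old, not the 25-year-old or the soda drinker), the same logic feels obvious.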
Beyond these evidential struggles, the predominant view is inconsistent with constructionist psychology and rests on the intuitive but false presumption of stored mental competencies. Morality isn't a mental program uploaded from the genes into the brain as infants develop into adults, to be activated whenever the situation calls for it. Morality is constructed: the brain assembles it out of the elements available to it whenever the brain predicts it would be most useful for allostasis.
A constructionist view of morality has been advanced by Jordan Theriault. Influenced by relationship regulation theory, he argues that morality is constructed to help people make their social environment more predictable by making their own behavior more predictable to the other people in it. Because other people's behavior is a function of your behavior, you can make their behavior conform to your expectations by behaving in ways that conform to theirs. For example, if you want someone to say hello to you, you can make that happen by saying hello to them. In this view, morality is about controlling the environment so as to regulate metabolic costs, not about achieving long-term social cooperation.
I think Jordan’s work in this area is a big step forward for the field, but I’m not convinced it’s the whole story. There are many observations in which morality increases metabolic costs and makes one’s social environment less predictable: telling a bold truth instead of a safe lie, standing up for something unpopular that you believe in, or refusing to back down from a fight when an apology, or simply turning and running, would keep you safe. Jordan addresses this, but his explanation isn’t elegant enough: you can feel the theory bumping up against reality instead of fitting it neatly.
The confusing thing about morality is that it sometimes helps you cooperate with people and sometimes makes cooperation harder. Sometimes morality helps you control your social environment, and sometimes it encourages you to take on large metabolic costs that you could easily avoid if you weren’t influenced by it. It’s as if morality has a mind of its own, with goals that can’t simply be reduced to some biological need like cooperation, reproduction, or regulating metabolic costs.
Maybe that’s the right way to think about it. Just as morphogenesis has a mind of its own that can’t be reduced to the constituent cells or the protein-coding genes, perhaps morality has a mind of its own as well. According to relationship regulation theory, morality is about regulating relationships to fit within relational models. And active inference is about preserving internal models. Perhaps it doesn’t matter whether these internal models have some evident biological utility: as models, they naturally seek to maintain and perhaps even extend their accuracy and rationality, which may or may not result in traditional biological utility. Perhaps morality is constructed at first because it’s metabolically useful, but none of the neurons that construct it care about that biological utility: they may favor the construction of morality for many reasons, resulting in tradeoffs between signals in the body, bargaining with each other over what to construct, and when, and why.
This theory of morality actually fits really well with how we experience the pursuit of science and math. In some sense, our scientific interests and capabilities are extensions of the biological need to explore the world, model it, and test and update that model. But the scientific and mathematical work that people do has little personal biological relevance, or even relevance to one’s family and reproduction. Many scientists and mathematicians, past and present, become obsessed with bizarre and arcane problems of no apparent relevance to anything. It is very hard to tell a traditional, purely biological story about what is going on. Instead, it seems like the scientific or mathematical interests take on a life of their own. Something inside of you becomes very interested in continuing to construct and develop these models even at significant personal expense: science and math often pay very poorly relative to the alternatives, and being interested in such esoteric things makes it hard to fit in with your social group. Nor does this kind of obsession necessarily result in progress of any kind, as seen in the case of cranks.
If it seems plausible that science and math can have a life of their own within you, then perhaps the same is true of morality. This isn’t a dreamy metaphor but a real, tangible hypothesis that may help explain our behavior and our social world: we do not simply construct and control internal models to achieve biological aims; sometimes those models hack the rest of the system and try to perpetuate aims of their own, causing people to predictably pursue goals that are sometimes severely biologically deleterious.