At 30:53 of this video, Mike Levin asks the following:
What about sort of flipping it and looking at it from the perspective of the patterns themselves? The patterns are the agents. I know that’s been done for example in memetics where people say it’s the information patterns that propagates, but […] most of that stuff assumes that the patterns themselves are dumb, they’re low agency things, and that they are sort of passive data that ends up being propagated by physical systems. […] The actual patterns might be agents in the sense that they actually might be high agency things that make decisions and have memories and so on.
Morality may be an example of high-agency information patterns. Popular scientific accounts of morality expect it to do at least one of three things: achieve allostasis, promote reproduction, or enable group coordination. However, each of these hypotheses is contradicted by evidence, according to virtuous violence theory: pride causes people to get into fights, which is bad for allostasis; shame causes fathers to kill their daughters, which is bad for reproduction; nationalistic ideas lead to war, which is bad for group coordination.
So why do we have morality if it doesn’t accomplish any evolutionary or individual goals? The answer is that morality is a psychological phenomenon produced by the mind, and the production of psychological phenomena isn’t preset! The genes don’t plan out the development of the organism. It is entirely possible, and indeed quite likely, that the mind produces psychological phenomena that aren’t helpful for reproduction, because this tendency can’t be selected out, only minimized in a given environment. The ideas have their own ideas about what they want to do.
Relationship regulation theory says that morality isn’t for reproduction or social cooperation. Morality is relationship regulation: creating, reinforcing, acting out, and terminating relationships. Note that terminating relationships is an important part of relationship regulation, as is preventing some relationships from forming: it’s not about propagation and survival per se, but about making the environment behave as expected.
In “Moral thoughts as moral thinkers”, I speculate that morality has a mind of its own, working toward goals that can’t be attributed to genetics or material self-interest.
The confusing thing about morality is that it sometimes does a good job of helping you cooperate with people and sometimes makes cooperation harder. Sometimes morality helps you control your social environment, and sometimes it encourages you to take on large metabolic costs that you could easily avoid if you weren’t influenced by it. It’s as if morality has a mind of its own, with goals that can’t simply be reduced to some biological need like cooperation, reproduction, or regulating metabolic costs.
Maybe that’s the right way to think about it. Just as morphogenesis has a mind of its own that can’t be reduced to the constituent cells or the protein-providing genes, perhaps morality has a mind of its own as well. According to relationship regulation theory, morality is about regulating relationships to fit within relational models. And active inference is about preserving internal models. Perhaps it doesn’t matter if these internal models have some evident biological utility: as models, they naturally seek to maintain and perhaps even extend their accuracy and rationality, which may or may not result in traditional biological utility. Perhaps morality is constructed at first because it’s metabolically useful, but none of the neurons that construct it care about that biological utility: they may favor the construction of morality for many reasons, resulting in tradeoffs between signals in the body, bargaining with each other over what to construct and when and why.
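The active-inference idea above, that a model reduces prediction error either by revising itself (perception) or by changing the world to match itself (action), can be illustrated with a toy sketch. Everything here is an illustrative assumption: a single-variable "model," a single-variable "world," and simple gradient-like updates stand in for a full free-energy formulation.

```python
# Toy sketch of active inference: an agent holds an internal model (one
# predicted value) and reduces prediction error in two ways -- by updating
# the model toward the world (perception) or by acting on the world to
# bring it toward the model (action). Names and rates are illustrative.

def step(world: float, model: float,
         lr_perception: float = 0.1, lr_action: float = 0.3):
    error = world - model           # prediction error
    model += lr_perception * error  # perception: revise the model
    world -= lr_action * error      # action: change the world
    return world, model

world, model = 10.0, 0.0
for _ in range(50):
    world, model = step(world, model)

print(abs(world - model) < 0.01)  # prediction error has nearly vanished
```

On this picture, whether the error shrinks by belief revision or by acting on the environment is indifferent to the model itself, which is one way of reading the claim that internal models "seek to maintain their accuracy" regardless of biological utility.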
The fact that we often find it difficult to pursue reproductive and material interests in the face of moral obligations suggests that morality is a potent agent, able to preserve its own goals even in the face of opposing pressures. Morality isn’t exactly an alien outsider; it’s something constructed by internal signals. But the resulting group-level goals aren’t reducible to any of the individual goals that participate in the construction, including evolutionary goals. It’s almost as if the process of putting together an intelligence necessitates a transformation of goals as the abilities and cognitive light cone of the system scale up, where the resulting high-level agents are ideas, categories, or patterns of predictions, a very constructionist perspective.
What are your thoughts on agential beings like angels, demons, and spirits in classical traditions? Do they relate to our previous discussion? It's interesting that when these traditions conceive of the world of heavenly patterns, they often do so in terms of personal, agential beings. Why not just static Platonic forms or 'dead' patterns? I recently talked with JP Marceau of The Symbolic World, and he suggested that one could think of 'the angel of E=mc²,' or of any mathematical construct, in a similar way. We treat an equation like E=mc² as a fixed, well-defined law, yet we don't understand all of its implications. So could a law like E=mc² also be an agent? This is analogous to how, at our level of existence, something strange and seemingly counterproductive, like certain aspects of morality, might be conceptualized as being governed by higher-level agents.
War as harmful to group coordination is an interesting position to take, given war's historical use as a rallying cause at both the national and international levels.