In sciences such as physics and biology, agency is a weird and mysterious thing that scientists can only dream of one day understanding. In economics, agency is the starting point. Questions like “What is an agent/what is agency?” and “How do we know if something is an agent by observing it?” are answered by economics.
Because agency is foundational in economic analysis, agency is rendered simple in economics so that the resulting analysis is tractable. Essentially, an economic agent is defined by two properties: their preferences, and their rationality.
Preferences refer to the elements of the agent’s internal structure that bias the evolution of their values in interaction with the environment. In this embodied view of economics, preference orders are not abstract considerations regarding a list of postulated options. Instead, real and nontrivial physical processes are required to construct, order, and select alternatives. Preferences are the internal elements, particularly the parameters governing the transformation of internal structure or internal systems of relationships, that determine, in a given interaction with the environment and given internal resources (i.e., an internal endowment or budget), which options are constructed, how they are ordered, and which are selected to be facilitated while other options are inhibited.
Here is a broad depiction of how this could work. Suppose there is a hierarchy of neurons, from “lower-level” neurons that are small and not highly interconnected with other neurons to “higher-level” neurons that are large and densely interconnected with other neurons. Since the higher-level neurons are primarily connected to other neurons, their expectations are largely determined by their relationships with other neurons rather than by external factors. Lower-level neurons, being small and less connected, have very little ability to shield themselves from the unexpected effects of the environment, such as unexpected sensory signals, and they have few commitments (e.g., commitments to other neurons) other than becoming accurate models of those sensory signals. When these neurons adapt their expectations and behavior to the new sensory signals, their new plans will conflict with the plans of other neurons that they interact with. To prevent conflict, newly adapted neurons send signals to other neurons that incentivize them to adjust their behavior to accommodate the behavior of the former. These signals propagate throughout the brain, adjusting neurons to each other in light of sensory surprises, all the way up to the highest-level neurons in the cerebral cortex. Because these neurons are large, they can incorporate a great deal of information, and because they are connected to many other neurons, they cannot achieve allostasis unless they produce an accurate model of the behavior of many other neurons from all over the brain. This model consequently aggregates sensory information from all over the brain into a multimodal summary of sensory expectations. The same applies equally to unexpected interoceptive signals. An artificial but simple and therefore helpful division is to think of sensory signals as providing information about goal states, while interoceptive signals provide information about the internal budget available to pursue those goal states.
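The error-propagation story above can be caricatured with a toy numerical sketch. To be clear, this is an illustration of the hierarchy idea, not a neural model: the two-then-three-level structure, the learning rates, and all names are assumptions invented for the example. Lower levels are given faster rates because, as described, they have fewer commitments to other neurons; each level, once adapted, becomes the target that the next level up adjusts toward.

```python
# Toy sketch of the hierarchical adjustment story (illustrative only; the
# level structure and per-level rates are assumptions, not a brain model).

def step(levels, sensory, rates):
    """One round of mutual adjustment after a surprising sensory signal.

    levels : list of expectations, index 0 = lowest level
    rates  : per-level learning rates; lower levels adapt faster because
             they have fewer commitments to other neurons.
    """
    target = sensory
    for i, rate in enumerate(rates):
        error = target - levels[i]   # surprise at this level
        levels[i] += rate * error    # adapt toward the signal...
        target = levels[i]           # ...then become the next level's target
    return levels

levels = [0.0, 0.0, 0.0]  # low-, mid-, high-level expectations
for _ in range(50):
    levels = step(levels, sensory=1.0, rates=[0.9, 0.5, 0.1])

# The high level ends up holding a slower-moving summary of the signal.
print(levels)
```

The point of the sketch is only the qualitative pattern: the lowest level tracks the sensory surprise almost immediately, while the highest level converges slowly, aggregating the signal into a stable summary.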
Therefore, the multimodal summaries of sensory and interoceptive signals describe the goals and budget of the economic agent.
To achieve the agent’s goals given the agent’s budget, the high-level neurons must bring the rest of the brain into a pattern predicted by the multimodal summary, even though the high-level neurons are only interested in their own allostasis. They do so by sending out signals that they predict will cause the other neurons to behave as desired. These signals are not commands that tell other neurons what to do. Instead, they are incentives that bias the energy landscape of the brain, which in turn biases the behavior of the other neurons.
These other neurons are constantly forming assemblies, or particular systems of relationships with each other, to provide for their allostatic needs. Because of the way these neurons are connected to the rest of the body, these assemblies produce various motor behaviors and perceptions in various contexts, and do so degenerately: many distinct assemblies can yield the same behavior. The assemblies the neurons construct with each other are ones that they anticipate will be robust against the interoceptive and sensory signals resulting from their behavior. Because their expectations and interests are influenced by their energy landscape, they form different assemblies and produce different motor behaviors and perceptions based on the signals sent by the high-level densely interconnected neurons.
Because the brain is large relative to the size of an individual neuron and faces many competing constraints and goal states, the neurons do not form a single giant assembly but instead many smaller competing assemblies, which in effect constitute competing motor behaviors and perceptions that conflict with each other in the same sense that competing economic plans conflict with each other under scarcity. Neuronal assemblies compete with each other by passing signals along the brain that in effect seek to convince the rest of the brain to join, agree with, or stop conflicting with their plan. The winner is influenced by the state of the energy landscape of the brain, with the neuronal assembly best adapted to that landscape able to facilitate its plan across the brain while the plans of other assemblies are inhibited. Because there are factors determining the results of competition, the competing plans can be ordered into a linear or total (pre)order, which in this context is called a preference order. Therefore, when the high-level neurons warp the energy landscape of the brain, they bias the determination of which neural plans are preferred, i.e., which are facilitated and which are inhibited. Neural plans that are facilitated are “selected”, i.e., chosen, while plans that are inhibited are not selected. The result of the selection of a plan is the entire brain being brought into agreement, sharing and implementing a single self-consistent plan that takes effect as motor behaviors and perceptions.
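The competition-as-ordering idea can be sketched as a toy computation. Everything here is a stand-in invented for illustration: plans and the energy landscape are plain vectors, “energy” is just a negated dot product, and the names (`Plan`, `energy`, `preference_order`) are hypothetical. The only point is structural: a shared landscape induces a total (pre)order over competing plans, the best-adapted plan is facilitated, and warping the landscape changes which plan that is.

```python
# Toy sketch (not a neural model): competing plans ordered on a biased
# "energy landscape". Plans and the landscape are stand-in vectors; all
# names and numbers are illustrative inventions.

from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    features: tuple  # crude stand-in for the assembly's pattern of activity

def energy(plan, landscape):
    # Lower energy = better adapted to the current landscape.
    return -sum(f * w for f, w in zip(plan.features, landscape))

def preference_order(plans, landscape):
    # Competition induces a total (pre)order: sort by energy, best first.
    return sorted(plans, key=lambda p: energy(p, landscape))

plans = [
    Plan("reach",    (1.0, 0.2, 0.0)),
    Plan("withdraw", (0.0, 0.9, 0.1)),
    Plan("freeze",   (0.1, 0.1, 0.8)),
]

# High-level neurons "warp" the landscape, biasing which plan wins.
landscape = (0.2, 1.0, 0.1)

ordered = preference_order(plans, landscape)
selected, inhibited = ordered[0], ordered[1:]
print("selected:", selected.name)                 # the facilitated plan
print("inhibited:", [p.name for p in inhibited])  # the losing plans
```

With this landscape the “withdraw” plan wins; changing the landscape weights reorders the competition, which is the sketch’s analogue of high-level neurons biasing which plans are preferred.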
This control process relies entirely on the agency of the parts—the ability of neurons to form plans and to assemble with each other to scale their plans, the ability of neural signals to warp the energy landscape of the brain, etc.—but it is also constrained by the agency of the parts. Some limitations of control are: the effects of any neural signal or population of neural signals on the energy landscape of the brain are not perfectly predictable; the effects of a warped energy landscape on the behavior of neurons or neuronal assemblies are not perfectly predictable; and the effects of neural assemblies on motor behaviors and perceptions are not perfectly predictable. In practice, neurons have to constantly pass signals back and forth so as to mutually adjust to each other as surprises occur.
Just as neurons are constantly acting, constructing patterns with each other, so too is the body always acting in the sense of being an object in the world transforming under the laws of physics as it interacts with other objects, and as interactions go on inside of it. The activity generated by neural assemblies biases the behavior of the body, making it more likely that otherwise unlikely outcomes occur, resulting in processes that we recognize in ordinary terms as actions or choices. Broadly speaking, therefore, choice is the selection from among competing patterns or plans, a selection that constitutes the facilitation of some plans while inhibiting others. In particular, the selection/facilitation of plans is presumably achieved by a competitive process that brings the plans into a preference ordering with each other, with the highest-ranked member of the competition, i.e., the most preferred plan, being selected.
Preferences are those factors which, in a given instance, bias the construction, ordering, and selection/facilitation of patterns, which yield processes that we call actions. Those factors are most obviously the parameters of the internal structure that determine the efficient transformation of the structure in a given interaction. It is appropriate to call these factors preferences when competition between potential actions or perceptions at the level of construction and at the level of selection results in a preference ordering among those potential actions or perceptions.
Speed-of-light delays, as well as probably many other constraints, guarantee that the brain will never be perfectly united with itself. Nevertheless, as long as neurons are communicating consistently and honestly with each other, a good deal of coherence is undoubtedly achieved. Of course, the transformations each neuron undergoes to form mutually compatible plans with every other neuron are themselves determined by the neuron’s preferences, and so on. Any potential “all the way down” issue is prevented by relational realism, where nothing needs to be made of anything; instead, properties come from relationships, or are relational. I.e., there is really only one level, no “lower” or “higher”, but humans sometimes find it useful to think otherwise.
The other element of agency in economics is rationality. Rationality can be understood in a few different ways, but the most tangible way is rationality as a competency assumption. Rationality as competency means that rational agents can produce efficient solutions to novel problems. For example, a rational economic agent can bargain with a novel producer of a novel pollutant to reach an efficient outcome.
When faced with a novel problem, the economic agent might like to stay unchanged while altering the environment to solve the problem. However, this is impossible. Instead, the only thing the economic agent can do is alter their own internal structure so that the interaction with the environment that constituted the problem is now in some way resolved or optimized. This internal transformation is determined by preferences, but internal transformations are not necessarily efficient or structure-preserving. For example, suppose a meteor hits a person and crushes them. This is internal transformation, but not what we would consider a solution to the problem (though from another perspective, it is; it’s all just economics, albeit different possibilities competing with each other under scarcity, which by definition forbids all possibilities from coexisting).
To constitute a solution from a perspective, the solution should be efficient and structure-preserving. Efficient solutions neither miss anything nor include anything extra; i.e., there are no externalities. Structure-preserving solutions are ones that constitute a mapping of internal structure onto the environment such that (some aspects of) internal structure are preserved, though the only means by which the agent can map their internal structure onto the environment is by transforming their internal structure. Structure-preservation constitutes a solution to a problem in the sense that problems are problems to the degree that they threaten internal structure; e.g., a predator that eats a prey is a problem to the prey because it destroys the prey’s internal structure by biting and digesting it.
All interactions imply tradeoffs. When an external problem interacts with internal structure, the result is that both are transformed. Solutions to the problem constitute (efficient) transformations to internal structure such that the internal tradeoff implied by the interaction is optimized. No interaction ever preserves all structure; if it did, the interaction would not really be “interactive”, and no observation of the problem would be made. But the agent has a choice about what structure is lost and what structure is preserved or gained—a choice to the degree that they can construct alternative internal transformations that imply different tradeoffs and then order and choose the best tradeoff according to their preferences. A rational agent is one that can assemble their options into a preference order determined by tradeoffs among internal structure, i.e., by preferences in conjunction with internal resources and the environment, and select the most-preferred option.
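The construct–order–select loop described above can be put in miniature code form. All names and numbers here are hypothetical: options are internal transformations, each with a resource cost and a tradeoff over which structure it preserves; the budget filters which options get constructed at all, and preferences then order the feasible ones.

```python
# Toy sketch of rational choice as construct / order / select (all names
# and numbers are hypothetical stand-ins, not a claim about real agents).

options = {
    "flee":   {"cost": 3.0, "preserved": {"body": 0.9, "territory": 0.1}},
    "fight":  {"cost": 5.0, "preserved": {"body": 0.5, "territory": 0.9}},
    "freeze": {"cost": 1.0, "preserved": {"body": 0.6, "territory": 0.3}},
}

preferences = {"body": 1.0, "territory": 0.4}  # how much each structure matters
budget = 4.0                                    # internal endowment

# Construction: only options affordable under the budget are built at all.
feasible = {name: o for name, o in options.items() if o["cost"] <= budget}

# Ordering: preferences induce a preference order over the feasible options.
def value(option):
    return sum(preferences[k] * v for k, v in option["preserved"].items())

ranked = sorted(feasible, key=lambda name: value(feasible[name]), reverse=True)

# Selection: the most-preferred feasible option is facilitated.
choice = ranked[0]
print(ranked, "->", choice)
```

Note that “fight” never enters the preference order at all: the budget constrains which alternatives are constructed, which is a different stage from selecting among the constructed alternatives, mirroring the two levels of choice discussed below.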
Because choice occurs at multiple levels in an agent—at both the level of selecting among given alternatives and the level of constructing those alternatives from among other potential alternatives—there is rationality at multiple levels. This occasionally leads to methodological confusion in the study of rationality in agents, because sometimes rationality is studied in the context of which alternatives are constructed and sometimes in the context of which alternatives are selected. Rationality might usefully be thought of as a two-dimensional plane that varies along both the dimension of competency at constructing the best alternatives given internal resources and the dimension of competency at selecting the best of those constructed alternatives—a task complicated by ongoing internal and external transformations, i.e., neither the problem nor the available internal resources are ever exactly fixed. In this view, rationality is a question of what solutions the agent can come up with to address the problem, and how it chooses among those solutions to implement one in particular. Note that possibilities such as a fusion or merging of solutions, in addition to one defeating the other, exist; i.e., the solutions could be thought of as economic agents themselves that bargain and trade with each other.
With these two elements, preferences and rationality, we have an economic agent. If we drop the useful but unnecessary term “economic”, then we just have an agent. Or, to the extent that economics is not a limiting factor in the analysis, we just have an agent. I.e., if economics can model its behavior as an economic agent, then it is an agent. Broadly speaking, an agent, according to economics, is a system that maps its internal structure onto the environment by undergoing transformation (tradeoffs) in an efficient, structure-preserving way. The resulting maps from the internal structure to the environment are properly called choices when the internal structure constitutes a preference order over a set of constructed options. Thus, when studying the agency of a system, instead of asking big questions about agency, we can ask small questions about economics, which is much easier and more productive.
Q: What is an agent?
A: An agent is a system that maps its internal structure onto the environment in an efficient, structure-preserving way.
Q: How do we know if a system is an agent by observing it?
A: We know that a system is an agent to the extent that we find it useful to model its behavior as it mapping its internal structure onto the environment in an efficient, structure-preserving way.
When I talked with personality researcher Colin DeYoung, he said he thought of economic preferences as also being personality traits: mini personality traits in the case of a very specific preference (ice cream over cookies), up to larger personality traits like risk or time preferences. Do you think this is a helpful way to think about preferences? They don’t just sit there, existing to be drawn upon when needed; they are flexible when needed, but with some stability.