Against definitions
There’s a game you can play where you divide a group of people into two teams. One team has to offer a definition of an ordinary object, like a chair or a dog or something, and the other team has to find counterexamples, like a chair that doesn’t fit the definition or a non-chair that does fit the definition. (I didn’t say it was a fun game.) What you’ll find is that there’s always a counterexample. A chair isn’t just a wooden thing you sit on, for example, because a table is a wooden thing you can sit on, and not all chairs are wooden, etc. If you make the definition really long and super specific to rule out non-chairs that fit the definition, you’ll find that you’ve also ruled out many chairs; if you make the definition really general and vague to rule in all of the chairs, you’ll rule in many non-chairs. And no matter how you try to find a clean boundary, there will always be intermediate cases that make a mockery of your attempt to force reality to fall within the lines drawn by your words.
So how do we know what a chair is if we can’t define one? The answer is that it doesn’t matter: you don’t need to define a chair to sit in one.
We often experience words as if they conveyed some clear and exact meaning. But in reality, words are just noises that reach your ears or squiggles that your eyes look at. If someone says, “You can sit in that chair over there,” you don’t need a definition, you just need to sit down.
That is to say, what words really do is help to activate concepts that prepare perceptions and actions. When someone says the word “chair” to you, that helps you look for things that you expect will be good for sitting down on—things which may or may not fit into some definition of “chair” that you never needed to think about while finding a seat. Words are sounds, in other words, and they play the role that any sound does: they’re sensory data you use to update your internal model and form better plans to navigate toward your goals.
A telling example is the word “murderer”. We could define a “murderer” as someone whose actions knowingly shorten the lives of other people without their consent, in which case someone who runs a donut shop is a murderer. (Donut shop customers include children and people unaware of the health effects of donuts, so they can’t consent.) We could try to add more qualifications to the definition of the word “murderer” in the hope of ruling out the donut maker, in which case we’d end up with a definition three pages long that still fails to rule in and rule out everything it should. Or we could realize that what we really do with the word “murderer” is activate concepts that prepare goal-oriented perceptions and actions. When you tell me someone is a murderer, I think I should run away and call the police, whereas I want to run to the donut maker and order a dozen donuts. I don’t need to come up with a definition of “murderer” that rules out the donut maker to see that the concepts the word helps me activate apply very poorly to the donut maker.
Words are sounds. Our environment is full of sounds, none of which anyone expects to carry a definition. Every day you can hear things like dogs barking, thunder crashing, a car’s engine running, birds chirping, etc. The sound of thunder doesn’t carry a definition to your ears; it’s just noise that you use to better anticipate your environment. Words, I’m claiming, don’t have qualitatively different properties from other sounds: they don’t get to have definitions when other sounds don’t.
Scientists like to talk about ideas that carve nature at its joints. It’s intuitive to think of words as corresponding to joints, with one word for each joint. But reality is jointless, the universe does not divide itself into units of study for us, and words do not pick out natural kinds. They’re just sounds that you hear, or squiggles that you see, that help you make predictions because of how you’ve seen the word used in relation to other things in your environment. You learned that “apple” means “the kind of things you call an apple” because your parents would point at apples and say “apple” until you figured out how the word “apple” was likely going to be used. The meaning of the word is its relationships, not a definition invented post hoc to satisfy an imaginary obligation.
The work that words do has nothing to do with their definitions and everything to do with the listener—the effects words achieve are multicausal. Look at this guy try to follow instructions about how to make a peanut butter and jelly sandwich. When the listener isn’t doing their part, the words, even when generously “interpreted literally”, aren’t strong enough to achieve the outcome. Words are affordances that the listener exploits; the task of communication is arranging words relative to the listener such that when the listener does what they want, they’re likely to do what you want as well. (See principles 1 and 6.)
This is how interoceptive signals work in general. Genes don’t store developmental information, and neurons don’t store memories. Similarly, words don’t store their meanings. Instead, they act as pointers that a listener can use to find the concepts that you intended. Words mean what is preserved when they’re used, and they acquire new meanings that were not intended by the original speaker as they’re successfully used in new circumstances.
Definitions are dumb. I’ve seen too many interesting scientific conversations be derailed by someone asking, “But what’s your definition of life/intelligence/agency/whatever?” There are no definitions. We’re primates making noises in the hopes that those noises help other primates look in the direction we’re trying to point—we say “banana” and hope that the primate we’re talking to looks at the banana tree. This process does not rely on a nonexistent definition of bananas! It is an imperfect, error-ridden, highly variable process (there is a great deal of ruin in a language), and therefore a transformative and highly useful one, allowing us to understand the complexities of creation via a language that evolved in order to tell one another where the ripe fruit is.

This is great. Definitions are dumb, and we’ve known this for a long time, but now we really know! And now we also have an alternative model. Language isn’t a system of definitions; it’s a functional engine for generation. LLMs (and our language) generate linguistic behavior, and in humans it also generates behavior, full stop. Saying “chair” doesn’t pick out some natural kind or even point to some mental category: in the right context, it makes someone look for a place to sit (“grab a chair”), or not (“sorry, there are no chairs”). Now, how the generative process jumps from language to action still needs to be worked out, but now we have a framework!