Sequence: Mind and Society

The Typical Mind Fallacy

One of the more consistent errors in human reasoning is the typical mind fallacy: assuming that other people’s minds work more like yours than they actually do. You reason about how others will feel, what they will want, or how they will respond — but the model you’re using is yourself.

The fallacy runs in both directions. Assume too much similarity, and you consistently misread people. Assume too much difference — the atypical mind fallacy — and you treat your own perspective as uniquely valid, immune from the feedback others might offer. The right calibration is neither: other people’s minds are similar enough that general principles apply, and different enough that you should not assume your experience is representative.

Love languages as a translation problem

Love languages — stripped of the self-help packaging — are an interesting example of this problem applied to relationships.

Communication is the transfer of a mental object from one mind to another, using a symbol as the intermediary. The symbol might be words, behaviour, a gift, time, or physical contact. The problem is that symbols don’t carry fixed meaning. Different people decode different symbols more effectively. What reads as affection to one person may be invisible to another, not because the affection isn’t there, but because the symbol wasn’t legible.

The love languages framework is, at core, a translation guide: knowing that different people have preferred symbolic substrates — and that the same underlying message of care will land differently depending on how it’s encoded. This doesn’t require believing in any particular theory of personality. It just requires accepting that your preferred mode of expression is not universal.

The typical mind fallacy has a direct implication here: if you have never thought much about this, it is probably because your natural way of expressing care happens to work on yourself. You give what you would want to receive. When it doesn’t land, you conclude the other person doesn’t value it, rather than that you’re sending in the wrong format.

Knowing versus applying

There is a large gap between knowing a bias and actually correcting for it in real situations. Real situations don’t announce themselves as examples of a named fallacy. The signature is subtle: a moment where you conclude the other person wouldn’t want X, without noticing that the reasoning was “because I wouldn’t.” A moment where you assume a shared preference, a shared sensitivity, a shared way of processing an event.

Developing sensitivity to these signatures is the actual work. Reading a description of the typical mind fallacy installs a concept. Catching it in yourself, in a specific moment, requires that the concept has become a genuine part of how you observe your own reasoning. These are different things.

The practical target is not to be able to list cognitive biases, but to notice when one is active in your own thinking and to have a prepared correction: “My reasoning is pulling in this direction. That’s the typical mind error. What does this person actually value, as opposed to what I would value?”


We are all imperfect in systematic, often predictable ways. The costs of those imperfections vary — most of the time they are small, and occasionally they are not. The ones worth working on are the ones within reach: errors you can actually detect if you are paying attention, and correct if you have thought about them in advance.

The costliest imperfection is the one you could have improved and didn’t.