Thinking Clearly

Probability

Expected utility tells you what happens in the long run, not what happens next. Most people have not fully internalized this.

The expected utility of a bet is its probability-weighted payoff. If the probability is low but the payoff is high enough, the bet is worth taking. If the probability is high but the payoff is low enough, it isn’t. The calculation is straightforward.
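The calculation really is straightforward. A minimal sketch, with illustrative numbers that are not from the text, showing both cases: a low-probability bet worth taking and a high-probability bet that isn't.

```python
def expected_utility(p_win: float, payoff_win: float, payoff_lose: float) -> float:
    """Probability-weighted average payoff of a two-outcome bet."""
    return p_win * payoff_win + (1 - p_win) * payoff_lose

# Low probability, high payoff: positive expected utility, worth taking.
# 0.10 * 50 + 0.90 * (-1) = 4.1
assert expected_utility(0.10, 50, -1) > 0

# High probability, low payoff: negative expected utility, not worth taking.
# 0.90 * 1 + 0.10 * (-20) = -1.1
assert expected_utility(0.90, 1, -20) < 0
```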

What it does not tell you is what will happen next. Expected utility is a statement about the long run, about what happens on average over many instances. Any individual instance is highly volatile. A positive expected utility does not guarantee a positive outcome this time. This is not a flaw in the framework; it is the correct description of how probability works.
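The gap between the long run and the next instance can be simulated directly. A small sketch with hypothetical numbers: a bet with expected utility of +1.0 per play, where any single play frequently loses, yet the average over many plays settles near the expectation.

```python
import random

random.seed(0)  # fixed seed for reproducibility

def play(p_win: float = 0.4, payoff_win: float = 10.0,
         payoff_lose: float = -5.0) -> float:
    """One instance of the bet: win 10 with probability 0.4, else lose 5."""
    return payoff_win if random.random() < p_win else payoff_lose

# Expected utility: 0.4 * 10 + 0.6 * (-5) = +1.0 per bet. Positive.
one = play()  # but any single play is either +10.0 or -5.0, never +1.0

# Only over many instances does the average approach the expectation.
many = sum(play() for _ in range(100_000)) / 100_000
```

The volatility of `one` is not a defect of the calculation; `many` is what the calculation was ever about.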

Direction over magnitude

Here is a less obvious result. Suppose you know a coin is biased — but you do not know in which direction, or by how much. You are about to bet on a series of flips.

Counterintuitively, the direction of the bias is all that matters to your decision. The magnitude is a distraction. Once you know which way the coin is weighted, the optimal action is fixed: always bet with the bias, no matter how strong it is. Chasing additional precision about the magnitude buys you no better action, and at worst it introduces noise into your decision-making. Some questions shouldn't be asked, because their answers lead to worse actions.

Knowing that a bias exists, without knowing its direction, is neutral at best. If you act on a guess about direction that turns out to be wrong, you end up worse off than if you had continued guessing randomly. The information helps you only if it points the right way. This is not an argument against seeking information; it is an argument that the right information is highly specific, and that more information is not always better for limited agents acting under uncertainty.
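The asymmetry above can be checked by simulation. A sketch with a hypothetical bias of 0.6 toward heads: calling the biased side wins about 60% of flips, random guessing wins about 50%, and calling the wrong side wins about 40%. Nothing in the best strategy depends on whether the bias is 0.6 or 0.9; for any bias above 0.5, the optimal call is the same.

```python
import random

random.seed(1)  # fixed seed for reproducibility

def win_rate(p_call_heads: float, p_heads: float, flips: int = 100_000) -> float:
    """Fraction of correct calls when calling heads with probability p_call_heads
    on a coin that lands heads with probability p_heads."""
    wins = 0
    for _ in range(flips):
        heads = random.random() < p_heads
        call_heads = random.random() < p_call_heads
        wins += (heads == call_heads)
    return wins / flips

p = 0.6  # hypothetical bias: coin favors heads

with_bias    = win_rate(1.0, p)  # always call the biased side
random_guess = win_rate(0.5, p)  # guess randomly
against_bias = win_rate(0.0, p)  # act on the wrong direction

# with_bias > random_guess > against_bias: direction is everything,
# and knowing p more precisely would not change the best call.
```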

Shaping odds versus calculating them

There are two different relationships one can have with probability.

The first is computational: you calculate odds explicitly, estimate expected utilities, and make decisions accordingly. This is rigorous. It is also passive. It takes the probabilities of the world as given and asks what to do with them.

The second is instrumental: you act to shift the probabilities in your favor. Before worrying about which bet to take, you work to change the structure of the game. You control what you can, reduce randomness where possible, and arrange your environment so that the odds improve, often without computing explicit numbers at all.

Neither is complete without the other. Pure computation gives you accurate readings on a situation you have not shaped. Pure instrumentality gives you a favorable environment that you may misread. The most effective approach combines them: work to improve the structure of the odds, then calculate clearly within the situation you’ve created.
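The two stances can be put side by side in a toy calculation. All numbers here are hypothetical: a bet that is slightly unfavorable as given, and the same bet after paying some cost to shift the odds in your favor.

```python
def expected_value(p: float, win: float, lose: float) -> float:
    """Probability-weighted payoff of a two-outcome bet."""
    return p * win + (1 - p) * lose

# Computational stance: take the odds as given and evaluate the bet.
# 0.45 * 100 + 0.55 * (-100) = -10: decline.
as_given = expected_value(0.45, 100, -100)

# Instrumental stance: pay a (hypothetical) cost to improve the structure
# of the game first, then calculate clearly within the shaped situation.
# 0.55 * 100 + 0.45 * (-100) - 4 = +6: take it.
cost_to_shift = 4.0
shaped = expected_value(0.55, 100, -100) - cost_to_shift
```

The calculation is the same function in both cases; what changed is the situation it was applied to.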

The people who are consistently effective at high-stakes decisions tend to be skilled at both — knowing when to stop accepting conditions and start engineering them, and knowing when to stop engineering and start accurately reading what’s in front of them.


Expected utility is the language of good decisions. But the conditions those decisions are made under — who sets the game, who defines the payoffs, who controls the information — are often more important than the calculations made within them.