Goodhart's Law

When a measure becomes a target, it ceases to be a useful measure.

Goodhart’s Law was coined to describe economic policy: central banks found that once they adopted a specific monetary aggregate as an explicit target, that aggregate stopped predicting what it had previously predicted. The thing being measured changed in response to being measured.

It generalises everywhere.

A blog optimised for page views produces content engineered for page views. A school system optimised for test scores produces students who are good at tests. A hospital measured on wait times produces administrators who game wait times. In each case, the proxy that was useful as a diagnostic becomes useless — and actively distorting — once it becomes the goal. The original purpose gets hollowed out. You end up optimising hard for something that looks like the thing you wanted, but isn’t.

This happens because metrics are always simplifications. They capture one dimension of something multidimensional. That compression is fine when you’re using the metric to observe — it starts to break things when you use it to steer.
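To make the compression point concrete, here is a toy sketch (not from the post, with made-up dimension names): an "article" whose true quality depends on three dimensions, and a proxy metric that only sees one of them. A hill-climber that keeps any edit which doesn't lower the proxy pushes the measured dimension up while grinding the unmeasured ones down.

```python
import random

random.seed(0)

DIMENSIONS = ["clickability", "depth", "accuracy"]  # hypothetical dimensions

def true_quality(article):
    # What we actually care about: every dimension counts.
    return sum(article[d] for d in DIMENSIONS) / len(DIMENSIONS)

def proxy_metric(article):
    # What we can measure cheaply: a single dimension.
    return article["clickability"]

def random_edit(article):
    # Propose a small edit: one dimension goes up, another goes down,
    # standing in for the trade-offs real edits involve.
    up, down = random.sample(DIMENSIONS, 2)
    edited = dict(article)
    edited[up] = min(1.0, edited[up] + 0.05)
    edited[down] = max(0.0, edited[down] - 0.05)
    return edited

article = {d: 0.5 for d in DIMENSIONS}
print("before  proxy:", proxy_metric(article), " true:", true_quality(article))

for _ in range(200):
    candidate = random_edit(article)
    # Steering by the proxy: keep any edit that doesn't lower the number.
    if proxy_metric(candidate) >= proxy_metric(article):
        article = candidate

print("after   proxy:", round(proxy_metric(article), 2),
      " true:", round(true_quality(article), 2))
```

With the seed above, the proxy ends near its ceiling while the true quality ends below where it started, which is the hollowing-out described earlier. The dimension names and numbers are arbitrary; the only thing doing the work is that the proxy measures less than the goal.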

The antidote isn’t to stop measuring. It’s to hold metrics loosely — as diagnostic tools, not definitions of success. The moment you catch yourself asking “how do I improve the number?” instead of “how do I improve the thing the number is supposed to reflect?”, you’ve already slipped.
