The Cobra Effect: When Good Incentives Go Bad
Part 2: Why people game the system
The streets of Hanoi were full of rats, so the French, who ruled the city at the time, created a simple incentive system: people who brought in rat tails would get paid. It worked at first, and the rat population decreased, but then the French noticed something odd: many rats were running around without tails.
People had figured out they could cut off the tails and release the rats, so they could keep breeding. Tails, not dead rats, were what paid the bounty. Eventually, the program was cancelled, but the rat population was higher than when it started.
It’s called “The Cobra Effect,” named after a similar (but likely apocryphal) story about bounties on venomous cobras in British India. The lesson is the same: good intentions backfire when people game the system.
Last week, we looked at incentives as a way to understand behaviour. Today, it’s about perverse incentives: good incentives that produce bad outcomes.
Goodhart’s Law
Recent examples of perverse incentives:
Enron, US energy giant: executives were rewarded based on reported profits. They created complex financial instruments to inflate their results. It looked great on paper, until the company collapsed.
Wells Fargo, US bank: employees were rewarded for opening new customer accounts. They opened a few million fake accounts using customers’ data without permission. The bank paid billions in fines for it.
Reported profits, opened accounts, or collected rat tails can all become targets in themselves. This phenomenon has a name:
“When a measure becomes a target, it ceases to be a good measure”
— Goodhart’s law
How can we prevent incentives from going bad?
Orders of Consequences
Perverse incentives are, by definition, created out of goodwill: the intentions were good when they were set.
I once had an urgent problem: not enough people were helping with recruitment. My simple solution: gift cards for the most active recruiters. I thought it worked, as more people started helping. But engineers who were already busy with critical projects began spending hours on recruitment to win gift cards. It backfired on a few occasions.
I hadn’t thought through the second-order consequences: I incentivised recruitment without considering what people would sacrifice to chase the reward. A rushed incentive system can work short-term but lead to problems in the long run. The better solution, which I eventually implemented, was to select a group of engineers dedicated to helping with recruitment.
The mental model of second-order thinking can help map the outcomes:
first-order consequences (more people helping with recruitment),
second-order consequences (engineers sacrificing projects to win gift cards),
further n-th-order consequences (too many people helping with recruitment, nobody focused on the product).
It’s about perceiving the situation as a system, not as a series of disconnected events that operate in a vacuum.
In practice, this means running a thought experiment before implementing any incentive. If people optimise solely for the metric, ask:
What will they sacrifice?
What happens next?
Final Thoughts
When we set incentives, we create a game. People will find the path of least resistance to win it, and they are smarter than our metrics.
In Hanoi, the French didn’t predict more rats; I didn’t predict how gift cards would shift engineers’ focus. We may not predict how our teams will try to game the next incentive.
However, it’s possible to design incentives as a game that every party involved wins. That’s the topic of Part 3, next Thursday.
Perhaps the most important rule in management is “Get the incentives right”
— Charlie Munger
Thanks for reading,
— Michał
Post Notes
Discover Weekly — Shoutouts
Great articles I’ve read recently:
Connect
LinkedIn | Substack DM | Mentoring | X | Bsky



