Mental models are the building blocks of how we form beliefs, how we act, and how we think. This is the first volume in a fantastic series that gives you actionable tools to build a better understanding of how the world works.
Mental models are chunks of knowledge from different disciplines that can be simplified and applied to better understand the world. They're simply a representation of how something works.
Relying on only a few models is like having a 400-horsepower brain that’s only generating 50 horsepower of output. To increase your mental efficiency and reach your 400-horsepower potential, you need to use a latticework of mental models.
In life and business, the person with the fewest blind spots wins.
Our failures to update our beliefs from interacting with reality spring from three things:
Sometimes making good decisions boils down to avoiding bad ones.
The larger and more relevant the sample size, the more reliable the model based on it is. But the key to sample sizes is to look for them not just over space, but over time. We have a tendency to think that how the world is, is how it always was. And so we get caught up validating our assumptions from what we find in the here and now. But the continents used to be pushed against each other, dinosaurs walked the planet for millions of years, and we are not the only hominid to evolve. Looking to the past can provide essential context for understanding where we are now.
A quick glance at the list of Nobel Prize winners shows that many of them, obviously extreme specialists in something, had multidisciplinary interests that supported their achievements.
What successful people do is file away a massive, but finite, amount of fundamental, established, essentially unchanging knowledge that can be used in evaluating the infinite number of unique scenarios which show up in the real world.
Maps help us reduce complexity, but the description of the thing is not the thing itself.
1. Reality Is The Ultimate Update
A map is worthless if it doesn't accurately represent reality. If reality changes, the map should change. Don’t blame reality for not matching your expectations. Blame yourself for not adjusting when reality does.
2. Consider The Cartographer
Maps are not purely objective creations. They reflect the values, standards, and limitations of their creators. Maps are most useful when you consider the biases, incentives, and influences the cartographer had.
3. Maps Can Influence Territories
Centrally planned cities like Brasilia can look great and beautifully designed on a map, but be an absolute nightmare to traverse at ground level. Issues arise when city planners design a model and then try to fit their cities into the model.
Tragedy of the Commons
Common resources, such as common grazing land, get used more than is desirable from the standpoint of society as a whole. Every person seeks to maximize their gain. Everyone asks, "What is the utility to me of adding one more animal to my herd?" Since the herdsman receives all the proceeds from the sale of the additional animal, he stands to benefit a lot (a positive utility of +1). Since the effects of overgrazing are shared by everyone, however, the personal downside of adding one extra animal is only a fraction of the upside. And so each man is locked into a system that compels him to increase his herd without limit—in a world that is limited. Similar things happen in overfishing, mining, carbon emissions, and most other major problems humanity faces in the 21st century.
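The incentive mismatch above can be made concrete with a toy calculation. This is a minimal sketch with made-up numbers (the herd size, animal value, and overgrazing cost are all illustrative, not from the text): each herdsman captures the full value of an extra animal while bearing only his share of the cost it imposes on the pasture.

```python
HERDSMEN = 10
VALUE_PER_ANIMAL = 1.0    # private gain from selling one more animal
OVERGRAZING_COST = 1.5    # total cost that animal imposes on the shared pasture

def private_payoff():
    """Payoff to one herdsman for adding one animal: he keeps all the
    proceeds, but the overgrazing cost is split among everyone."""
    return VALUE_PER_ANIMAL - OVERGRAZING_COST / HERDSMEN

def collective_payoff():
    """Payoff to the group as a whole for that same animal."""
    return VALUE_PER_ANIMAL - OVERGRAZING_COST

print(private_payoff())     # positive: each herdsman keeps adding animals
print(collective_payoff())  # negative: the group as a whole loses
```

The private payoff stays positive even when the collective payoff is negative, which is exactly the trap: every individually rational choice degrades the commons.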
Imagine an old man who’s lived his entire life in a small town. He’s the Lifer. No detail of the goings-on in the town has escaped his notice over the years. Now imagine a Stranger enters the town, in from the Big City. Within a few days, the Stranger decides that he knows all there is to know about the town. The difference between the detailed web of knowledge in the Lifer’s head and the surface knowledge in the Stranger’s head is the difference between being inside a circle of competence and being outside the perimeter.
Within our circles of competence, we know exactly what we don’t know.
In Alexander Pope’s poem “An Essay on Criticism,” he writes:
A little learning is a dangerous thing;
Drink deep, or taste not the Pierian spring:
There shallow draughts intoxicate the brain,
And drinking largely sobers us again.
Building & Maintaining A Circle
Part of the Circle of Competence means that you know when you’re not the best person to make the decision and you can allow someone else with a comparative advantage in this area to make the decision.
Operating Outside A Circle Of Competence
If you can’t prove something wrong, you can’t really prove it right either.
Look at the worst events in history. We tend to assume that the worst that has happened is the worst that can happen, and then prepare for that. We forget that “the worst” smashed a previous understanding of what was the worst. Therefore, we need to prepare more for the extremes allowable by physics rather than what has happened until now.
First Principles help us clarify problems by separating the underlying ideas or facts from our assumptions. What remains are the essentials. If you know the first principles, you can build the rest of your knowledge around them to produce something new. They are the foundation and thus will be different in every situation, but the more we know, the more we can challenge.
When it comes down to it, everything that is not a law of nature is just a shared belief. Money is a shared belief. So is a border. So is bitcoin. So is love. The list goes on.
You can change the tactics if you know the principles.
The Five Whys
Simply keep asking “why?” The goal is to land on a “what” or “how.” It is not about introspection, such as “Why do I feel like this?” Rather, it is about systematically delving further into a problem so you can separate reliable knowledge from assumption. If your “whys” result in a statement of falsifiable fact, you have hit a first principle.
A thought experiment generally has the following steps:
Imagine Physical Impossibilities
When we say “if money were no object” or “if you had all the time in the world,” we are asking someone to conduct a thought experiment because actually removing that variable (money or time) is physically impossible. In reality, money is always an object, and we never have all the time in the world. But it can result in great insights on your priorities or wishes.
The historical counterfactual and semi-factual: what if Y had happened instead of X? What if I hadn’t been stuck at the airport bar where I met my future business partner? Would World War I have started if Gavrilo Princip hadn’t shot the Archduke of Austria in Sarajevo? If Cleopatra hadn’t found a way to meet Caesar, would she still have been able to take the throne of Egypt?
The more scenarios you can imagine where some result comes to pass without a specific event, the weaker the case for that event being the critical cause.
The Veil Of Ignorance
Designers of society should operate behind a veil of ignorance: they could not know who they would be in the society they were creating. If they didn’t know their economic status, ethnic background, talents and interests, or even their gender, they would have to put in place a structure that was as fair as possible in order to guarantee the best possible outcome for themselves.
What kind of company policies on hiring, office etiquette, or parental leave policies would you design if you didn’t know what your role in the company was? Or even anything about who you were?
Second-order thinking is thinking farther ahead and thinking holistically. It requires us to consider not only our actions and their immediate consequences, but the subsequent effects of those actions as well. This is also known as the “Law of Unintended Consequences.”
Being aware of second-order consequences may mean the short term is less spectacular, but the payoffs for the long term can be enormous. By delaying gratification now, you will save time in the future. You won’t have to clean up the mess you made on account of not thinking through the effects of your short-term desires.
Arguments are more effective when we demonstrate that we have considered the second-order effects and put effort into verifying that these are desirable as well.
Probabilistic thinking is essentially trying to estimate, using some tools of math and logic, the likelihood of any specific outcome coming to pass.
Thomas Bayes and Bayesian Thinking
The core of Bayesian thinking is this: we should take into account what we already know when we learn something new. Every piece of new information has to be placed in the wider framework of what we already know, rather than taken as absolute fact. A doubling of the murder rate doesn’t sound as scary when you know you’re in a country with one murder per year.
Any new information you encounter that challenges a prior belief simply means that the probability of that prior being true may be reduced. Eventually some priors are replaced completely.
In a bell curve the extremes are predictable. There can only be so much deviation from the mean. In a fat-tailed curve there is no real cap on extreme events.
The more extreme events that are possible, the longer the tails of the curve get. Any one extreme event is still unlikely, but the sheer number of options means that we can’t rely on the most common outcomes as representing the average.
“Metaprobability” — the probability that your probability estimates themselves are any good.
Any small error in measuring the risk of an extreme event can mean we’re not just slightly off. Not just 10% wrong but ten times wrong, or 100 times wrong, or 1,000 times wrong. Something we thought could only happen every 1,000 years might be likely to happen in any given year!
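The thin-tail vs. fat-tail contrast can be seen by simulation. This is a rough sketch using standard distributions as stand-ins (a normal distribution for the bell curve, a Pareto distribution with tail index 1.5 for the fat tail); the sample size and parameters are arbitrary choices for illustration.

```python
import random

random.seed(42)  # fixed seed so the run is repeatable
N = 100_000

# Thin-tailed: standard normal draws. The extremes are capped in practice;
# even the largest of 100,000 draws is only a few standard deviations out.
normal_max = max(abs(random.gauss(0, 1)) for _ in range(N))

# Fat-tailed: Pareto draws (typical value near 1, but no real cap on extremes).
pareto_max = max(random.paretovariate(1.5) for _ in range(N))

print(f"largest normal draw: {normal_max:.1f}")  # a handful of sigmas
print(f"largest Pareto draw: {pareto_max:.1f}")  # orders of magnitude above typical
```

The normal maximum barely grows as you add samples, while the Pareto maximum keeps producing surprises — which is why averaging over “the most common outcomes” misleads under fat tails.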
“Upside optionality” is seeking out situations that we expect have good odds of offering us opportunities. Take the example of attending a cocktail party where a lot of people you might like to know are in attendance. While nothing is guaranteed to happen—you may not meet those people, and if you do, it may not go well—you give yourself the benefit of serendipity and randomness. The worst thing that can happen is...nothing. One thing you know for sure is that you’ll never meet them sitting at home. By going to the party, you improve your odds of encountering opportunity.
Far more probability estimates are wrong on the “over-optimistic” side than the “under-optimistic” side. You’ll rarely read about an investor who aimed for 25% annual return rates who subsequently earned 40% over a long period of time. How often do you leave “on time” and arrive 20% late? All the time? Exactly. Your estimation errors are asymmetric, skewing in a single direction.
Avoiding stupidity is easier than seeking brilliance.
Instead of aiming directly for your goal, think deeply about what you want to avoid and then see what options are left over.
Set Your Assumptions
Start by assuming that what you’re trying to prove is either true or false, then show what else would have to be true.
Edward Bernays did not ask, “How do I sell more cigarettes to women?” Instead, he wondered, if women bought and smoked cigarettes, what else would have to be true? What would have to change in the world to make smoking desirable to women and socially acceptable? Then—a step farther—once he knew what needed to change, how would he achieve that? He thought about what the world would look like if women smoked often and anywhere, and then set about trying to make that world a reality. Once he did that, selling cigarettes to women was comparatively easy.
Focus On What You Want To Avoid
Avoiding being poor is a lot more important than getting rich. The index fund operates on the idea that accruing wealth has a lot to do with minimizing loss.
We could ask ourselves how we might achieve a terrible outcome, and let that guide our decision-making.
Simpler explanations are more likely to be true than complicated ones.
“When you hear hoofbeats, think horses, not zebras.”
Take two explanations. One of them requires three variables and the other thirty variables to match an exact pattern. Which is more likely?
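One way to see why the three-variable explanation wins: if each required assumption has some independent chance of holding, the explanation’s overall probability shrinks multiplicatively with every assumption it needs. The 95% figure below is an arbitrary illustration, not a claim from the text.

```python
P_EACH = 0.95  # assumed probability that any single assumption holds

def explanation_probability(num_assumptions):
    """Probability that ALL of an explanation's independent assumptions hold."""
    return P_EACH ** num_assumptions

print(round(explanation_probability(3), 3))   # ≈ 0.857
print(round(explanation_probability(30), 3))  # ≈ 0.215
```

Even with each assumption being very likely on its own, needing thirty of them to line up makes the complicated explanation far less probable than the simple one.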
We should not attribute to malice that which is more easily explained by stupidity. The explanation most likely to be right is the one that contains the least amount of intent. Most people doing wrong are not bad people trying to be malicious.
“80 or 90 important models will carry about 90% of the freight in making you a worldly-wise person. And, of those, only a mere handful really carry very heavy freight.” — Charlie Munger
"I think it is undeniably true that the human brain must work in models. The trick is to have your brain work better than the other person’s brain because it understands the most fundamental models: ones that will do most work per unit. If you get into the mental habit of relating what you’re reading to the basic structure of the underlying ideas being demonstrated, you gradually accumulate some wisdom." — Charlie Munger
"As to methods, there may be a million and then some, but principles are few. The man who grasps principles can successfully select his own methods. The man who tries methods, ignoring principles, is sure to have trouble." — Harrington Emerson
"Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful." — George Box
"Disciplines, like nations, are a necessary evil that enable human beings of bounded rationality to simplify their goals and reduce their choices to calculable limits. But parochialism is everywhere, and the world badly needs international and interdisciplinary travelers to carry new knowledge from one enclave to another." — Herbert Simon
"Most geniuses—especially those who lead others—prosper not by deconstructing intricate complexities but by exploiting unrecognized simplicities." — Andy Benoit
"I believe in the discipline of mastering the best of what other people have figured out." — Charlie Munger