The Great Mental Models Volume 1

Authors: Shane Parrish and Rhiannon Beaubien

Last Accessed on Kindle: Nov 07 2023

Ref: Amazon Link

In life and business, the person with the fewest blind spots wins. Removing blind spots means we see, interact with, and move closer to understanding reality. We think better. And thinking better is about finding simple processes that help us work through problems from multiple dimensions and perspectives, allowing us to better choose solutions that fit what matters to us. The skill for finding the right solutions for the right problems is one form of wisdom.

Peter Bevelin put it best: “I don’t want to be a great problem solver. I want to avoid problems—prevent them from happening and doing it right from the beginning.”

It’s hard to understand a system that we are part of because we have blind spots: we can’t see what we aren’t looking for, and we don’t notice what we don’t notice. « There are these two young fish swimming along and they happen to meet an older fish swimming the other way, who nods at them and says “Morning, boys. How’s the water?” And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes “What the hell is water?” » David Foster Wallace

Our failures to update from interacting with reality spring primarily from three things: not having the right perspective or vantage point, ego-induced denial, and distance from the consequences of our decisions.

The first flaw is perspective. We have a hard time seeing any system that we are in.

The second flaw is ego, which protects us in two ways. First, we’re so afraid of what others will say about us that we fail to put our ideas out there and subject them to criticism. This way we can always be right. Second, if we do put our ideas out there and they are criticized, our ego steps in to protect us. We become invested in defending instead of upgrading our ideas.

The third flaw is distance. The further we are from the results of our decisions, the easier it is to keep our current views rather than update them.

We also tend to undervalue the elementary ideas and overvalue the complicated ones.

Simple ideas are of great value because they can help us prevent complex problems.

Without reflection we cannot learn. Without learning we are doomed to repeat mistakes, become frustrated when the world doesn’t work the way we want it to, and wonder why we are so busy. The cycle goes on.

Understanding reality is the name of the game. Understanding not only helps us decide which actions to take but helps us remove or avoid actions that have a big downside that we would otherwise not be aware of. Not only do we understand the immediate problem with more accuracy, but we can begin to see the second-, third-, and higher-order consequences. This understanding helps us eliminate avoidable errors. Sometimes making good decisions boils down to avoiding bad ones.

Removing blind spots means thinking through the problem using different lenses or models. When we do this the blind spots slowly go away and we gain an understanding of the problem.

Here’s another way to look at it: think of a forest. When a botanist looks at it they may focus on the ecosystem, an environmentalist sees the impact of climate change, a forestry engineer the state of the tree growth, a business person the value of the land. None of them is wrong, but none can describe the full scope of the forest on their own. Sharing knowledge, or learning the basics of the other disciplines, would lead to a more well-rounded understanding that would allow for better initial decisions about managing the forest.

Relying on only a few models is like having a 400-horsepower brain that’s only generating 50 horsepower of output. To increase your mental efficiency and reach your 400-horsepower potential, you need to use a latticework of mental models.

Reality is the ultimate update: When we enter new and unfamiliar territory it’s nice to have a map on hand. Everything from travelling to a new city to becoming a parent for the first time has maps that we can use to improve our ability to navigate the terrain. But territories change, sometimes faster than the maps and models that describe them. We can and should update them based on our own experiences in the territory. That’s how good maps are built: feedback loops created by explorers.

Consider the cartographer: Maps are not purely objective creations. They reflect the values, standards, and limitations of their creators. We can see this in the changing national boundaries that make up our world maps.

While navigating the world based on terrain is a useful goal, it’s not always possible. Maps, and models, help us understand and relate to the world around us. They are flawed but useful. In order to think a few steps ahead we must think beyond the map.

There is no shortcut to understanding. Building a circle of competence takes years of experience, of making mistakes, and of actively seeking out better methods of practice and thought.

You can’t operate as if a circle of competence is a static thing, that once attained is attained for life. The world is dynamic. Knowledge gets updated, and so too must your circle.

There are three key practices needed in order to build and maintain a circle of competence: curiosity and a desire to learn, monitoring, and feedback.

Learning comes when experience meets reflection. You can learn from your own experiences. Or you can learn from the experience of others, through books, articles, and conversations. Learning everything on your own is costly and slow. You are one person. Learning from the experiences of others is much more productive.

You need to monitor your track record in areas in which you have, or want to have, a circle of competence. And you need to have the courage to monitor honestly so the feedback can be used to your advantage.

Keeping a journal of your own performance is the easiest and most private way to give self-feedback. Journals allow you to step out of your automatic thinking and ask yourself: What went wrong? How could I do better? Monitoring your own performance allows you to see patterns that you simply couldn’t see before.

You must occasionally solicit external feedback. This helps build a circle, but is also critical for maintaining one.

«I don’t know what’s the matter with people: they don’t learn by understanding; they learn by some other way—by rote or something. Their knowledge is so fragile!» Richard Feynman

First principles thinking is one of the best ways to reverse-engineer complicated situations and unleash creative possibility.

It’s a tool to help clarify complicated problems by separating the underlying ideas or facts from any assumptions based on them.

The scientific method has demonstrated that knowledge can only be built when we are actively trying to falsify it.

Our knowledge of first principles changes as we understand more.

If we never learn to take something apart, test our assumptions about it, and reconstruct it, we end up bound by what other people tell us—trapped in the way things have always been done.

Everything that is not a law of nature is just a shared belief. Money is a shared belief. So is a border. So is bitcoin. So is love. The list goes on. If we want to identify the principles in a situation to cut through the dogma and the shared belief, there are two techniques we can use: Socratic questioning and the Five Whys.

Socratic questioning can be used to establish first principles through stringent analysis. This is a disciplined questioning process, used to establish truths, reveal underlying assumptions, and separate knowledge from ignorance.

Socratic questioning generally follows this process:

1. Clarifying your thinking and explaining the origins of your ideas. (Why do I think this? What exactly do I think?)
2. Challenging assumptions. (How do I know this is true? What if I thought the opposite?)
3. Looking for evidence. (How can I back this up? What are the sources?)
4. Considering alternative perspectives. (What might others think? How do I know I am correct?)
5. Examining consequences and implications. (What if I am wrong? What are the consequences if I am?)
6. Questioning the original questions. (Why did I think that? Was I correct? What conclusions can I draw from the reasoning process?)

Socratic questioning stops you from relying on your gut and limits strong emotional responses. This process helps you build something that lasts.

The goal of the Five Whys is to land on a “what” or “how”. It is not about introspection, such as “Why do I feel like this?” Rather, it is about systematically delving further into a statement or concept so that you can separate reliable knowledge from assumption. If your “whys” result in a statement of falsifiable fact, you have hit a first principle. If they end up with a “because I said so” or “it just is”, you know you have landed on an assumption that may be based on popular opinion, cultural myth, or dogma. These are not first principles.

First principles thinking helps us avoid the problem of relying on someone else’s tactics without understanding the rationale behind them.

When we start with the idea that the way things are might not be the way they have to be, we put ourselves in the right frame of mind to identify first principles.

Thought experiments can be defined as “devices of the imagination used to investigate the nature of things.”

Thought experiments are powerful because they help us learn from our mistakes and avoid future ones. They let us take on the impossible, evaluate the potential consequences of our actions, and re-examine history to make better decisions. They can help us both figure out what we really want, and the best way to get there.

Much like the scientific method, a thought experiment generally has the following steps:

1. Ask a question
2. Conduct background research
3. Construct a hypothesis
4. Test with (thought) experiments
5. Analyze outcomes and draw conclusions
6. Compare to hypothesis and adjust accordingly (new question, etc.)

One of the real powers of the thought experiment is that there is no limit to the number of times you can change a variable to see if it influences the outcome.

A few areas in which thought experiments are tremendously useful:

- Imagining physical impossibilities
- Re-imagining history
- Intuiting the non-intuitive

When we say “if money were no object” or “if you had all the time in the world,” we are asking someone to conduct a thought experiment because actually removing that variable (money or time) is physically impossible.

These approaches are called the historical counter-factual and semi-factual. If Y happened instead of X, what would the outcome have been? Would the outcome have been the same? As popular—and generally useful—as counter- and semi-factuals are, they are also the areas of thought experiment with which we need to use the most caution. Why? Because history is what we call a chaotic system. A small change in the beginning conditions can cause a very different outcome down the line.

Weather is highly chaotic. Any infinitesimally small error in our calculations today will change the result down the line, as rapid feedback loops occur throughout time. Since our measurement tools are not infinitely accurate, and never will be, we are stuck with the unpredictability of chaotic systems.
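This sensitivity is easy to see in miniature. Below is a minimal sketch using the logistic map as a stand-in for a chaotic system; the map itself, the parameter r = 4, and the starting values are my illustration, not the book’s:

```python
# The logistic map x -> r*x*(1-x) is a classic toy chaotic system;
# r = 4 puts it in the fully chaotic regime.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000)  # two starting points that differ
b = logistic_trajectory(0.400001)  # by only one part in a million

for step in (0, 10, 25, 50):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}")
# Within a few dozen iterations the two trajectories bear no resemblance
# to each other: the tiny initial error has been amplified beyond recovery.
```

No measuring instrument can pin down the sixth decimal place of the real world, which is why chaotic systems stay unpredictable in practice.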

One of the goals of a thought experiment like this is to understand the situation enough to identify the decisions and actions that had impact.

One of the uses of thought experiments is to improve our ability to intuit the non-intuitive. In other words, a thought experiment allows us to verify if our natural intuition is correct by running experiments in our deliberate, conscious minds that make a point clear.

In order to improve our decision-making and increase our chances of success, we must be willing to probe all of the possibilities we can think of.

The more you use them, the more you understand actual cause and effect, and the more knowledge you have of what can really be accomplished.

We often make the mistake of assuming that having some necessary conditions in place means that we have all of the sufficient conditions in place for our desired event or effect to occur.

What’s not obvious is that the gap between what is necessary to succeed and what is sufficient is often luck, chance, or some other factor beyond your direct control.

Assume you wanted to make it into the Fortune 500. Capital is necessary, but not sufficient. Hard work is necessary, but not sufficient. Intelligence is necessary, but not sufficient. Billionaire success takes all of those things and more, plus a lot of luck. That’s a big reason that there’s no recipe.

In mathematics they call these sets. The set of conditions necessary to become successful is a part of the set that is sufficient to become successful. But the sufficient set itself is far larger than the necessary set. Without that distinction, it’s too easy for us to be misled by the wrong stories.
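A minimal sketch of that containment with Python’s built-in sets; the condition names are illustrative placeholders, not an actual recipe:

```python
# Necessary conditions are a subset of any sufficient set of conditions:
# everything required is included, but the required things alone don't
# guarantee the outcome.
necessary = {"capital", "hard work", "intelligence"}
sufficient = necessary | {"luck", "timing", "market conditions"}  # and more

print(necessary <= sufficient)  # True: necessary is contained in sufficient
print(sufficient <= necessary)  # False: the sufficient set is strictly larger
```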

Second-order thinking is thinking farther ahead and thinking holistically. It requires us to not only consider our actions and their immediate consequences, but the subsequent effects of those actions as well. Failing to consider the second- and third-order effects can unleash disaster.

«Stupidity is the same as evil if you judge by the results.» Margaret Atwood

High degrees of connections make second-order thinking all the more critical, because denser webs of relationships make it easier for actions to have far-reaching consequences. You may be focused in one direction, not recognizing that the consequences are rippling out all around you. Things are not produced and consumed in a vacuum.

Second-order thinking teaches us two important concepts that underlie the use of this model. If we’re interested in understanding how the world really works, we must include second and subsequent effects. We must be as observant and honest as we can about the web of connections we are operating in.

Two areas where second-order thinking can be used to great benefit:

- Prioritizing long-term interests over immediate gains
- Constructing effective arguments

Second-order thinking involves asking ourselves if what we are doing now is going to get us the results we want.

Trust and trustworthiness are the results of multiple interactions. This is why second-order thinking is so useful and valuable. Going for the immediate payoff in our interactions with people, unless they are a win-win, almost always guarantees that interaction will be a one-off. Maximizing benefits is something that happens over time. Thus, considering the effects of the effects of our actions on others, or on our reputations, is critical to getting people to trust us, and to enjoy the benefits of cooperation that come with that.

Second-order thinking can help you avert problems and anticipate challenges that you can then address in advance.

Arguments are more effective when we demonstrate that we have considered the second-order effects and put effort into verifying that these are desirable as well.

Second-order thinking needs to evaluate the most likely effects and their most likely consequences, checking our understanding of what the typical results of our actions will be. If we worried about all possible effects of effects of our actions, we would likely never do anything, and we’d be wrong. How you’ll balance the need for higher-order thinking with practical, limiting judgment must be taken on a case-by-case basis.

We must ask ourselves the critical question: And then what?

Thinking through a problem as far as you can with the information you have allows you to consider time, scale, thresholds, and more. And weighing different paths is what thinking is all about. A little time spent thinking ahead can save us massive amounts of time later.

In a world where each moment is determined by an infinitely complex set of factors, probabilistic thinking helps us identify the most likely outcomes.

The core of Bayesian thinking (or Bayesian updating, as it can be called) is this: given that we have limited but useful information about the world, and are constantly encountering new information, we should probably take into account what we already know when we learn something new. As much of it as possible. Bayesian thinking allows us to use all relevant prior information in making decisions. Statisticians might call it a base rate, taking in outside information about past situations like the one you’re in.

Any new information you encounter that challenges a prior simply means that the probability of that prior being true may be reduced. Eventually some priors are replaced completely. This is an ongoing cycle of challenging and validating what you believe you know. When making uncertain decisions, it’s nearly always a mistake not to ask: What are the relevant priors? What might I already know that I can use to better understand the reality of the situation?
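A minimal sketch of one round of Bayesian updating, assuming a stylized diagnostic-test scenario; the base rate, sensitivity, and false-positive numbers are illustrative, not from the book:

```python
def bayes_update(prior, likelihood, false_positive_rate):
    """Return P(hypothesis | positive evidence) via Bayes' rule."""
    p_evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / p_evidence

# Prior (base rate) of a condition: 1%. Test sensitivity: 90%.
# False-positive rate: 9%. What should we believe after one positive result?
posterior = bayes_update(prior=0.01, likelihood=0.90, false_positive_rate=0.09)
print(f"{posterior:.1%}")  # ~9.2%: still unlikely, because the prior was low

# The posterior becomes the new prior when the next piece of evidence arrives.
posterior2 = bayes_update(prior=posterior, likelihood=0.90, false_positive_rate=0.09)
print(f"{posterior2:.1%}")  # ~50%: repeated evidence gradually overturns the prior
```

Feeding each posterior back in as the next prior is the “ongoing cycle of challenging and validating what you believe you know” in mechanical form.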

Common outcomes cluster together, creating a wave. The difference is in the tails. In a bell curve the extremes are predictable. There can only be so much deviation from the mean. In a fat-tailed curve there is no real cap on extreme events.
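One way to feel that difference is to sample from both kinds of distribution and compare the extremes. A sketch, using a standard normal for the thin-tailed case and a Pareto distribution for the fat-tailed one (my choice of example distributions):

```python
import random

random.seed(42)
N = 100_000

# Thin tails: normal(mean 0, sd 1). Fat tails: Pareto with shape alpha = 1.5.
normal_draws = [random.gauss(0, 1) for _ in range(N)]
pareto_draws = [random.paretovariate(1.5) for _ in range(N)]

print(max(normal_draws))  # typically ~4-5: extremes stay near the mean
print(max(pareto_draws))  # often in the hundreds or thousands: no real cap
```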

The important thing is not to sit down and imagine every possible scenario in the tail (by definition, it is impossible) but to deal with fat-tailed domains in the correct way: by positioning ourselves to survive or even benefit from the wildly unpredictable future, by being the only ones thinking correctly and planning for a world we don’t fully understand.

You need to think about something we might call “metaprobability”—the probability that your probability estimates themselves are any good.

What are some ways we can prepare—arm ourselves with antifragility—so we can benefit from the volatility of the world? The first one is what Wall Street traders would call “upside optionality”, that is, seeking out situations that we expect have good odds of offering us opportunities.

The second thing we can do is to learn how to fail properly. Failing properly has two major components. First, never take a risk that will do you in completely. (Never get taken out of the game completely.) Second, develop the personal resilience to learn from your failures and start again. With these two rules, you can only fail temporarily.

The Antifragile mindset is a unique one. Whenever possible, try to create scenarios where randomness and uncertainty are your friends, not your enemies.

We notice two things happening at the same time (correlation) and mistakenly conclude that one causes the other (causation). We then often act upon that erroneous conclusion, making decisions that can have immense influence across our lives. The problem is, without a good understanding of what is meant by these terms, these decisions fail to capitalize on real dynamics in the world and instead are successful only by luck.

Whenever correlation is imperfect, extremes will soften over time. The best will always appear to get worse and the worst will appear to get better, regardless of any additional action. This is called regression to the mean, and it means we have to be extra careful when diagnosing causation.

In real-life situations involving the performance of specific individuals or teams, where the only real benchmark is past performance and no control group can be introduced, the effects of regression can be difficult, if not impossible, to disentangle. We can compare against the industry average, peers in the cohort group, or historical rates of improvement, but none of these are perfect measures.
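Even so, a small simulation shows how strong the effect can be. A sketch, assuming observed performance is stable skill plus independent luck; the model and numbers are mine, not the book’s:

```python
import random

random.seed(0)

# Observed performance = stable skill + random luck, redrawn each round.
skill = [random.gauss(100, 10) for _ in range(10_000)]
round1 = [s + random.gauss(0, 10) for s in skill]
round2 = [s + random.gauss(0, 10) for s in skill]

# Take the top 1% of round-1 performers and re-measure them in round 2.
top = sorted(range(len(round1)), key=lambda i: round1[i], reverse=True)[:100]
avg1 = sum(round1[i] for i in top) / len(top)
avg2 = sum(round2[i] for i in top) / len(top)

print(f"top group, round 1: {avg1:.1f}")   # far above the mean of 100
print(f"same group, round 2: {avg2:.1f}")  # noticeably closer to 100
# The drop is regression to the mean, not a decline in skill: the group was
# selected partly for good luck, and luck does not repeat.
```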

Inversion is a powerful tool to improve your thinking because it helps you identify and remove obstacles to success.

Most of us tend to think one way about a problem: forward. Inversion allows us to flip the problem around and think backward. Sometimes it’s good to start at the beginning, but it can be more useful to start at the end.

Think of it this way: Avoiding stupidity is easier than seeking brilliance. Combining the ability to think forward and backward allows you to see reality from multiple angles.

There are two approaches to applying inversion in your life:

1. Start by assuming that what you’re trying to prove is either true or false, then show what else would have to be true.
2. Instead of aiming directly for your goal, think deeply about what you want to avoid and then see what options are left over.

Instead of thinking through the achievement of a positive outcome, we could ask ourselves how we might achieve a terrible outcome, and let that guide our decision-making.

One of the theoretical foundations for this type of thinking comes from psychologist Kurt Lewin. In the 1930s he came up with the idea of force field analysis, which essentially recognizes that in any situation where change is desired, successful management of that change requires applied inversion. Here is a brief explanation of his process:

1. Identify the problem
2. Define your objective
3. Identify the forces that support change towards your objective
4. Identify the forces that impede change towards the objective
5. Strategize a solution! This may involve both augmenting or adding to the forces in step 3, and reducing or eliminating the forces in step 4.

«Anybody can make the simple complicated. Creativity is making the complicated simple.» Charles Mingus

Simpler explanations are more likely to be true than complicated ones. This is the essence of Occam’s Razor, a classic principle of logic and problem-solving. Instead of wasting your time trying to disprove complex scenarios, you can make decisions more confidently by basing them on the explanation that has the fewest moving parts.

Occam’s Razor is a general rule by which we select among competing explanations. Ockham wrote that “a plurality is not to be posited without necessity”—essentially that we should prefer the simplest explanation with the fewest moving parts. Simpler explanations are easier to falsify, easier to understand, and generally more likely to be correct.

Why are more complicated explanations less likely to be true? Let’s work it out mathematically. Take two competing explanations, each of which seems to explain a given phenomenon equally well. If one of them requires the interaction of three variables and the other the interaction of thirty variables, all of which must have occurred to arrive at the stated conclusion, which of these is more likely to be in error? If each variable has a 99% chance of being correct, the first explanation is only 3% likely to be wrong. The second, more complex explanation, is about nine times as likely to be wrong, or 26%. The simpler explanation is more robust in the face of uncertainty.
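That arithmetic is quick to verify, treating each variable as an independent part that is right 99% of the time:

```python
# Chance that an explanation built from n independent parts, each 99%
# reliable, contains at least one error: 1 - 0.99**n.
for n in (3, 30):
    print(f"{n} variables: {1 - 0.99**n:.0%} chance of being wrong")
# 3 variables: 3% chance of being wrong   (the simple explanation)
# 30 variables: 26% chance of being wrong (about nine times as likely)
```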

The great thing about simplicity is that it can be so powerful. Sometimes unnecessary complexity just papers over the systemic flaws that will eventually choke us. Opting for the simple helps us make decisions based on how things really are.

One important counter to Occam’s Razor is the difficult truth that some things are simply not that simple.

Simple as we wish things were, irreducible complexity, like simplicity, is a part of our reality. Therefore, we can’t use this Razor to create artificial simplicity.

Hanlon’s Razor states that we should not attribute to malice that which is more easily explained by stupidity. In a complex world, using this model helps us avoid paranoia and ideology. By not generally assuming that bad results are the fault of a bad actor, we look for options instead of missing opportunities. This model reminds us that people do make mistakes. It demands that we ask if there is another reasonable explanation for the events that have occurred. The explanation most likely to be right is the one that contains the least amount of intent.

When we see something we don’t like happen and which seems wrong, we assume it’s intentional. But it’s more likely that it’s completely unintentional.

Failing to prioritize stupidity over malice causes things like paranoia. Always assuming malice puts you at the center of everyone else’s world. This is an incredibly self-centered approach to life. In reality, for every act of malice, there is almost certainly far more ignorance, stupidity, and laziness.

When we assume someone is out to get us, our very natural instinct is to take actions to defend ourselves. It’s harder to take advantage of, or even see, opportunities while in this defensive mode because our priority is saving ourselves—which tends to reduce our vision to dealing with the perceived threat instead of examining the bigger picture.

Ultimately, Hanlon’s Razor demonstrates that there are fewer true villains than you might suppose. People are simply human, and like you, all humans make mistakes and fall into traps of laziness, bad thinking, and bad incentives. Our lives are easier, better, and more effective when we recognize this truth and act accordingly.