
Acquiring wisdom

“In life and business, the person with the fewest blind spots wins. Removing blind spots means we see, interact with, and move closer to understanding reality. We think better. And thinking better is about finding simple processes that help us work through problems from multiple dimensions and perspectives, allowing us to better choose solutions that fit what matters to us. The skill for finding the right solutions for the right problems is one form of wisdom.”

Peter Bevelin put it best: “I don’t want to be a great problem solver. I want to avoid problems—prevent them from happening and doing it right from the beginning.”

We must understand how the world works and adjust our behavior accordingly. Contrary to what we’re led to believe, thinking better isn’t about being a genius. It is about the processes we use to uncover reality and the choices we make once we do.

Keeping your feet on the ground

On the way to the Garden of the Hesperides, Heracles had to fight the giant Antaeus as part of his twelve labors. After a few rounds in which Heracles flung the giant to the ground only to watch him revive, he realized he could not win using traditional wrestling techniques. Instead, Heracles fought to lift him off the ground. Away from contact with his mother, the Earth, Antaeus lost his strength, and Heracles crushed him.

When understanding is separated from reality, we lose our powers. Understanding must constantly be tested against reality and updated accordingly. This isn’t a box we can tick, a task with a definite beginning and end, but a continuous process.

Getting in our own way

The biggest barrier to learning from contact with reality is ourselves. It’s hard to understand a system that we are part of because we have blind spots: we can’t see what we aren’t looking for, and we don’t notice what we don’t notice.

Or as The Word Alive sang in “Trapped”: “It’s hard to know myself trapped in my own head.”

Transclude of There-are-these-two-young-fish-swimming-along

Our failures to update from interacting with reality spring primarily from three things: not having the right perspective or vantage point, ego-induced denial, and distance from the consequences of our decisions. They make it easier to keep our existing and flawed beliefs than to update them accordingly.

The first flaw is perspective. We have a hard time seeing any system that we are in. Galileo had a great analogy to describe the limits of our default perspective. Imagine you are on a ship that has reached constant velocity (meaning without a change in speed or direction). You are below decks and there are no portholes. You drop a ball from your raised hand to the floor. To you, it looks as if the ball is dropping straight down, thereby confirming gravity is at work.

Now imagine you are a fish (with special x-ray vision) and you are watching this ship go past. You see the scientist inside, dropping a ball. You register the vertical change in the position of the ball. But you are also able to see a horizontal change. As the ball was pulled down by gravity it also shifted its position east by about 20 feet. The ship moved through the water and therefore so did the ball. The scientist on board, with no external point of reference, was not able to perceive this horizontal shift.
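The fish’s arithmetic is easy to reproduce. The numbers below are assumptions chosen purely for illustration (the passage gives neither the ship’s speed nor the drop height); the point is that a constant horizontal velocity, invisible from inside the ship, still carries the ball roughly twenty feet sideways during its fall.

```python
import math

# Illustrative numbers only; just the physics (uniform ship velocity
# plus free fall) comes from the passage.
g = 9.8            # gravitational acceleration, m/s^2
drop_height = 1.5  # hand-to-floor distance in metres (assumed)
ship_speed = 11.0  # ship's constant velocity in m/s (assumed)

# Time for the ball to fall: h = (1/2) * g * t^2  =>  t = sqrt(2h / g)
fall_time = math.sqrt(2 * drop_height / g)

# Scientist's frame: the ball simply drops straight down by drop_height.
# Fish's frame: the ball also drifts horizontally with the ship.
horizontal_drift = ship_speed * fall_time

print(f"fall time: {fall_time:.2f} s")
print(f"horizontal drift seen by the fish: {horizontal_drift:.1f} m "
      f"(~{horizontal_drift / 0.3048:.0f} ft)")
```

With these assumed values the drift comes out near twenty feet, matching the passage; the scientist below decks, sharing the ship’s velocity, has no way to perceive it.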

This analogy shows us the limits of our perception. We must be open to other perspectives if we truly want to understand the results of our actions. However much information we think we have aboard the ship, the fish in the ocean can still see what we can’t.

The second flaw is ego. Many of us tend to have too much invested in our opinions of ourselves to see the world’s feedback—the feedback we need to update our beliefs about reality. This creates a profound ignorance that keeps us banging our heads against the wall over and over again. Our inability to learn from the world because of our ego happens for many reasons, but two are worth mentioning here. First, we’re so afraid of what others will say about us that we fail to put our ideas out there and subject them to criticism. This way we can always be right. Second, if we do put our ideas out there and they are criticized, our ego steps in to protect us. We become invested in defending instead of upgrading our ideas.

The third flaw is distance. The further we are from the results of our decisions, the easier it is to keep our current views rather than update them. When you put your hand on a hot stove, you quickly learn the natural consequence. You pay the price for your mistakes. Since you are a pain-avoiding creature, you update your view. Before you touch another stove, you check to see if it’s hot. But you don’t just learn a micro lesson that applies in one situation. Instead, you draw a general abstraction, one that tells you to check before touching anything that could potentially be hot.

When we make decisions that other people carry out, we are one or more levels removed and may not immediately be able to update our understanding. We come a little off the ground, if you will. The further we are from the feedback of the decisions, the easier it is to convince ourselves that we are right and avoid the challenge, the pain, of updating our views.

At a high or macro level we are removed from the immediacy of the situation, and our ego steps in to create a narrative that suits what we want to believe, instead of what really happened.

The majority of the time, we don’t even perceive what conflicts with our beliefs. It’s much easier to go on thinking what we’ve already been thinking than go through the pain of updating our existing, false beliefs. When it comes to seeing what is—that is, understanding reality—we can follow Charles Darwin’s advice to notice things “which easily escape attention,” and ask why things happened.

We also tend to undervalue elementary ideas and overvalue complicated ones. We reject the simple to make sure that what we offer can’t just as easily come from someone else. But simple ideas are of great value because they can help us prevent complex problems.

Transclude of Most-geniuses-especially-those-who-lead-others

These elementary ideas, so often overlooked, are from multiple disciplines: biology, physics, chemistry, and more. These help us understand the interconnections of the world, and see it for how it really is. This understanding allows us to develop causal relationships, which allow us to match patterns, which allow us to draw analogies. All of this so we can navigate reality with more clarity and comprehension of the real dynamics involved.

Understanding only becomes useful when we adjust our behavior and actions accordingly.

Without learning we are doomed to repeat mistakes, become frustrated when the world doesn’t work the way we want it to, and wonder why we are so busy. The cycle goes on.

But we are not passive participants in our decisions. The world does not act on us as much as it reveals itself to us and we respond. Ego gets in the way, locking reality behind a door that it controls with a gating mechanism. Only through persistence in the face of having it slammed on us over and over can we begin to see the light on the other side.

Ego, of course, is more than the enemy. It’s also our friend. If we had a perfect view of the world and made decisions rationally, we would never attempt to do the amazing things that make us human. Ego propels us. Why, without ego, would we even attempt to travel to Mars? After all, it’s never been done before. We’d never start a business because most of them fail. We need to learn to understand when ego serves us and when it hinders us. Wrapping ego up in outcomes instead of in ourselves makes it easier to update our views.

Mental Models

up:: Decision-making, Wisdom, Knowledge, Learning



The Great Mental Models

  • Charlie Munger has a way of thinking through problems using what he calls a broad latticework of mental models. These are chunks of knowledge from different disciplines that can be simplified and applied to better understand the world. As he describes it, they help identify what information is relevant in any given situation, and the most reasonable parameters to work within.

  • Mental models describe the way the world works. They shape how we think, how we understand, and how we form beliefs. Largely subconscious, mental models operate below the surface. We’re not generally aware of them and yet they’re the reason when we look at a problem we consider some factors relevant and others irrelevant. They are how we infer causality, match patterns, and draw analogies. They are how we think and reason.

  • A mental model is simply a representation of how something works. We cannot keep all of the details of the world in our brains, so we use models to simplify the complex into understandable and organizable chunks.

  • The fundamentals of knowledge are available to everyone. There is no discipline that is off limits—the core ideas from all fields of study contain principles that reveal how the universe works, and are therefore essential to navigating it. Our models come from fundamental disciplines that most of us have never studied, but no prior knowledge is required—only a sharp mind with a desire to learn.

  • There is no system that can prepare us for all risks. Factors of chance introduce a level of complexity that is not entirely predictable. But being able to draw on a repertoire of mental models can help us minimize risk by understanding the forces that are at play. Likely consequences don’t have to be a mystery.

  • Not having the ability to shift perspective by applying knowledge from multiple disciplines makes us vulnerable. Mistakes can become catastrophes whose effects keep compounding, creating stress and limiting our choices. Multidisciplinary thinking, learning these mental models and applying them across our lives, creates less stress and more freedom. The more we can draw on the diverse knowledge contained in these models, the more solutions will present themselves.

  • Using the lenses of our mental models helps us illuminate these interconnections. The more lenses used on a given problem, the more of reality reveals itself. The more of reality we see, the more we understand. The more we understand, the more we know what to do.

  • Simple and well-defined problems won’t need many lenses; we generally know what to do to get the intended results with the fewest side effects possible. When problems are more complicated, however, the value of having a brain full of lenses becomes readily apparent.

  • That’s not to say all lenses (or models) apply to all problems. They don’t. And it’s not to say that having more lenses (or models) will be an advantage in all problems. It won’t.

    Perhaps an example will help illustrate the mental models approach. We each have a mental model about gravity, whether we know it or not. And that model helps us to understand how gravity works. Of course we don’t need to know all of the details, but we know what’s important. We know, for instance, that if we drop a pen it will fall to the floor. If we see a pen on the floor we come to a probabilistic conclusion that gravity played a role.

    This model plays a fundamental role in our lives. We depend on it when thinking about safety, design, and more. But we also apply our understanding of gravity in other, less obvious ways. We use the model as a metaphor to explain the influence of strong personalities, as when we say, “He was pulled into her orbit.”

    Gravity has been around since before humans, so we can consider it to be time-tested, reliable, and representing reality. And yet, can you explain gravity with a ton of detail? I highly doubt it. And you don’t need to for the model to be useful to you. Our understanding of gravity, in other words, our mental model, lets us anticipate what will happen and also helps us explain what has happened.

    However, not every model is as reliable as gravity, and all models are flawed in some way. Some are reliable in some situations but useless in others. Some are too limited in their scope to be of much use. Others are unreliable because they haven’t been tested and challenged, and yet others are just plain wrong. In every situation, we need to figure out which models are reliable and useful. We must also discard or update the unreliable ones, because unreliable or flawed models come with a cost.

    When we use flawed models we are more likely to misunderstand the situation, the variables that matter, and the cause and effect relationships between them. Because of such misunderstandings we often take suboptimal actions.

    Sometimes making good decisions boils down to avoiding bad ones.

    Models that don’t hold up to reality cause massive mistakes. Consider that the model of bloodletting as a cure for disease caused unnecessary death because it weakened patients when they needed all their strength to fight their illnesses. It hung around for such a long time because it was part of a package of flawed models, such as those explaining the causes of sickness and how the human body worked, that made it difficult to determine exactly where it didn’t fit with reality.

    We compound the problem of flawed models when we fail to update our models even as evidence indicates they are wrong. Only by repeatedly testing our models against reality and staying open to feedback can we update our understanding of the world and change our thinking. We need to look at the results of applying the model over the largest sample size possible to be able to refine it so that it aligns with how the world actually works.

    Most of us study something specific and don’t get exposure to the big ideas of other disciplines. We don’t develop the multidisciplinary mindset that we need to accurately see a problem. And because we don’t have the right models to understand the situation, we overuse the models we do have and use them even when they don’t belong.

    There is an old adage that encapsulates this: “To the man with only a hammer, everything starts looking like a nail.” Not every problem is a nail. The world is full of complications and interconnections that can only be explained through understanding of multiple models.

    To increase your mental efficiency and reach your potential, you need to use a latticework of mental models. Exactly the same sort of pattern that graces backyards everywhere, a lattice is a series of points that connect to and reinforce each other.

    What successful people do is file away a massive, but finite, amount of fundamental, established, essentially unchanging knowledge that can be used in evaluating the infinite number of unique scenarios which show up in the real world.

    It’s not just knowing the mental models that is important. First you must learn them, but then you must use them. Each decision presents an opportunity to comb through your repertoire and try one out, so you can also learn how to use them. This will slow you down at first, and you won’t always choose the right models, but you will get better and more efficient at using mental models as time progresses.

    Transclude of I-think-it-is-undeniably-true-that-the-human-brain-must-work-in-models

    You need to be deliberate about choosing the models you will use in a situation. As you use them, a great practice is to record and reflect. That way you can get better at both choosing models and applying them. Take the time to notice how you applied them, what the process was like, and what the results were.

    For instance, instead of falling victim to confirmation bias, you will be able to step back and see it at work in yourself and others. Once you get practice, you will start to naturally apply models as you go through your life, from reading the news to contemplating a career move.

References


“Every statistician knows that a large, relevant sample size is their best friend. What are the three largest, most relevant sample sizes for identifying universal principles? Bucket number one is inorganic systems, which are 13.7 billion years in size. It’s all the laws of math and physics, the entire physical universe. Bucket number two is organic systems, 3.5 billion years of biology on Earth. And bucket number three is human history, you can pick your own number, I picked 20,000 years of recorded human behavior. Those are the three largest sample sizes we can access and the most relevant.” ― Peter Kaufman

The larger and more relevant the sample size, the more reliable the model based on it is. But the key to sample sizes is to look for them not just over space, but over time. You need to reach back into the past as far as you can to build your sample. Looking to the past can provide essential context for understanding where we are now.
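Kaufman’s intuition about sample size can be sketched numerically. In the toy simulation below (my illustration, not from the text), repeated estimates of a simple quantity scatter less and less as the sample grows, roughly in proportion to one over the square root of the sample size:

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

# Estimate a fair coin's bias from samples of increasing size.
# The scatter of repeated estimates shrinks roughly as 1/sqrt(n).
true_p = 0.5

def estimate_spread(n, trials=200):
    """Range (max - min) of estimates of true_p across repeated samples of size n."""
    estimates = [
        sum(random.random() < true_p for _ in range(n)) / n
        for _ in range(trials)
    ]
    return max(estimates) - min(estimates)

for n in (10, 100, 1000):
    print(f"n={n:5d}: estimate spread ~ {estimate_spread(n):.3f}")
```

The same principle is why a model tested against billions of years of physics or biology is more trustworthy than one built on a handful of recent observations.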

A group of blind people approach a strange animal, called an elephant. None of them are aware of its shape and form. So they decide to understand it by touch. The first person, whose hand touches the trunk, says, “This creature is like a thick snake.” For the second person, whose hand finds an ear, it seems like a type of fan. The third person, whose hand is on a leg, says the elephant is a pillar like a tree-trunk. The fourth blind man who places his hand on the side says, “An elephant is a wall.” The fifth, who feels its tail, describes it as a rope. The last touches its tusk, and states the elephant is something that is hard and smooth, like a spear.

Transclude of Disciplines--like-nations--are-a-necessary-evil