Mental models are representations of some reality; they are necessarily abstracted and simpler than that reality. Mental models don't prescribe solutions, but they do provide a suggestive blueprint.
In computer science, classes exist for the same reasons. They're blueprints for objects, eliminating redundant work in the creation of object instances. They're high-level representations with properties like encapsulation, abstraction, and inheritance.
Mental models are the classes of our world. They manifest as heuristics to be applied when applicable. Instances of mental models are created when a sufficient threshold of behaviors occurs to trigger your recall of the underlying model. It is object-oriented thinking.
Mental models help people recognize patterns, understand context, and make more effective decisions. By identifying one in a real-life situation, you immediately understand a lot more about that situation, and you can accomplish much more than you could from a place of pure observation. You know how to act accordingly — in the language of OOP, what methods are callable, what attributes exist, what behaviors are permissible. Successful identification of mental models affords higher-level thinking where you'd otherwise be stuck with lower-level thinking. They are effective shortcuts. They are the building blocks that spare you from reasoning from scratch for every problem.
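The OOP analogy above can be made literal with a short sketch. Everything here is illustrative — the class names, the trigger sets, and the `applies_to` threshold are invented for the example, not drawn from any library:

```python
# Hypothetical sketch: a mental model as a class, a situation as an instance.
class MentalModel:
    """Blueprint: defines what attributes exist and what methods are callable."""

    def __init__(self, name, trigger):
        self.name = name        # encapsulated state
        self.trigger = trigger  # set of behaviors that cue recall of the model

    def applies_to(self, observed_behaviors):
        # Recall fires when the triggering behaviors all occur in the situation
        return self.trigger <= set(observed_behaviors)

    def act(self):
        return f"apply {self.name}"


class Hormesis(MentalModel):
    # Inheritance: a specific model extends the general blueprint
    def __init__(self):
        super().__init__("hormesis", {"low-dose stressor", "adaptive response"})


situation = ["low-dose stressor", "adaptive response", "background noise"]
model = Hormesis()
if model.applies_to(situation):
    print(model.act())  # the ready-made shortcut: act per the model
```

Identifying the model is instantiation; once instantiated, you already know which methods are callable.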
Consider the fact that Newton would have failed had he not created an abstracted model in which his observations could be applied to (near) universal scenarios. Cataloging every apple falling from the tree doesn't scale the way F = ma, applied to any falling object, does.
Or take Herbert Simon's parable of the two watchmakers (from his paper "The Architecture of Complexity"). To build a watch of 1,000 pieces, consider two methods of construction:
Method 1: the watchmaker carefully assembles the watch 1 piece at a time, 1,000 times over. When the first watchmaker has to repair a watch, it must be reconstructed from single pieces.
Method 2: the watchmaker designed the watch in such a way that it could be constructed from ten sub-assemblies (100 pieces each). When the second watchmaker has to repair a watch, they identify the malfunctioning sub-assembly and replace that component of the watch.
There are significant advantages to the latter construction method - namely the resilience of sub-assemblies relative to a monolithic assembly. When a watch built from sub-assemblies needs repair, only a single sub-assembly is lost to the error, and the time to repair is far less.
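The resilience argument can be quantified with a toy calculation. The interruption probability below is an assumption chosen for illustration, not a figure from Simon's paper:

```python
# Toy model of the watchmaker parable: suppose each piece added has
# probability p of an interruption that scraps the unfinished assembly.
p = 0.01  # assumed per-step interruption probability (illustrative)

# Method 1: one monolithic assembly must survive 1,000 uninterrupted steps.
monolithic_success = (1 - p) ** 1000

# Method 2: each 100-piece sub-assembly only has to survive 100 steps,
# and an interruption costs at most one sub-assembly, never the whole watch.
subassembly_success = (1 - p) ** 100

print(f"monolithic watch finished uninterrupted:  {monolithic_success:.6f}")
print(f"one sub-assembly finished uninterrupted: {subassembly_success:.3f}")
```

Even with a small per-step failure rate, the monolithic method almost never finishes, while sub-assemblies finish routinely — the same reason modular mental models survive messy, interrupted reality.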
There is a simplicity that results from the application of a model. It's much harder to catalog every apple's fall than it is to apply the model F = ma to predict trajectories. It's much harder to keep track of 1,000 individual pieces of a watch than it is to keep track of its 10 components. Where it is difficult to mentally maintain the complexity of specifics, nuance, and detail, mental models simplify that complexity into digestible components.
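A minimal sketch of "apply the model instead of cataloging cases": with one formula (here constant acceleration under gravity, air resistance ignored), any falling object is covered — no catalog of individual apples required.

```python
# One model predicts every case: distance fallen from rest under gravity.
g = 9.81  # m/s^2, near Earth's surface

def fall_distance(t):
    """Distance (m) fallen from rest after t seconds, ignoring air resistance."""
    return 0.5 * g * t ** 2

# The same model covers the apple, the watch part, any dropped object:
for t in (1, 2, 3):
    print(f"after {t}s: {fall_distance(t):.2f} m")
```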
In a world of information inundation, mental models are frameworks to organize, classify, and act on that information without depleting the entirety of our mental bandwidth. Like object oriented programming, they afford consistency, coherence, and simplicity to complex systems.
“Well, the first rule is that you can’t really know anything if you just remember isolated facts and try and bang ’em back. If the facts don’t hang together on a latticework of theory, you don’t have them in a usable form. You’ve got to have models in your head. And you’ve got to array your experience both vicarious and direct on this latticework of models. You may have noticed students who just try to remember and pound back what is remembered. Well, they fail in school and in life. You’ve got to hang experience on a latticework of models in your head.”
- Charlie Munger
Some of My Favorite Models:
Hormesis (from Toxicology): Abstracted, there can be beneficial responses from low exposure to stressors (within what is called the “hormetic zone”) that at high exposure would be deleterious & toxic (see Wikipedia). Relates to Taleb's Antifragility framework describing things that gain from disorder
1st Principles Reasoning: read Tim Urban's "The Cook and The Chef"
Ockham's Razor: among competing explanations, the one requiring the fewest assumptions is most likely to be true
Hanlon's Razor: never attribute to malice what could be adequately explained by carelessness
Semmelweis Reflex: the reflex-like tendency to reject new evidence or new knowledge because it contradicts established norms, beliefs or paradigms
Optimistic Probability Bias: wanting something to be true so badly that you fool yourself into thinking it is likely to be true (because you are too optimistic about the probability of success)
Moral Hazard: taking on more risk because you believe you are protected from that risk's consequences
Principal-Agent Problem: the self-interest of the agent may lead to suboptimal results for the principal. A prime example of incentive misalignment
Goodhart's Law: when a measure becomes a target, it ceases to be a good measure
Compound Interest: exponential thinking
So many problems today stem from an asymmetry: the exponential growth of technology outpacing humans' linear capacity to adapt.
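Exponential thinking is easier to feel with numbers. The rate, principal, and horizon below are illustrative, not data from the post:

```python
# Sketch: exponential (compound) growth vs. linear (simple) growth.
principal, rate, years = 100.0, 0.07, 30  # assumed illustrative values

compounded = principal * (1 + rate) ** years      # grows exponentially
linear = principal + principal * rate * years     # grows linearly

print(f"compound growth after {years}y: {compounded:.0f}")
print(f"linear growth after {years}y:   {linear:.0f}")
```

The two start nearly indistinguishable and diverge wildly — which is exactly the adaptation gap described above.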
Eisenhower Decision Matrix: x-axis (Urgent ←→ Not Urgent) vs. y-axis (Important ←→ Not Important), which dictates whether to do, decide, delegate, or delete
Sayre's Law: in any dispute, the intensity of feeling is inversely proportional to the value of the issues at stake
Parkinson's Law of Triviality: organizations tend to give disproportionate weight to trivial issues
Opportunity Cost: the cost of a choice not taken
Leverage: listen to Naval's "New Leverage"
Pareto Principle: 80/20 Rule
Default Effect: most people accept the default option
Parkinson's Law: work expands so as to fill the time available for its completion
Hofstadter's Law: it always takes longer than you expect, even when you take into account Hofstadter's Law (also see Recursion)
Heuristic: sibling of mental models
Inertia: resistance to change in the current state
Strategy Tax: incurred when decisions lock you into a long-term position. Organizations have existing agendas and have to fit their progress into those agendas - the compromise that results is the strategy tax
Shirky Principle: institutions will try to preserve the problem to which they are the solution
Maslow's Hammer: if all you have is a hammer, everything looks like a nail
Potemkin Village: something specifically built to convince people that a situation is better than it actually is
Joy's Law: no matter who you are, most of the smartest people work for someone else
Spacing Effect: learning effects are greater when that learning is spaced out over time
Regulatory Capture: a barrier to entry due to regulation whereby regulatory agencies get captured by special interest groups they are supposed to be regulating, ultimately protecting these entities from competition
Cargo Cult: the belief that imitating the surface forms of a more technologically advanced society will bring the wealth/benefits of that society - without understanding the behaviors that actually produce the desired result (see Feynman's "Cargo Cult Science")
Attribution & More Mental Model Resources:
Farnam Street - https://fs.blog/mental-models/
Super Thinking - https://superthinking.com/
Charlie Munger - A Lesson on Elementary Worldly Wisdom
Erik Torenberg - https://twitter.com/eriktorenberg