Quora is one of my favorite websites. It’s chock full of amusing, insightful answers to questions about every subject under the sun. Sometimes I write answers, and they’re all about learning, memory and how the brain works.
Answering questions is generally something I enjoy doing, but lately I’ve been getting a deluge of requests for answers to questions that can all be summarized like this:
“How do I recall every single detail from every single moment of my life without any effort on my part?”
(Bonus points if they use “lifehack” somewhere in the question.)
Occasionally I’ll bite on a question like this if it’s got some nuance I want to explore, and I do see it as a good thing when certain misconceptions about the brain are cleared up.
But I’ve been thinking long and hard about why these questions bother me so much. At first, it was a sort of smug “Really, guys? You think the brain works that way?”
It struck me as odd that people would ask questions like this when there’s so much good information out there to show them the error of their ways.
Now I believe I have an answer, and it opened up a surprisingly deep rabbit hole.
The generally-accepted definition of a system is a group of components that, combined, provide functionality that the components cannot provide on their own. It’s a bit vague, but the general idea is simple to grasp: things working together to get unique results.
Nearly everything around you can be broken down into systems in one way or another.
An immediately understandable example of a system is the computer you’re using to read this text. If you open up a computer and look at the various bells and whistles inside, you’ll quickly grasp how that definition of systems works. Your computer’s power supply doesn’t really do much if it isn’t hooked up to the motherboard.
The motherboard by itself is worthless without the power supply, processor, hard drive and everything else that plugs into it. Your mouse would just be a strangely ergonomic piece of plastic without a computer with a functioning power supply and motherboard.
All of these parts are interdependent and are only of use to you when they’re all combined in specific ways.
Your car is another easy way to think about systems. A tire standing on its own might provide some enjoyment if you roll it down a hill (especially if it’s a tractor tire and it goes through someone’s living room – I’m looking at you here, Dad).
But combine it with three other tires, an axle, a motor and all the other parts of a functioning car, and suddenly that tire is an extremely useful piece of equipment.
To take it even further, you can find a system inside that individual tire as it careens down the slope. The tire is itself a small system of parts that, once again, don’t mean much on their own.
A dollop of rubber and some metal bits don’t offer any real utility. Bring them together and you’ve got something that can be attached to a more complex system, like a car or an airplane.
The problem most people have with systems is that, outside of simple examples like the ones I give above, trying to wrap their minds around larger and less transparent systems becomes a real chore. Try, for example, to consider how the United States military is organized.
It has so many different avenues of analysis (due to its organizational levels, political conflicts, budgetary considerations, etc.) that a single person is not likely to come up with an accurate idea of how the overall system functions.
This is because of another closely related concept known as complexity.
Wikipedia currently has a hilarious definition of complexity:
There is no absolute definition of what complexity means; the only consensus among researchers is that there is no agreement about the specific definition of complexity. However, a characterization of what is complex is possible. Complexity is generally used to characterize something with many parts where those parts interact with each other in multiple ways.
Note: As of today (November 13th, 2015), the article has a warning that it might need to be entirely rewritten to meet quality standards. It’s not hard to see why.
Let’s use a simple, but concrete, definition of complexity: the number of components you have to explore within a system before you can figure out what’s not working.
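That working definition lends itself to a quick back-of-the-envelope sketch. Assuming, purely for illustration, that diagnosing a fault means checking each component and each small group of interacting components, the space a mechanic has to search balloons combinatorially as parts are added (the component names below are made up):

```python
import itertools

def diagnostic_search_space(components, max_interaction_size=2):
    """Count candidate fault locations: each component alone,
    plus each small group of components that could be interacting badly."""
    count = 0
    for size in range(1, max_interaction_size + 1):
        count += sum(1 for _ in itertools.combinations(components, size))
    return count

# A Wright-Flyer-ish system: a handful of parts, few interactions to check.
flyer = ["wing wires", "left prop", "right prop", "engine", "launch rail"]
print(diagnostic_search_space(flyer))     # 5 single parts + 10 pairs = 15

# A toy stand-in for an airliner: just 40 subsystems already means hundreds
# of pairwise interactions to rule out.
airliner = [f"subsystem_{i}" for i in range(40)]
print(diagnostic_search_space(airliner))  # 40 singles + 780 pairs = 820
```

The point isn’t the exact numbers; it’s that the checking work grows much faster than the part count, which is exactly why “what’s not working?” gets harder to answer as systems grow.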
To make this properly sink in, consider this question: how would you diagnose a problem with the Wright Flyer? Then consider this: how would you diagnose a problem with a 747?
The Wright Flyer was the original powered aircraft which took flight in 1903. From a 21st century perspective, this was a very simple (non-complex) machine. It was hand-built by the Wright brothers out of wood, wire and fabric.
Steering was accomplished by warping the wing surfaces, and power came from a single rudimentary combustion engine that drove two rear-facing propellers via bicycle-style chains. Taking off was a matter of placing the Flyer on a track and launching it.
Finding a suitable landing spot was pretty much just finding any place that wasn’t the ocean, a forest or sheer cliff.
The Flyer was a simple system: its parts had to be combined to provide a unique function (powered flight), but when something went wrong, the cause was usually obvious.
Wings not flexing? Probably a broken or badly configured wire. Not moving forward? That’s because one of the props isn’t spinning. And so on.
There aren’t that many different pathways within the system to begin with, so pinpointing issues isn’t that much of a task. The problem could still be serious – “hey, the wing isn’t flexing and I’m 150 feet off the ground…” – but not opaque.
Boeing’s massive 747 airliner is an entirely different story. Rather than use a bunch of superlatives to convince you of how complex it is, I’ll just let you read this:
The Boeing 747 design was such a departure from the technology then in use that it took 75,000 engineering drawings and 15,000 hours of wind tunnel testing to come up with the first Boeing 747 prototype. Five aircraft then completed a ten month 1,500 flying hour test program to gain certification for commercial operation.
Now consider how you would diagnose a problem within that system.
This actually popped up in my everyday life about a week ago. When I realized my schedule wouldn’t let me walk where I needed to go, I booked an Uber and was picked up by a friendly, recently retired aircraft mechanic.
He’d started working in the 1970s for a now-defunct airline that was absorbed by a major American carrier, and told me he’d tooled around on basically every major jet that’s flying today. Since I’m weird and think about systems all the time, I asked him about how problems were found and diagnosed.
He wrinkled his nose, adjusted his glasses, then thought for a moment. “Well,” he said, “sometimes the pilots would tell us they’d hear a knocking sound when they were in a certain stage of flight. We’d always joke around and ask if anyone had snuck aboard.
If there was a light or something, we might have had an idea about what the problem was. But a lot of the time, we’d look and couldn’t find anything. So we’d just shrug our shoulders and tell the pilots to let us know if it kept happening.”
At first glance, that might sound horrifying. “Knocking sound? Meh, forget about it.” But the reality is that airliners have so many interconnected parts that diagnosing problems is a spectrum, from “we have to figure this out” to “whatever, the plane still works.”
They obviously can’t ignore a problem like “hey, this engine won’t start”, but they have to seriously weigh the pros and cons when the problem is non-critical and ambiguous.
Mechanics are expensive, and airlines don’t want to pay them to hunt down issues that aren’t going to affect operations (and thereby make shareholders unhappy).
This last point brings us to a critically important part of analyzing systems: information.
Since we’re already riding on the “simple definitions” gravy train, I’ll make this one easy to digest as well: information is anything that reduces uncertainty. Information is often viewed as any kind of data, but in this context that is not correct.
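This “reduces uncertainty” phrasing echoes information theory, where uncertainty is measured as entropy and information is the drop in entropy after an observation. Here’s a minimal sketch; the dead-engine scenario and its probabilities are invented for illustration:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: the average uncertainty of a distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Before any clues: four equally likely causes for a dead engine.
prior = [0.25, 0.25, 0.25, 0.25]

# A warning light rules out two causes; the remaining two stay equally likely.
posterior = [0.5, 0.5]

info_gained = entropy(prior) - entropy(posterior)
print(info_gained)  # 1.0 bit: the light halved the space of possibilities
```

The warning light is information precisely because it shrank the set of possibilities; the same light flashing in an empty hangar, with no problem to narrow down, would reduce nothing.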
In a system, the ease of finding out what’s wrong (complexity) is often determined by how well information flows from the system to a user.
With the Wright Flyer, information about what’s wrong flows directly to the pilot because pretty much everything is within view of the pilot’s seat. The pilot can look around and determine very quickly where a problem is because there isn’t anything blocking the flow of information.
They can feel a loss of power and turn around to see if one of the props has stopped spinning. They can spot a broken wire and figure out right then and there why they aren’t able to turn as well as they should.
The pilots of a 747 have little lights in the cockpit that can provide clues to problems, but in many situations information is pretty limited. They might not have any idea what caused a problem even after they’ve landed and a skilled mechanic has taken a look.
Even more obvious situations, like an engine going out mid-flight, introduce quite a few questions that the pilots can’t answer. They know that the engine isn’t working, but they almost certainly don’t know why it went out.
Data doesn’t really become information unless it’s within some kind of context or system.
If someone walked up to you and randomly started going through the various ways a 747 engine could fail, you’d be confused and probably want to walk slowly away from that person. But if you’re a pilot flying a 747 with a dead engine and your co-pilot says those same things to you, that piece of data is now information.
Within the context of that situation, it reduced uncertainty and gave you a better idea of why the engine died (even though you might not be able to do anything about it). To a non-pilot, it might as well be gibberish.
This is an important distinction to consider for anyone who uses the internet. The amount of sheer data is overwhelming, but the amount of useful, uncertainty-reducing information is not quite as clear.
A good system optimizes information flow as much as possible. For example, the reason dealing with big corporations and governments is such a pain in the ass is primarily because they have high levels of complexity, and in turn their information flows are weak or nonexistent.
Someone has to fill out Form A, hand it off to Person B, hope that Person B files it with Department C and then pray that Department C gives them an answer sometime between now and when the Sun devours the Earth.
Startups are often much better vehicles for information flow because they either don’t have a high level of complexity (a few guys in a garage) or they embrace more efficient structures that allow information to move easily between different business units.
The general idea is this: without optimization, big systems tend to get bogged down and processes grind to a halt. With optimization, even large systems can run remarkably efficiently, within the constraints imposed by their intrinsic complexity.
Now that we have some key concepts covered, let’s go back to why I was getting irritated by people’s questions about having a perfect memory. After thinking about this much more than I probably should have, I realized that the real problem isn’t necessarily that people are lazy or stupid (although you can’t always rule those out).
Instead, the root issue is a lack of awareness of the systems within their own minds. When considering how they might become better thinkers, they avoid thinking about their internal systems and instead seek out a single piece of information.
This is an unfortunately common sentiment, and self-help gurus have been making buckets of money off of it for a long time. The idea is appealing: “Just learn this one thing and you’ll have the life of your dreams!” Who doesn’t like the sound of that? Even I think it would be pretty kick-ass to have such an Earth-shattering piece of information.
Even though I can’t give a concrete reason for why this is, my own hypothesis is that people view their brains as mystery machines that are just sort of there. Most people seem aware of the complexity of brains (and yes, they are very complex) and come up with a variety of folksy explanations for how they work.
This is where many intuitive-sounding-but-incorrect ideas about the brain (such as “learning styles,” “photographic memory,” and “you only use 10% of your brain”) come from. Mix in metaphysics and/or religion and the explanations get even further from the truth.
It is true that there are a great many things we still need to figure out about our brains. The origins of our thoughts and the inner workings of cognition are still being studied diligently. Philosophers and neuroscientists are still fighting a vicious battle over the nature of consciousness. There are some things about our brains that we may never fully understand.
But we do know more than most people realize, based on studies of both internal structure (neuroscience) and external behavior (psychology and cognitive science). Out of this work, we can get a decent idea about how information moves around in that oh-so-complex system we call a brain.
And this is where the big problem lies. People have this dread of trying to understand the brain because it is complex and there are ambiguous, often conflicting, ideas about some core functions. So rather than trying to look at the system, understand information flow within said system and then try to find ways to optimize that flow, they want to place information into a black box and hope it works correctly.
To me, this is like strapping a jet engine (yes, another aviation analogy!) onto a horse in order to break the sound barrier. That horse carcass might cause a sonic boom, but you aren’t getting any more utility out of the horse.
If you want to replicate the result, you’ll need to take out another horse. It’s a Pyrrhic victory: yes, that horse went supersonic. No, you cannot use that horse for anything else ever again.
You’ve failed to grasp the constraints of the system you’re using, so failure is guaranteed. If you had a more realistic goal, such as “make this horse go 5 mph faster,” you might be able to achieve it. It might take time, lots of oats and maybe even some steroids, but the system would probably allow it.
Full disclosure: I don’t know anything about horses, so maybe 5 mph is a stretch. You get the idea.
With this in mind, let’s consider the question I mentioned at the beginning of the article. People that ask this question are, in essence, saying the following:
“Hey, I’ve got this perfectly good horse, let’s strap an afterburner on this shit!”
Yes, a jet-horse image actually existed before I wrote this post. I thought, “Hey, wouldn’t it be funny if I made an image of a jet horse? Before doing that, I’ll check Google Images in the unlikely event someone’s already made it.” The internet is a weird place.
Information needs to be managed within the context of a system. If you want to remember more of the information you come across, you need to understand the system you’re trying to push it through. This includes understanding the constraints of the system so that you can know ahead of time whether an action is possible or not.
People who ask how they can get a photographic memory haven’t taken the time to realize that A) it’s not possible because of how the various systems in the brain work, and B) having shitloads of irrelevant information wouldn’t be that useful anyway. Anyone who’s taken the time to study and appreciate the systems of thought we all possess wouldn’t even bother asking such a question.
If you’ve ever tried to learn a new sport, you’ll be familiar with this phenomenon. For example, on my first day of boxing the instructor held up a pair of focus mitts (which I didn’t know were called focus mitts) and said “OK, give me a 1-2-3.”
The concept of “1-2-3” didn’t mean anything to me; was this a combination, a single punch, a series of steps? Because I didn’t have any mental systems in place that were capable of making sense of the information, it was entirely meaningless to me. Now that I’ve done it for a while, I can do “1-2-3” (which is a combination) without thinking too much about it.
In fact, after taking the time to develop my internal “boxing” systems, I can take on new information and make use of it rapidly. An instructor doesn’t need to explain to me why hip movement is important when throwing a hook or what a jab is – the system is in place and new information simply gets absorbed into what I already know.
Knowing what I know about learning, I walked into the boxing gym with the mindset that I wasn’t going to know anything and would spend most of the session there making a fool of myself.
The constraints were pretty clear: I knew nothing, and trying to take on advanced knowledge would not be a good idea. It would be a waste of the instructor’s breath.
If I’d walked in and tried to spar with an expert (a clear case of not respecting constraints), I would probably have gotten a concussion or two for my arrogance.
When you want to learn something, don’t be the guy that wants to spar on day one. Get your systems in order and then step into the ring.
Information is important, but without systems and the context they provide, it’s just data. If you take the time to learn the ins and outs of systems, you’ll be able to utilize information in ways you never thought possible.
You can integrate what you learn into your everyday life because you have taken the crucial step of identifying both the benefits and shortcomings of your brain.
I’ll even go a step further and say this: if you understand and optimize your systems, the information you accumulate is almost irrelevant. This is because a well-tuned system can adapt and take on new information in a highly optimized manner.
Someone else without that systems-centric thought process is just going to have a jumbled mess of data that they might be able to conjure pseudo-insights out of for cocktail parties.
If you want to have an edge, study your systems. The information will come, and be more useful, afterwards.
Sign up for my one-of-a-kind newsletter that’s read by over 1,000 people and I’ll send you my free, 7-part Learning Basics course.