
How to Talk About Algorithms

Ace Eddleman

This is part of The Algorithmic Society, an ongoing series about how algorithms and algorithmic thinking have taken over the world. Want to know when new content shows up? Sign up for my newsletter here.

This invasion of one’s mind by ready-made phrases can only be prevented if one is constantly on guard against them, and every such phrase anaesthetizes a portion of one’s brain. 

-George Orwell, Politics and the English Language

The term “algorithm” has become the latest linguistic tool for sounding sophisticated when talking about technology, a sort of TED Talk-esque shortcut to identifying with the Silicon Valley set. There’s something about the word itself that mystifies the average mind and imbues the speaker with an air of expertise.

When someone at a cocktail party starts using the word “algorithm,” it becomes evident that this person is well-read and keeping up with the times. In other words, it’s become another way to signal status by leveraging (the appearance of) technical knowledge.

The word itself has become a stand-in for the god-like power that the largest internet platforms hold over our daily lives, a device for describing the black boxes behind behemoths like Google and Facebook. How we use the term “algorithm” hints at a sort of cowed awe at the sheer magnitude of their impact on the modern world.

We don’t know how they operate or who really pulls the levers (I imagine someone thinking are there levers on algorithms? as they read this) behind the curtain. Owners of algorithms are fine with this, because it makes their lives easier — they get to keep trade secrets to themselves, and they’re given a veneer of respectability in the process.

Ian Bogost described this dynamic best:

The next time you hear someone talking about algorithms, replace the term with ‘God’ and ask yourself if the meaning changes.

-Ian Bogost

Part of this has to do with the connection between algorithms and the gargantuan, public fortunes they’ve created in the era of high technology. There’s a new sort of American dream associated with algorithms, most famously captured in the movie The Social Network.

There’s a scene in that film where Eduardo Saverin and Mark Zuckerberg (played by Andrew Garfield and Jesse Eisenberg, respectively) are working on an algorithm by writing on a window.

Eduardo Saverin (Andrew Garfield) works on an algorithm with Mark Zuckerberg (Jesse Eisenberg)

While the algorithm in this scene is for a hacked-together pet project, the implication is clear: algorithms with humble beginnings can conquer the world. Multiple billion-dollar fortunes were spawned by this primordial bit of mathematics that got translated into code.

Now algorithms are all over our cultural landscape, which is itself dominated by business narratives and the investors who drive them.

For example, it is now unimaginable to launch a startup that doesn’t offer some kind of algorithm that’s at least a minor improvement over an existing one. Not only is a business without one uncool, but venture capitalists tend to shy away from it because it doesn’t scale.

In short, algorithms are ever-present in our social media, on our phones, and in every domain imaginable that involves what could be considered “technology.” Algorithms are everywhere, algorithms are in everything, algorithms are everything.

Marc Andreessen famously said “software is eating the world,” but I would argue that the more accurate phrasing is “algorithms have already eaten the world.”

Defining Algorithms

An algorithm in action

There’s an odd contradiction at the heart of our views on algorithms. On the one hand, we all seem to understand (to varying degrees) that algorithms play an outsized role in our lives. On the other, nobody seems to know what the word “algorithm” means.

What’s even weirder is that this definitional problem isn’t confined to the tech-illiterate: even computer scientists don’t have a generally accepted definition of what an algorithm is.

It’s a long-standing debate, and some even argue that it’s impossible to reliably sort algorithms from non-algorithms because of a computer science result known as the halting problem: there is no general way to decide whether a given procedure will ever finish.

This ambiguity can be viewed as a positive or a negative, depending on the algorithm and how you relate to it. But that’s all part of a larger exploration that I’ll get into later. For now, we need to come up with some kind of starting point for discussing algorithms that makes sense.

Since there isn’t a single, unified definition of an algorithm, we’ll have to use a simplified (and therefore flawed) one for now:

A set of well-defined steps for solving a specific class of problems.

What’s nice about this definition is that it gives us the ability to generalize the idea of an algorithm beyond its mathematical and computational origins. We can use it to describe any process that’s designed to operate in a repeatable, predictable manner.

We could say, for example, that low-tech activities like cooking involve the use of algorithms. After all, a recipe is a set of unambiguous steps (add ½ cup sugar, bake for 25 minutes, etc.) for “solving” specific food-related problems (how to convert ingredients into food, which in turn solves the problem of being hungry, etc.).
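To make the parallel concrete, here is a minimal sketch in Python of a recipe written as if it were an algorithm: a fixed sequence of well-defined steps that transforms inputs (ingredients) into an output (food). The quantities and steps are invented for illustration, not a real recipe.

```python
# A toy "recipe as algorithm": unambiguous steps that turn ingredients
# (the inputs) into a cake (the output). Quantities are illustrative only.

def bake_cake(flour_cups: float, sugar_cups: float, eggs: int) -> str:
    # Step 1: combine the ingredients into batter.
    batter = f"{flour_cups} cups flour + {sugar_cups} cups sugar + {eggs} eggs, mixed"
    # Step 2: preheat the oven to a fixed temperature.
    oven_temp_f = 350
    # Step 3: bake for a fixed amount of time.
    bake_minutes = 25
    # Output: the transformed result.
    return f"cake (from {batter}; baked at {oven_temp_f}F for {bake_minutes} minutes)"

if __name__ == "__main__":
    print(bake_cake(2, 0.5, 3))
```

Run the same steps on the same inputs and you get the same result, which is exactly the repeatable, predictable behavior the definition points at.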

Bureaucratic procedures at a large company or government organization are also algorithmic if we run with this definition.

An employee can be seen as a sort of algorithm as well: their whole job is to carry out specific sets of steps in order to accomplish goals for the organization, one procedure for each problem they’re presented with.

A Transformative Force

It’s also worth adding another dimension to this definition:

Algorithms transform some set of input values into a desired output or set of outputs.

This is key to understanding all things algorithmic. Algorithms transform what they take in and generate something novel in the process.

In some sense, this is the most important way to look at algorithms. It makes you realize that there’s some objective involved, that what the algorithms are creating isn’t just math — they’re machines of creation.

Algorithms aren’t just transforming inputs from computer systems, either. As they integrate more and more with physical objects (including people), they are using the real world itself as a set of inputs and creating new landscapes in their wake.

Now we can start to unravel the linguistic consequences of using the word “algorithm” the way we do. By talking about algorithms as our digital overlords — never to be questioned or examined in a meaningful way — we hand them power they don’t deserve.

Even though they’re designed by high-caliber computer scientists, these algorithms are still built and operated by humans. This means they’re flawed in countless ways, and even our most powerful computers can’t rid themselves of their designers’ errors.

When you embrace that fact, it becomes clear that talking about algorithms with a glint of admiration in our eyes is often a mistake. While some are worthy of praise, quite a few are far more fragile, inaccurate and exploitable than their owners would like you to know.

More than anything, we need to get rid of the idea that “algorithm” is just a hand-wavy label for new technology. Algorithms are real, they serve specific purposes, and it’s possible to at least begin to understand how they operate if you equip yourself to probe them.

Their impact, despite their flawed nature, is enormous. We’ve built, and continue to build, an algorithmic society, and it’s simply irresponsible to treat algorithms so haphazardly. It is the duty of every intelligent, capable adult to get a handle on what algorithms are and how they are shaping our world.

And this starts by learning to talk about algorithms as something other than magical, code-driven dragons. It starts by seeing how they’re infiltrating not just our computers, but our very identities, our everyday existence.

They are fractal, spinning themselves through increasing levels of abstraction as they generate billions of dollars and shift our personal lives in ways that even their creators often don’t understand.

Algorithms are, in short, the most important topic of the modern era. It is my goal with this series to give you a glimpse into just how large of an impact they’re having, and then provide you with the tools you need to navigate this world more intelligently.

There will be more to learn, but consider this the starting point.

The Exploration-Exploitation Dilemma, Simplified

Ace Eddleman

This is part of my 5 Minute Concepts series, which is designed to help you understand fundamental concepts about subjects like learning, memory and competition in the shortest time possible. Each episode is available in video format on my YouTube channel and audio via my podcast. If you prefer to read, the transcript is below.

Want to know when new content shows up? Sign up for my newsletter here.

Transcript:

I’ve written about the exploration-exploitation dilemma before, but only in a long-form essay format. Since I think this is such a critical concept, and I realize that not everyone has the time to read a big essay, I’ve created this simplified explanation.

Just a warning: like any other 5 Minute Concepts piece, there’s always more to the story. I’m just trying to give you the most important parts in 5 minutes or less.

Anyway…

Let’s start with a stripped-down definition: the exploration-exploitation dilemma is the choice we all have to make between learning more or taking action with the knowledge we already possess.

Learning more is exploration, acting with current knowledge is exploitation.

With either action you’re trying to find some way to maximize what’s often referred to as “reward,” or some end-state that you find desirable.

The reason this is a dilemma is simple: you can’t explore or exploit exclusively and win in the long run.

If all you do is explore, you’ll never take action in the world — which means you get a predictable payoff of exactly zero. There isn’t much to be gained from passively gathering information until you die.

On the other hand, taking action without learning anything is also a long-term losing strategy. You do get some kind of reward by exploiting a known path, but that means you’re giving up any chance at a higher payoff that might be staring you in the face without you knowing about it.

The real kicker here is that exploring is what drives the value of exploitation, and vice versa. You need to explore in order to find good paths for exploitation, and you need to exploit in order to get a reward for your exploration. Both actions are dependent on each other.

What you’re balancing in either case is opportunity cost. You have a limited amount of resources, such as time and money, to work with over the course of your life. If you explore, you’re by default not exploiting, and vice versa.

Consider this example: Let’s say you’re scrolling through Netflix, looking for something to watch for the next couple of hours.

You notice that a movie you’ve seen a dozen times is one of the choices and consider watching it. Right next to that is a movie you’ve never seen before.

Choosing the movie you’ve seen provides a specific emotional payoff for you. You know all the best parts and you’re well aware of how the entire experience will make you feel.

Choosing the movie you’ve never seen means taking a certain amount of risk. There’s an unknown payoff for watching this new movie, and it might end up being a waste of two hours. Those two hours will be gone, never to return.

But you might also discover a new favorite movie, or genre, or director, that you never knew about.

It’s easy to get sucked into either extreme. I’ve known people who spent their whole lives reading, accumulating a veritable library’s worth of knowledge in their heads, but never tried to do anything with it.

And, of course, I’m sure we both know people who have never read a book or stopped to think for even a moment about whether their beliefs and actions should be altered in some way.

While this is an unsolved problem (and trust me, many people have tried to figure it out), there are some good rules of thumb to run with. First of all, don’t favor a binary approach. Only exploring or only exploiting doesn’t work in the long run.

Secondly, it pays to spend a lot of time exploring early on and then shift more and more toward exploitation over time. But (and this is critical) you never stop exploring completely. For a person in the real world, exploring should always be part of your strategy.

There’s always some accommodation made for learning new things. This is known as the epsilon-decreasing algorithm, and, if you just want a simple heuristic for managing this dilemma, it’s a pretty good place to start.
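To make that concrete, here is a minimal sketch in Python of an epsilon-decreasing strategy on a toy two-option problem. The payouts, the 1/t decay schedule and the option setup are illustrative assumptions on my part, not something the concept itself prescribes.

```python
import random

# Toy two-option setup: option 1 pays more on average, but you only discover
# that by occasionally trying both. Payouts and decay are illustrative.

def payoff(option: int) -> float:
    """Simulated reward for picking an option."""
    return random.gauss(1.0 if option == 0 else 1.5, 1.0)

def epsilon_decreasing(steps: int = 1000) -> float:
    totals = [0.0, 0.0]   # cumulative reward per option
    counts = [0, 0]       # how many times each option was tried
    total_reward = 0.0
    for t in range(1, steps + 1):
        epsilon = 1.0 / t                        # explore a lot early, less later
        if random.random() < epsilon or 0 in counts:
            choice = random.randrange(2)         # explore: pick at random
        else:
            averages = [totals[i] / counts[i] for i in range(2)]
            choice = averages.index(max(averages))  # exploit: best option so far
        reward = payoff(choice)
        totals[choice] += reward
        counts[choice] += 1
        total_reward += reward
    return total_reward

if __name__ == "__main__":
    print(f"Total reward over 1000 choices: {epsilon_decreasing():.1f}")
```

The key property is that epsilon shrinks over time but never hits zero, so some exploration always survives.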

Third, there are always inflection points where it makes sense to shift from one to the other. Sometimes it’s a moment where you realize you’ve finally reached a level of knowledge that grants you a new level of competence and the time to utilize it has come. Passing a professional exam is a simple example of this.

Other times you might suffer a bitter defeat and receive an unfiltered signal that it’s time to explore. If a big project you’ve been working on fails, for example, you might need to go back to the drawing board and evaluate how to improve for your next attempt.

I could talk about this for hours, but in general I want you to understand this: figuring out how to spread your time between exploration and exploitation is perhaps the most important problem you’ll ever face.

Don’t push this into the background — be conscious and deliberate about it. Doing that might just change your life in ways you never saw coming.

Photographic Memory

Ace Eddleman

This is part of my 5 Minute Concepts series, which is designed to help you understand fundamental concepts about subjects like learning, memory and competition in the shortest time possible. Each episode is available in video format on my YouTube channel and audio via my podcast. If you prefer to read, the transcript is below.

Want to know when new content shows up? Sign up for my newsletter here.

Transcript:

Let’s talk about photographic memory, a topic that I’ve been asked about more times than I can count.

First, we need to get one thing out of the way: photographic memory is a myth. That’s right, nobody has ever been able to prove that they have a photographic memory.

The reasons for this are related to what I talked about in Why Your Brain is Lazy, namely that your brain is forced into making trade-offs because it uses so much energy all the time.

When your brain encounters a stimulus in the world, it makes a decision to either keep it (what’s called “encoding”) or get rid of it. This is based on whether your brain sees the stimulus in question as salient.

If it is salient, then the encoding process will probably kick off, and if it’s just an everyday, run-of-the-mill stimulus, then it won’t.

Your brain will always make this trade-off, and no amount of training can circumvent such a fundamental biological principle. It’s an evolved survival strategy designed to reduce the amount of energy that gets wasted during the memory formation process, and there’s no way to escape it.

The question, then, is: why do so many people believe that photographic memory is real?

A simplified answer is that popular media loves to use it as shorthand for high intelligence, and most people don’t question that stereotype.

Elon Musk is often used as an example of the hyper-genius who possesses a photographic memory, but as far as I know that claim about his memory has never been tested.

The more complete answer is that there are people who do exhibit extraordinary memory abilities, and those abilities get mis-classified as photographic memory.

There are some people who could be called savants who have exhibited world-class memory abilities. Kim Peek, the inspiration for Dustin Hoffman’s character in Rain Man, could absorb incredible amounts of information in one sitting. Stephen Wiltshire is a savant who can recreate skylines in incredible detail after seeing them once.

These savants do have great memories, but there are two important qualifiers to consider: 1) their memories are never good enough to qualify as “photographic” (Stephen Wiltshire’s pictures contain many mistakes, for example), and 2) the memory abilities they possess appear to come at a huge cost, as they’re not able to take care of basic everyday tasks.

So their increased capacity for memory is a trade-off that doesn’t appear to be beneficial to their survival, which says a lot about how finely tuned the standard memory algorithm is.

Some people also exhibit what’s called hyperthymesia, or superior autobiographical memory. These people have an uncanny ability to remember the minute details of their day-to-day lives.

These individuals appear to run some kind of narrowly focused memory algorithm that doesn’t extend to their overall memory abilities. In other words, their brains prioritize a specific type of information for encoding, but that benefit doesn’t carry over to any other facet of their memory.

One last example is the group of people who compete at memory competitions. Memory competitors do things like memorize entire decks of cards within a few minutes.

This is all accomplished with the use of what are called mnemonics, which are memory tricks that can be used to memorize (for short periods of time and with lots of practice) specific bits of information.
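As one illustration of how this kind of trick works, here is a minimal sketch in Python of a peg-word mnemonic, one common family of these techniques. The peg words and the example number are my own invented choices, not a system any particular competitor uses.

```python
# A toy peg-word mnemonic: each digit gets a fixed, vivid image, and a number
# becomes a short "story" of those images. Recalling the story lets you read
# the digits back out. The peg words below are arbitrary illustrations.

PEG_WORDS = {
    "0": "hero", "1": "sun", "2": "shoe", "3": "tree", "4": "door",
    "5": "hive", "6": "sticks", "7": "heaven", "8": "gate", "9": "vine",
}

def number_to_story(number: str) -> str:
    """Convert a digit string into a chain of peg images."""
    images = [PEG_WORDS[d] for d in number if d.isdigit()]
    return " -> ".join(images)

if __name__ == "__main__":
    # 314159 becomes: tree -> sun -> door -> sun -> hive -> vine
    print(number_to_story("314159"))
```

Card memorizers typically make the same basic move, pairing a fixed image with each card and placing the images along a memory palace, just at a much larger scale and with far more practice.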

Nobody with a claimed photographic memory has ever won a world memory championship, which is hilarious since you’d think that’s where they’d show up. If you had a photographic memory, why not cash in on it?

Anyway, the general idea to take out of all this is that photographic memory doesn’t exist. You can improve your memory in specific ways with specific techniques, but overall you can’t get away from the fact that your memory automatically tosses most of what it encounters.

Why Your Brain is Lazy

Ace Eddleman

This is part of my 5 Minute Concepts series, which is designed to help you understand fundamental concepts about subjects like learning, memory and competition in the shortest time possible. Each episode is available in video format on my YouTube channel and audio via my podcast. If you prefer to read, the transcript is below.

Want to know when new content shows up? Sign up for my newsletter here.

Transcript:

Let’s talk about why your brain is lazy.

The short explanation is it’s lazy because it has to conserve energy as much as possible.

Energy in turn needs to be conserved because, even though your brain only represents about 2% of your total body mass, it burns through about 20% of your daily calories.

In other words, your brain’s an energy hog.

It’s a hog because it runs everything in your body, and keeping the human body running is, to put it lightly, a complex task.

Why is it so complex? Well, most of what you do is unconscious. Conscious thought only represents a small portion of the work your brain is doing.

For example, you don’t run your nervous system or maintain your internal organs with conscious thought. That’s all happening in the background, and all of it takes energy.

Your brain is never really turned “off” as a result, even when you think it is — it’s always busy managing something in your body.

This is why the whole “you only use 10% of your brain” myth is such a joke. Your brain is a hive of neuronal activity at all times because it’s busy running the whole system. That’s true even when you’re asleep: you may feel idle, but your brain never is.

Anyway, because it’s so busy allocating resources all over your body, your brain has developed a long list of cognitive shortcuts as a means of saving energy.

Forgetting is the best example of a cognitive shortcut: your brain forgets most of what it encounters because it would be too energy-intensive to remember tons of useless data.

The brain is thus optimized to be, to borrow someone else’s terminology, a “change detector.”

Your brain’s lazy memory algorithm focuses on encoding the stimuli that are salient and dropping everything that isn’t.

For example, you’ll remember if, during your daily drive to work, a zebra steps in front of your car even though you live in a major metropolitan area. That’s such an unusual event that it’s guaranteed you’ll remember it — it’s so salient that your brain will encode it as a memory.

This is why repetition is important in learning: you need to tell your brain (through repeated exposures) that a given stimulus is worth the energy expenditures involved in remembering it.

Your brain does this in order to take note of environmental cues that could potentially influence your survival. 

We suck up anything that’s unusual because big changes in our environment can be dangerous. A zebra running around in the street could be indicative of a problem at the local zoo, which in turn could mean there are dangerous animals running around that could eat you.

With all of these lazy shortcuts, your brain is making a trade-off of some kind. The brain is asking itself: “Is it worth pouring resources into this?” If the answer is “no,” the stimulus gets dropped.
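As a way of picturing that policy, here is a toy sketch in Python of a salience-gated memory: a stimulus is kept only if it is surprising or has been repeated enough times to prove it matters. The threshold and example stimuli are invented for illustration; this is a cartoon of the idea, not a model of actual neurons.

```python
from collections import Counter

# A cartoon of the lazy memory policy: encode a stimulus only if it is
# surprising (the zebra) or repeated often enough to prove it matters
# (deliberate study). The threshold is an arbitrary illustration.

REPETITIONS_NEEDED = 3
exposures = Counter()
long_term_memory = set()

def encounter(stimulus: str, surprising: bool = False) -> None:
    exposures[stimulus] += 1
    if surprising or exposures[stimulus] >= REPETITIONS_NEEDED:
        long_term_memory.add(stimulus)  # deemed worth the energy: encode it
    # otherwise: drop it and save the energy

if __name__ == "__main__":
    for _ in range(3):
        encounter("route to work")                    # mundane, but repeated
    encounter("zebra in the road", surprising=True)   # one-off, highly salient
    encounter("billboard you glanced at")             # mundane, single exposure
    print(long_term_memory)  # keeps the commute and the zebra, drops the billboard
```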

People get frustrated with these shortcuts, but they’re an indicator of a healthy brain. There are exceptions, like Alzheimer’s or CTE, but in general shortcuts like forgetting mean your brain is acting in accordance with its lazy nature.

The key idea to pull from all of this is that your brain does what it does for a good reason, and you won’t be able to learn or use your memory well if you don’t understand these internal dynamics. 

Much of learning revolves around finding ways to use your brain’s built-in mechanisms for your benefit, but there will always be limitations. Don’t get frustrated, just accept that your brain will never be perfect.

To put this all in perspective: supercomputers that take up entire floors of industrial buildings can’t touch the capabilities of the human brain. 

How is it that such vast amounts of electrical and computing power come up short when trying to handle tasks, like language, that we find trivial? This is one of the enduring mysteries of the brain. But it should clue you in to the fact that your brain, despite its laziness, is not as flawed as you might think.

Managing the Exploration-Exploitation Dilemma

Ace Eddleman


At the core of every life is a single, difficult question: should I learn more, or should I make the most of what I already know? This is known as “the exploration-exploitation dilemma” (aka “the exploration-exploitation tradeoff”), and it’s the most important problem you’ll ever face.


