You Might Already Be Screwed

Before going any further, it's important to note that we'll be operating on the assumption that artificial general intelligence (AI that possesses human-level intelligence) is not going to show up in the near future. The reason for this is twofold:

  1. It's not clear how long it might be until AGI shows up, but it looks like it's a long way off.
  2. If and when AGI is created, it's not unreasonable to assume that every human job is going to be in danger.

In other words, it's a highly speculative technology that, if it were invented, would make this book useless because nobody would be able to compete with the machines. If you want to know more about why I believe this, I recommend starting with the Wikipedia article for the "intelligence explosion" concept.

The core idea is that an AGI would be intelligent enough to improve itself, and the speed advantages conferred by computer hardware would make that process so rapid that a human-level machine would quickly become far superior to us.

If AGI can be created and it does lead to an intelligence explosion, we're all screwed - at least in terms of work. Why would a company ever hire people to produce goods and services again? Would the machines even be willing to take on the tasks we give them? There aren't any clear answers to these questions yet.

The future of AI is a hotly debated topic, and I'd prefer not to get bogged down in predicting when such a powerful technology might show up. We have no idea what its true impact would be if it were to come about, which is why it's such a point of interest for both technologists and philosophers.

With all this in mind, I've decided to focus on sub-AGI automation technology, specifically machine learning. It's already making a big splash in how we work and live, and we can make more rational decisions if we focus on technology that is already being deployed in the real world.

Worrying about Terminators running the world without humans isn't helpful to what I'm trying to accomplish. We don't know if it will happen (or even if it could happen), and if it did unfold that way then we'd be screwed regardless. With this in mind, let's move forward on the assumption that machine learning is going to be the dominant automation framework for the foreseeable future.
