Is Your Job in Danger?
The advent of machine learning means that jobs traditionally viewed as impossible to automate are now in danger of being automated. Some jobs will be done exclusively by robots; others will rely less (in some cases, much less) on human input.
Before we go into specific examples, we need to define a framework for spotting occupations that are likely to get automated. Knowing how to do this will help you avoid going into careers without good long-term prospects or, alternatively, start the process of exiting a soon-to-be-automated career so you can stay ahead of the curve.
Here's the number one question for figuring out if you're going to be automated: "Is there a way to gather a significant amount of data about what I do on a daily basis?" If the answer is "No," then there's a good chance that you won't be replaced by a robot anytime soon.
But if the answer to that question is "Yes," then you need to immediately follow that up with another question: "Is that data already being gathered?" While a "No" is better than a "Yes" here, it doesn't mean you're safe. It means you're safe for the moment, but chances are good that it will only be a matter of time before the answer changes to "Yes."
In case you aren't sure about whether data is being (or even can be) collected, think about how many machines and/or pieces of software you interact with on a daily basis. This is important, because jobs that involve lots of tech are jobs where data can be easily collected. An easy way to think of this is to say that jobs which are heavily instrumented are easier to automate.
Another question you should ask yourself is: "How predictable is my work environment?" If your work is repetitive and the environment rarely changes, chances are good that you'll get replaced by an algorithm. This is a straightforward assessment to make because repeating predictable tasks is what computers have always been good at. In these cases, ML is rarely required and you'll probably be out of a job sooner rather than later.
A related question is: "How ambiguous is my work?" Even if there are lots of data points available, a job can be very difficult to automate when there's a lot of human-induced ambiguity involved. These types of jobs are usually heavy on human-to-human interaction, and tend to revolve around systems that are not easy to define.
What Can Be Automated
As we discussed in the last chapter, data is the key to automation. Once there are large datasets related to how people do their job, businesses will look for ways to introduce automation. So if you answered "Yes" to both questions, then there's a very good chance that you're going to lose your job.
Understanding how and what data is being gathered is critical to understanding an industry's automation potential. Let's walk through some examples so you can understand what I'm talking about.
Trucker: Definitely Going to Be Automated
Trucking is a multi-billion dollar industry that relies heavily on human labor to make money. Companies need to move physical goods across land, and trucking does that by putting a human being behind the wheel of a large, loud truck that will get those goods to their destination.
Unfortunately for those companies, trucking has to contend with the fact that people aren't that great at driving. Drivers make navigational errors, don't get enough sleep, take drugs or alcohol, and get into crashes. Even if a driver does everything right, they have to contend with other drivers on the road who might cause problems for them.
Modern technology has reduced the frequency of some of these problems, but the human element continues to be the problem. Fortunately for trucking companies (and unfortunately for truckers), it's now possible to gather substantial amounts of data about driving that can be used to automate the task.
Driving a truck is perhaps the definition of an instrumented job, since the entire role of the human revolves around interacting with a machine. While roads are a somewhat unpredictable environment, the technology for self-driving vehicles is already so well-refined that much of the risk has been taken out of the picture.
Companies like Otto are already at work on this, and it's fair to assume that there will continue to be a large amount of money and engineering effort thrown at getting people out of the driver's seat.
Fast Food Worker: Definitely Going to Be Automated
Fast food is a ripe target for automation because it's an assembly line for food. Getting food prepared and delivered to the customer is a well-defined, predictable, and repetitive process. It looks like this:
- Customer orders food; in this case, we'll say he orders a burger with fries and a drink.
- Employee A assembles a burger using a pre-determined recipe (grab buns, add 1 tbsp of ketchup, etc.).
- Employee B cooks fries using a pre-determined recipe (drop fries into oil, cook for 30 seconds, add 1 tbsp of salt, etc.).
- Employee A and B drop their finished products in a paper bag, which is grabbed by Employee C.
- Employee C fills a cup (of a pre-determined size) with Pepsi/Coca-Cola/whatever, then hands the cup and bag to customer.
This is a simplification of the process, but not by much. Each step of that algorithm could be done better, faster, cheaper, and with more precision than a human. As such, it's clear that fast food will eventually be a domain dominated by machines.
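The steps above can be sketched as a short program. The recipe details and employee roles come from the list; the function names and data structures are purely illustrative, not drawn from any real point-of-sale system:

```python
# A minimal sketch of the fast-food workflow described above.
# Every step is fixed and repeatable -- exactly the kind of
# predictable process that's easy to hand to a machine.

def assemble_burger():
    # Employee A: follow the pre-determined recipe.
    return ["bun", "patty", "1 tbsp ketchup", "bun"]

def cook_fries():
    # Employee B: drop fries into oil, cook, add salt.
    return {"item": "fries", "cook_seconds": 30, "salt_tbsp": 1}

def fill_drink(size="medium", brand="cola"):
    # Employee C: fill a cup of a pre-determined size.
    return {"item": "drink", "size": size, "brand": brand}

def fulfill_order():
    # Employees A and B drop their finished products in a bag;
    # Employee C adds the drink and hands everything over.
    bag = [assemble_burger(), cook_fries()]
    return {"bag": bag, "drink": fill_drink()}

order = fulfill_order()
print(order["drink"]["size"])  # medium
```

Notice that nothing in this sketch requires judgment or improvisation; each function is a fixed sequence of steps, which is why the job scores so poorly on the ambiguity question.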
Software Engineer: Degraded by Automation
I've met quite a few people who are confident that writing code will not be automated in the future. Not coincidentally, nearly all of those people were either already programmers or learning the trade. As a former programmer myself, I disagree.
The reason I say this is simple: there's a boatload of data out there about how programs are built (just look at GitHub if you don't believe me), and most modern programs tend to use the same kinds of components (UI, database, encryption, etc.). There's even a term in programming for how engineers are expected to structure their code to deal with specific types of problems: the design pattern.
Much of the economic opportunity in programming comes from building systems that take data from one place, transform it into something else, and then deliver it to end-users. This process is called Extract, Transform, Load (or ETL for short).
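To make the ETL idea concrete, here is a toy pipeline. The records and function names are hypothetical; a real system would read from files, APIs, or databases rather than an in-memory list:

```python
# A toy ETL pipeline: extract raw records, transform them into a
# cleaner shape, and load (deliver) the result. Data is hypothetical.

raw_records = [
    {"name": "alice", "score": "42"},
    {"name": "bob", "score": "17"},
]

def extract(records):
    # Extract: in a real system this would pull from a file,
    # API, or database.
    return list(records)

def transform(records):
    # Transform: normalize types and shape the data for end users.
    return [
        {"name": r["name"].title(), "score": int(r["score"])}
        for r in records
    ]

def load(records):
    # Load: deliver the end product, here sorted for display.
    return sorted(records, key=lambda r: r["score"], reverse=True)

result = load(transform(extract(raw_records)))
print(result[0]["name"])  # Alice
```

Most ETL projects are variations on these three steps with different sources, rules, and destinations, which is exactly why they're a plausible target for pattern-learning systems.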
In theory, an ML system could be built that simply looks for patterns in open source code repos and then outputs highly optimized programs that can solve common problems. This could be particularly true for ETL problems, since designing those projects is about defining where your data comes from, what you want to do with it, and how you want to deliver the end product. With enough examples, it seems likely that an ML system could take the place of the army of programmers now doing that work.
However, there will still be room for some human programmers. At the top of that pyramid will be the people who build the ML systems: highly skilled, experienced professionals. They'll be the only ones who understand how to build and maintain such high-value systems, and their salaries will be high.
You can already see this dynamic in action by looking at the offerings from companies like Algorithmia. Their platform is designed as a marketplace that allows businesses to "borrow" algorithms (including ML algorithms) and use them for a fee, saving them the cost of hiring an expensive programmer to build a piece of functionality. Even though there are programmers making the algorithms sold on the platform, they're essentially replacing legions of other programmers by offering their work in this way.
The other side of the spectrum is not so pretty. For programmers who don't have much skill (or at least aren't skilled enough relative to the people building the ML systems), their time will most likely be spent customizing what the ML systems spit out. That work will not require massive amounts of brain power, which means it is unlikely to be highly paid.
This is all speculative, of course, but I think it's safe to say that programmers (outside of the elite-level coders) are in for a bumpy road ahead.
Lawyer: Probably Won't Be Automated
Here's where things start to favor humans again. Attorneys have a job that's difficult to gather data about and involves all kinds of ambiguity, because it revolves around understanding archaic, complex legal systems and around persuasion. ML software can do things like sort through court rulings to find specific data, automate certain paperwork-related processes, and rate lawyers based on their performance (using metrics like the number of guilty verdicts), but capturing the full value of an attorney will be hard.
Lawyers have to consult with clients, come up with a story based on incomplete or distorted evidence, and then persuade other human beings to believe that story. These are not easily defined, repetitive processes. Each case is going to present new challenges, and the data that can be collected for each one is not necessarily going to be helpful for future cases.
It is reasonable to assume that the repetitive grunt work done by paralegals will get automated away, but the attorneys themselves are going to be hard to replace. Their work is simply too ambiguous and too human-oriented.
Police Officer: Probably Won't Be Automated
Like lawyers, police officers have a job that's incredibly difficult to automate. This is because their work revolves around complex, ambiguous interactions between people. Couple that with an equally complex, ambiguous legal system which has to be followed and you'll quickly see how difficult it would be to build robot cops.
Consider how hard it would be to automate the most controversial (and yet central) component of the job: the use of force. Police officers are unique in that they have to make decisions, usually very quickly, about whether it's acceptable to pull out their guns and shoot other human beings. Their decisions are sometimes wrong (for a variety of reasons) and innocent people die.
Even though there are certainly cases where the officer is acting in bad faith, much of the time these wrongful shootings are due to the need to make split-second decisions about ambiguous situations. Each situation is unique, and it would be incredibly difficult to build a computer system (even an ML system) that could accurately detect all the nuances needed to make a "good" decision to shoot.
Officers have been, and will continue to be, augmented with technology (such as cameras on their cars and uniforms), but the core of their job will be difficult to automate for the foreseeable future.
Now that we've gone through some examples, let's recap by reviewing a list of features you should look for when evaluating a profession:
- Is there a way to gather a significant amount of data about what you do on a daily basis?
- If yes, is that data already being gathered?
- Are there clear patterns in your work?
- How ambiguous is your work?
If you really want to short-circuit this process, you could start by asking yourself, "Would automating this require AGI?" A job that requires artificial general intelligence (again, this is AI that is at the same level of intelligence as humans) is not a job that will be automated any time soon.
If you keep those questions in mind, you should be able to steer clear of careers that don't have much time left, or start exiting a doomed career you're already in.
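The checklist can even be sketched as a rough heuristic. The scoring weights and threshold below are my own illustrative assumptions, not anything from the examples; the point is only to show how the four questions combine:

```python
# A rough heuristic built from the four questions above.
# Weights and threshold are illustrative assumptions, not a real model.

def automation_risk(data_can_be_gathered, data_already_gathered,
                    work_is_predictable, work_is_ambiguous):
    # If there's no way to gather data about the job, automation
    # has nothing to learn from.
    if not data_can_be_gathered:
        return "low"
    score = 1
    if data_already_gathered:
        score += 1  # the clock is already ticking
    if work_is_predictable:
        score += 1  # repetitive, predictable tasks suit computers
    if work_is_ambiguous:
        score -= 2  # human-induced ambiguity resists automation
    return "high" if score >= 2 else "low"

# Trucking: heavily instrumented, data already gathered, fairly predictable.
print(automation_risk(True, True, True, False))   # high
# Lawyering: hard to gather data on, highly ambiguous.
print(automation_risk(False, False, False, True)) # low
```

Treat this as a mnemonic rather than a predictor; the real judgment calls (how ambiguous is "ambiguous"?) are exactly the parts a simple score can't capture.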