A common theme in science fiction is that of robots who are, or somehow become, intelligent enough to have opinions on the way humans are running things; invariably, the opinion is that they don’t much care for it, and they decide as a group to take action in the form of the violent overthrow of their human masters.
Happily (or unhappily, depending on your perspective), we are nowhere near this apocalyptic scenario. But we have brought robots of various kinds to a point where we can rely on them for a wide variety of tasks—in some cases, so much so that the robots just get in each other’s way.
What Not to Do: The Curious Case of Tesla Motors
Tesla Motors, the forward-thinking Silicon Valley electric car company run by serial entrepreneur Elon Musk, recently brought this fact into sharp relief. Musk himself admitted that automation issues slowed production of the mid-priced Model 3 sedan to well below the initial promise of 5,000 cars per week. The problem was applying “too much automation”—that is, too many factory robots—to the production process. He also noted that “humans are underrated,” his way of saying that more humans on the assembly line could have prevented many of the production issues.
It’s easy to see why Tesla would want to automate the Model 3’s production as much as possible: cost reduction. Tesla’s other cars, the Model S sedan and the Model X SUV, are expensive cars that few people can afford, so low production rates are not a major issue. To mass-produce a high-quality car at a price point that can actually gain Tesla some market share, they need to reduce their costs as much as possible, and one time-honored way to do that is with automation. Fewer humans in the process, the thinking goes, means fewer mistakes, faster production, and more consistent quality in the end product.
So what went wrong?
Process First, Then Automation
Musk has revealed few technical details on the Model 3 production issues, but we can make some guesses.
We don’t know who designed, built, or programmed the robots and other automation devices, but chances are good that none of them was fundamentally flawed. More likely, the problem was in getting different robots and systems to work together. Systems integration on this scale is an extraordinarily complex task, and failure is almost guaranteed if it is done under intense time pressure, without a chance for thorough testing. Musk hinted as much when he indicated that automating the Model 3 production line should have been done in phases, rather than all at once.
A better approach might have looked something like this:
- Establish a well-designed process first. This should be the easy part, because humans have been mass-producing cars for over 100 years, and we’ve gotten pretty good at it.
- Determine which parts of the process are most easily automated, and automate them first. The less integration (read: complexity) at this point, the better.
- Get the process running, find out where the kinks are, and get them worked out before applying any more automation. A repeated cycle of continuously evaluating and improving the process, and strategically applying automation where there is a clear advantage in doing so, stands a much higher chance of success than automating everything all at once. Above all, don’t automate a part of the process that you shouldn’t even be doing in the first place.
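The incremental cycle above can be sketched in code. This is a toy model, not anything Tesla has described: the step names and automation-complexity scores are invented for illustration, and “automating” a step here is just bookkeeping standing in for the real evaluate-and-improve loop.

```python
def phased_automation(steps):
    """Automate steps one at a time, easiest first.

    `steps` maps a step name to an (assumed) automation-complexity
    score, where lower means easier to automate. Returns the order
    in which steps would be automated.
    """
    manual = dict(steps)
    automated = []
    while manual:
        # Automate the least complex remaining step first.
        step = min(manual, key=manual.get)
        del manual[step]
        automated.append(step)
        # On a real line, you would run the process here and work
        # out the kinks before automating the next step.
    return automated

# Illustrative (made-up) steps and complexity scores:
line = {"stamping": 1, "welding": 2, "final assembly": 5, "painting": 3}
print(phased_automation(line))
# ['stamping', 'welding', 'painting', 'final assembly']
```

The point of the greedy ordering is simply that each round adds the least integration complexity possible, so problems surface one at a time instead of all at once.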
The Robots Aren’t the Problem
So the problem isn’t that robots aren’t up to the task of making cars. They are. But throwing a bunch of robots together and expecting everything to go smoothly out of the gate is a recipe for failure. You have to understand the process before you can automate it. That’s why you need humans in the mix, at least at first, because they understand processes better than any robot. And the humans who are actually doing the work—that is, the assemblers on the line—understand the process, and where it suffers from inefficiencies, redundancies, and gaps, better than anyone in the corporate offices.
Over time, robots will become even better at manufacturing things, and as standards for interoperability are developed it will become easier to get them to work together. But even then, they won’t be smart enough to tell you when you have a poorly designed process. Only humans can do that, and humans will have a monopoly on that ability for the foreseeable future. We will continue to be the masters of the robots, and not the other way around.