Consider the electric car. In the early 20th century, most cars on the road were electric, but gasoline-powered cars soon surpassed them in performance and range. Several attempts later in the century to commercialize electric cars failed to live up to expectations and were abandoned—no one wanted a slow car that took forever to charge and couldn’t go very far between charges. It wasn’t until recently that advances in batteries and other components made electric cars a reasonable alternative to gasoline-powered automobiles.
Artificial intelligence (AI) has gone through several boom-and-bust cycles over the years. The cycle varies in the details, but generally goes something like this:

1. A new idea or technique produces exciting results in the lab.
2. Hype builds, and funding pours in from governments and investors.
3. The technology fails to live up to the inflated expectations.
4. Funding dries up, and interest in the field collapses.
AI researchers have coined a term for the bust phase of this cycle: “AI winter.” During this period, few advances are made because the funding flow has dwindled to a trickle. But inevitably, someone comes up with a new idea, or computing hardware advances to the point where one of the old ideas becomes workable, and the cycle starts over again. This has happened several times since the idea of AI was first promoted back in the mid-20th century.
The last few years have seen some extraordinary success in AI research and commercialization. Most of these advances are in the area of machine learning, and in particular, the technique of deep learning. Companies large and small have great ideas and, perhaps more importantly, working prototypes that capture the attention of investors. Has AI finally reached the point where it can break out of the boom-and-bust cycle and see real, sustained success in the marketplace? Have we seen the last AI winter?
The answer, as with all questions about the future, is “it depends.” And what it depends on is the ability of the AI community to manage the message and not let the hype get too far ahead of the reality. We are at a point in the cycle where serious money is being thrown at AI, and the next couple of years will determine whether we head back to another AI winter.
The trouble is that even with the recent leaps in AI research, the technology is still quite limited in what it can do, compared with what our collective imagination believes it should be able to do—namely, to act like a human, only better. Amazon’s Alexa (and other similar services) is very good at what it does: understand human speech and carry out simple instructions. But Alexa doesn’t intuit, doesn’t plan, doesn’t make or “get” jokes, doesn’t deal well with ambiguity or incomplete information, and has no feelings. As a human-computer interface, Alexa is much closer to the old MS-DOS “C:\>” prompt than it is to C-3PO. As long as being human-like is the standard against which AI is measured, we may see cycles of AI renaissance and AI winter repeat themselves for many decades to come.
What can we do to prevent another AI winter? Here are some ideas:

- Manage the message: be honest about what today’s AI can and cannot do, and don’t let the hype get ahead of the reality.
- Stop holding AI to the standard of human-like intelligence; judge systems by how well they solve the specific problems they were built for.
- Focus commercialization on narrow, well-defined tasks where current techniques, such as deep learning, have already proven themselves.
A truly human-like AI is not in our near future. If society focuses on the problems AI can solve now, we might just leave AI winter behind us.