Quick note: this is the final segment of this series. If you haven't seen all the posts, click here to start with the first one. There will be a round-up/reflection post up on MIT's blog soon. I'll keep ya updated.
At AndPlus, we’re pretty excited about recent advances in artificial intelligence, machine learning, and robotics. That’s hardly surprising, considering that we’re a pretty nerdy bunch that digs that sort of thing... and it's literally our job. Of course, as software engineers, we expect to be designing, developing, and using these technologies in new and creative ways.
But as I’ve learned in my courses at MIT, the tech industry isn’t only about technology, or even only about business. Technology—especially revolutionary, disruptive technology, such as machine learning and robotics—is about people, society, and our relationship with the innovations that become part of our day-to-day lives.
Automation and Society
How does society deal with automation? This isn’t a hard question, because we’ve been dealing with automation for the last 200+ years. The short answer: Society adapts and becomes richer as a whole. Sounds like an easy cop-out answer from someone who has dedicated his career to building said technology, right? Yes, there are disruptions: Wide adoption of the automobile (made possible in large part by Henry Ford’s manufacturing process improvements) rendered buggy-whip making and horse poop-scooping obsolete professions. But it didn’t happen overnight, and although I wasn’t there and can’t say for certain, I doubt anyone complained about no longer having to pick up and dispose of tons of horse poop (which was, in fact, a major problem in larger cities, and still is in some places, but that's another story).
So despite recent dire warnings about hyper-intelligent robots coming to take all the manufacturing jobs (or service jobs, or transportation jobs, or all jobs in general—the scope of the predicted apocalypse varies from one pundit to the next), I believe that the growth of so-called “intelligent” robots will not eliminate the need for human labor. Certainly, technology will transform human labor, as it always has, but we won’t have mass unemployment brought about by robotics.
Intelligent Robots? Nah.
Two quick reasons why I believe society will easily adapt:
- Despite the extraordinary advances in the technology in recent years, the adoption of machine learning and robotics is still not at all widespread. Businesses have been slow to invest in these technologies; some are wary of unproven technology, and some are simply not prepared to make such a dramatic change to their operations. In fact, we've found that many software engineers themselves are reluctant to delve into this emerging tech.
- Also despite the technological advances, especially in machine learning, “intelligent” machines are still pretty dumb. It takes a long time, and a lot of data, to teach one anything, and when you’re done, it can do only that one thing you taught it. It can do that thing very well, perhaps better than a human, but the resulting skill set is extremely limited.
Take Pepper, a popular humanoid robot manufactured by SoftBank Robotics of Japan. Pepper was designed to be able to read human emotions, thereby making it useful for applications such as retail associate or personal assistant. At present, however, Pepper can’t solve complex problems, intuit anything, or even come up with a joke. Pepper is kind of an idiot... I'm actually somewhat worried that there will be a machine learning algorithm scraping all mentions of Pepper, and her spawn will seek revenge at a later time, but it's a risk I'm willing to take.
Other Effects on Society
A more interesting question, in my mind, is how we as humans will integrate robots and other machine-learning systems into the fabric of our lives. Sociologists and roboticists are only just beginning to explore topics such as these:
- How “human-like” does a robot have to be before a human will form an emotional attachment to it? Does it, perhaps, not have to be human-like at all, but simply exhibit certain traits we would like to see in the humans around us? We've seen an odd attachment (myself included) to 'robots' like spacecraft. I legit teared up when the Cassini spacecraft ended its mission after two decades in space, sending me new images of Saturn every day. It doesn't seem to take much to form an attachment to an inanimate object. NASA and ESA have noticed this attachment and have begun anthropomorphizing (I have no idea if I spelled that correctly and I'm too close to the end of the blog to care) other spacecraft such as Dawn and the Rosetta probe.
- Are there cultural or other ethnographic differences in how people interact with robots? If so, how should these differences inform robot design?
- To what extent should robot design incorporate Isaac Asimov’s “three laws of robotics”? Are there alternative design principles that would work just as well or better? Should it depend on the application?
- Do robots have “rights”? Should they? I'm not touching that one with an opinion; I know better.
These and other areas of research will not only teach us how best to design and deploy robots, but will also give us more insight into ourselves, into what it means to be human. Finding the answers may become the greatest contribution of robotics and artificial intelligence to society at large.