Robots are tightly connected to our society. The word "robot" itself was introduced in Karel Čapek's seminal play R.U.R. to raise societal issues, a topic revisited many times since in science-fiction books and movies. So far, the actual impact of robotics on our daily lives has been almost non-existent, except for workers in some big manufacturing companies such as car makers, where robotic production lines took over from humans. Even there, the consequences for the job market in Western countries were limited compared to the last two decades of outsourcing to Asia. But the times are changing.
Robots are entering every sector of our lives and economy. First came vacuum cleaners, pioneered by the iRobot Roomba (over 7 million units sold). Now, robotics companies and startups are exploring many other applications. There are plenty of competitors in telepresence robots. Mining robots are on the rise. Mobile robotics is finding its way into agriculture. Cheap and safe industrial robots (think Baxter from Rethink Robotics) are entering the SME world to perform manipulation tasks side by side with humans.
Today's robots perform only limited tasks, but they are gaining autonomy at a faster pace than most people imagine. Take the car. Up to a point, our dumb vehicles already act autonomously, with the Anti-lock Braking System (ABS), the Electronic Stability Program (ESP, aka ESC), cruise control, and airbags, to name a few. The self-driving car is just around the corner. Google, Audi, Volvo, General Motors, Volkswagen, and other automotive manufacturers are already testing driverless cars in real traffic conditions (see Video 1). Remember that in 2004, every competitor in the DARPA Grand Challenge failed simply to drive from one place to another across the desert.
Video 1: Google Self-Driving Car Test
Other areas of robotics are likely to evolve just as fast. This means that sooner or later, robots will out-compete humans in most activities, including intellectual ones. The consequences of this evolution for our society are discussed by Professor Vardi (Rice University) in his article "The Consequences of Machine Intelligence" and in his interview with Steven Cherry entitled "What will we do when machines do all the work?". According to Vardi, we, as a species, need to discuss where we are heading in our research on intelligent machines. This echoes what Bill Joy (co-founder of Sun Microsystems) said at the first Foresight Conference on Nanotechnology, held in 1989: "We can't simply do our science and not worry about these ethical issues". Bill Joy went much further, though, advocating that we should stop acquiring knowledge that might have undesirable outcomes.
These conclusions are shared by others who raise ethical questions about various technologies and sciences. This was exactly the topic of this year's Edge question: "What should we be worried about?". Here too, some responses pointed to robots as a potential threat to humanity.
Although many people seem to agree that robotics may have terrible consequences, we should first note that robots are just a tool. They are intrinsically neither good nor bad. Good or ill comes instead from how we use such a powerful technology. The question is really about our society and how we want to shape it.
Another observation is that it is almost impossible to decide whether a given technology, practice, or piece of knowledge is truly good or bad. The world is a complex web of relationships, which makes it impossible to foresee the consequences of our decisions. Remember: "It's tough to make predictions, especially about the future" [Yogi Berra]. Consider vaccines. They have saved the lives of countless people. Everyone agrees that they are a great invention and wants more vaccines for more diseases. But this medical progress contributes to there being more than seven billion humans on Earth, which brings new problems such as pollution and global warming.
But even if we could accurately state that exploring some scientific field would ruin our lives or even the whole planet, could we really stop everybody from studying it, or at least from using it? There will always be people who find some short-term benefit in doing so. One example is doping in sports. It is prohibited by strong international rules, and still some scientists and medical doctors explore new performance-enhancing drugs, and some athletes accept the potential risks and harm to their bodies for the sake of winning. Another example is nuclear weapons. Recent history has taught us the disastrous outcomes of this technology. Yet many countries, starting with the US and Russia, maintain stockpiles of nuclear bombs; they surely continue to develop new generations in secret, while other countries are acquiring the technology.
Does all this mean that the future is the end of the human species? Not necessarily. Even if we can't predict the times to come, we can still explore potential futures through science fiction. It tells us that the Terminator scenario is only one possibility. On the opposite side are Asimov's robots, which obey laws designed to serve not only individual humans but also humanity as a whole (the Zeroth Law). A robot programmed with the goal of helping humans could help us make better decisions for a better world.
One thing is sure: there will be change. Instead of trying to stop science and technology, we should embrace the change and prepare for it. Laws and incentives are one way of doing so, but they are not enough. Virtue and morality are of tremendous importance, as advocated by Barry Schwartz (see Video 2).
Video 2: Barry Schwartz TED Talk on “Our loss of wisdom”
Education is another strong lever. When people understand a technology, they are empowered to find the best ways to use it. The good news is that we are already on this path. The Internet is, in this regard, one of the most important tools we have at hand. There are plenty of sites and initiatives that provide high-quality content free of charge. Wikipedia is at the top of the list. Another great source is TED, which gives millions the chance to watch thoughtful talks such as the one below about "the danger of science denial".
Video 3: Michael Specter TED Talk on “The danger of science denial”