Philosophy for robots

Robotics is completely changing every aspect of life. But is there any limit to how smart machines can become?


From Deep Blue, the computer that defeated chess king Garry Kasparov, to unmanned drones that can strike ground targets with precision; from self-driving cars to cruise missiles, robotics is changing absolutely every aspect of life. But is there any limit to these smart machines?

Humans and collaborative robots (cobots) now work together as true partners: at the chessboard, in factories, and on battlefields. The most modern cobots can operate comfortably inside human workspaces.

Many cobots no longer have to operate under the constant supervision that their predecessors required.

How smart can a robot be?

'For years, when we put a robot in a factory, people at the management level would say: that machine does this or that,' says Erik Nieves, chief technology officer of Yaskawa Motoman Robotics, headquartered in Miamisburg, Ohio (USA). 'Now managers have to see robots differently. They no longer perform just one isolated job; they work as real members of the production line.'

Whether with one arm or two, six axes or seven, and priced from US$22,000 to several million, modern cobots are encroaching on every space of human activity. A good example is Baxter, a humanoid cobot from Rethink Robotics that became almost a star of the robot world after its launch in September 2012.

This cobot has two seven-axis arms attached to a torso, and its 'face' is an LCD screen that displays expressions in response to the person standing opposite. It weighs 75kg and can lift 2.3kg with each arm. Baxter has audio and visual sensors to detect people entering its workspace and to distinguish between people and objects.

Baxter, the star robot (right) - (Photo: newventurist.com)
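As a purely illustrative sketch of that presence-gating behavior, where a detected person is treated differently from a detected object, consider the following logic (all names and speed factors here are invented; this is not Rethink Robotics' actual software):

```python
from enum import Enum

class Detection(Enum):
    CLEAR = 0    # workspace is empty
    OBJECT = 1   # something entered, but it is not a person
    PERSON = 2   # a person has entered the workspace

def arm_speed(detection: Detection, normal_speed: float) -> float:
    """Gate the arm's speed on what the sensors report: an object merely
    slows the robot down, while a detected person stops it outright."""
    if detection is Detection.PERSON:
        return 0.0                   # full stop for people
    if detection is Detection.OBJECT:
        return normal_speed * 0.25   # cautious crawl around objects
    return normal_speed              # clear workspace: run normally

print(arm_speed(Detection.PERSON, 1.0))   # 0.0 - a person halts the arm
```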

'Baxter is ideal for repetitive, low-skill tasks that still require a certain human-like judgment,' says Mitch Rosenberg, Rethink Robotics' vice president of marketing and product management in Boston, Massachusetts. 'For example, inspecting a part to see whether it passes and, if it does, placing it in the good pile; if not, placing it in the defect pile.'

Another example is KUKA's lightweight robot LBR iiwa (intelligent industrial work assistant). Launched at the Hannover Messe trade fair in Germany in April 2013, this cobot was developed under an agreement between KUKA and the German Aerospace Center (DLR), with the initial goal of putting it to work in outer space. The current model has a seven-axis arm and weighs 23kg.

In a pilot program in 2009, the cobot helped Daimler AG produce more than 500,000 gearbox drives for its Mercedes-Benz brand at a factory in Stuttgart, Germany.

But the growing intelligence of cobots immediately raises the eternal question about building such machines: are they safe for humans? This is all the more pressing now that new generations of unsupervised cobots are designed as partners that work side by side with people, rather than under their control.

The latest safety standards addressing how humans and cobots share space are ANSI/RIA R15.06-2012 (safety requirements for industrial robots and robot systems), published by the Robotic Industries Association (RIA), and ISO 10218:2011 (safety requirements for industrial robots and robotic devices).

"These standards were designed with consideration first of all the cobs," said Pat Davison, RIA's director of standards in Ann Arbor, Michigan. "They make sure people and robots can do it." work together safely '.

Cobots today use various technologies to ensure operational safety. 'If they come into contact with a person, or with anything that produces an impact, they stop immediately,' says Edward Mullen, national sales manager for Universal Robots USA. KUKA's LBR iiwa works on the same principle.

Our "light robot has sensors in each of the seven shaft joints - KUKA's Michael Gerstenberger says - That means that at the end of each hand joint, cobot has a stop sensor. Even if the sensors are inactive or in the body, the control system is still designed to let them know that once a component fails, the entire system will stop working. '

Philosophy of robots

Before there were official rules on keeping robots safe for humans, the Three Laws of Robotics, better known as Asimov's laws, were considered the standard for how robots should behave. Although they first appeared in 'Runaround', a science-fiction short story by American author Isaac Asimov, the three laws shaped the ideal of building human-respecting robots until quite recently. Those laws, whose strict order of precedence is sketched in code after the list, are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
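The precedence is strict: each law yields to the ones above it. A toy sketch of that ordering (entirely hypothetical, purely for illustration):

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False             # would injure a human or leave one in danger
    disobeys_order: bool = False          # goes against a human order
    order_conflicts_law1: bool = False    # obeying the order would harm a human
    endangers_self: bool = False          # risks the robot's own existence
    required_by_higher_law: bool = False  # the self-risk is demanded by Law 1 or 2

def permitted(a: Action) -> bool:
    """Evaluate an action against the three laws, in strict priority order."""
    if a.harms_human:                                      # Law 1 overrides everything
        return False
    if a.disobeys_order and not a.order_conflicts_law1:    # Law 2 yields only to Law 1
        return False
    if a.endangers_self and not a.required_by_higher_law:  # Law 3 yields to both
        return False
    return True

# Refusing an order is permitted when obeying it would harm a human (Law 1 > Law 2).
print(permitted(Action(disobeys_order=True, order_conflicts_law1=True)))   # True
```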

In reality, however, many robots cannot comply with those laws, for many reasons. Some robots are simply too rudimentary to recognize that they are endangering a human and must stop. Indeed, even the most modern robots today cannot fully satisfy the three laws.

In a guest editorial titled 'Robot Ethics' for the renowned journal Science, author Robert J. Sawyer argued that Asimov's laws lost their meaning once the US military began fielding many kinds of robots, most notably unmanned drones, to carry out countless strikes in the Middle East.

Sawyer also argues that industries that rely on robots, such as car manufacturing or nuclear power, are driven by efficiency and give little weight to basic principles of human protection, especially when those principles carry only philosophical force.

Later, well-known science-fiction writers such as David Langford drafted new laws for robots, though these too built on Asimov's foundations.

In the July/August 2009 issue of IEEE Intelligent Systems, Robin Murphy, professor of computer science and engineering at Texas A&M University, and David D. Woods, director of the Robot Systems Laboratory in Ohio, proposed 'Three Laws of Responsible Robotics'. These are roughly the same as Asimov's laws, except that the first is replaced by: a robot must operate to the highest legal, professional, and ethical standards of safety.

In 2007, the South Korean government announced a 'Robot Ethics Charter' establishing further standards for both users and builders of robots. Park Hye Young of Korea's Ministry of Information and Communication acknowledged that the charter draws on Asimov's three basic laws.

Another concern, one raised long ago but increasingly visible today, is human privacy. In his 1968 novel The First Circle, Russian writer Aleksandr Solzhenitsyn described speech recognition technology being used to oppress people under a dictatorship.

If a computer program can understand natural human language, then in theory it could eavesdrop on every phone call and read every letter in the world, feeding the filtered results back to secret agents and censors. That is essentially what came to light at the US National Security Agency (NSA) in the Edward Snowden affair.
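As a toy illustration of the filtering pipeline this paragraph imagines (bearing no resemblance to any real surveillance system; the watch terms are invented), the core step is just matching transcripts against a watch list and forwarding the hits:

```python
# Hypothetical watch terms; purely for illustration.
WATCH_TERMS = {"blueprint", "shipment", "rendezvous"}

def flag_for_review(transcripts: list[str]) -> list[str]:
    """Return only the transcripts that mention at least one watch term."""
    flagged = []
    for text in transcripts:
        words = set(text.lower().split())
        if words & WATCH_TERMS:   # any overlap with the watch list
            flagged.append(text)
    return flagged

calls = [
    "see you at lunch tomorrow",
    "the shipment reaches the rendezvous point tonight",
]
print(flag_for_review(calls))   # only the second call is forwarded
```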

Meanwhile, Joseph Weizenbaum, a German-American computer scientist, argued in 1976 that modern technology should not replace people in positions that require respect and empathy: customer care, physiotherapy, psychotherapy, nursing care for the elderly, judges, police officers.

Yet so far, with the possible exception of judges and police officers, many of the roles that Weizenbaum once ruled out for robots have become reality.

So the question raised by sci-fi films like I, Robot or Surrogates, whether a future in which robots control humans will one day befall mankind, is perhaps no longer such a far-fetched one.