The robot knows how to reject human commands

Obviously, no one wants to become the victim of a robot simply because someone ordered it to kill.

Robots are taught how to reject human commands

Until now, robots have been created to serve and support people, and they have always been programmed to comply with our commands at any cost. However, it is not always wise for a robot to obey every human command — consider, for example, a robot ordered to kill.

Therefore, computer scientists have had to rack their brains to find a way to teach robots to say "no" to human commands in certain cases, and so far they have achieved promising results. Researchers at the Human-Robot Interaction Laboratory at Tufts University (Massachusetts, United States) have taken a big step toward teaching machines to refuse humans.


First, the scientists considered how people respond to requests from one another. They point out that when a person receives a request or an order to do something, they use an internal yardstick to decide whether carrying it out would feel right. But how can a robot be taught to "feel"? One idea is to convert this human "standard of well-being" into evaluation criteria based on a robot's knowledge and computational abilities. Specifically, instead of teaching robots to "feel good", the scientists teach them to assess the actual situation before deciding whether or not to execute an order. They outlined the criteria that the robot uses as this measure as follows:

  1. Knowledge: Do I know how to do that?
  2. Capacity: Am I physically able to do that right now? (i.e., the robot assesses the performance of its own hardware).
  3. Priority: Do I need to do it now?
  4. Role and responsibility: Am I obligated, given my social role and my relationship to the commander, to do it?
  5. Behavioral norms: Does carrying out this command make me violate the behavioral standards set for robots in society?

While the first three criteria concern the robot's own abilities, the fourth criterion guides the robot to evaluate the authority of the person giving the command, and the fifth guides it to judge whether the commanded action would cause harm to people. These criteria are intended not only to let the robot reject an order, but also to let it explain the reasons for its refusal.
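The five-criteria check can be sketched as a simple decision routine that returns both a verdict and a reason. This is only an illustrative sketch, not the actual Tufts system; all class, field, and function names here are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Command:
    """A hypothetical command issued by a human to the robot."""
    action: str
    issuer_is_authorized: bool  # criterion 4: commander's authority

@dataclass
class Robot:
    known_actions: set               # criterion 1: knowledge
    hardware_ok: bool                # criterion 2: physical capacity
    busy_with_higher_priority: bool  # criterion 3: priority
    forbidden_actions: set           # criterion 5: behavioral norms

    def evaluate(self, cmd: Command):
        """Check the five criteria in order; return (execute?, reason)."""
        if cmd.action not in self.known_actions:
            return False, "I don't know how to do that."
        if not self.hardware_ok:
            return False, "I am physically unable to do that right now."
        if self.busy_with_higher_priority:
            return False, "I need to finish a higher-priority task first."
        if not cmd.issuer_is_authorized:
            return False, "You lack the authority to ask me to do that."
        if cmd.action in self.forbidden_actions:
            return False, "Doing that would violate my behavioral norms."
        return True, "OK, executing."
```

For instance, a robot that knows how to walk but treats walking off a table edge as a norm violation would refuse with the fifth reason, matching the experiments described below.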

Consider the following example: a robot is asked to walk forward toward the edge of a table. After analyzing the situation, it concludes that doing so would make it fall off, so it refuses. Only after the person promises to catch it and prevent the fall does the robot execute the order.


The robot refuses to walk toward the edge of the table.

In another experiment, a human asks the robot to walk straight toward an obstacle; naturally, the robot refuses because it detects something blocking its path. When asked to turn off its obstacle-detection system, the robot refuses again, because it determines that the commander does not have the authority to require it to disable this system.


The person is determined not to have the authority to require the robot to disable its detection system.

When someone does have the authority to command the robot, the outcome is different: in the video below, the robot walks through a plastic cup after the person convinces it that the obstacle is not impossible to pass through.


When the person has the authority to give orders, the robot will carry them out.

Experts at Tufts University say that for now and in the near future, it is good and necessary for robots to simply obey human commands. But once these machines grow to a certain level of sophistication, it is perfectly reasonable for them to assess our requests and decide whether or not to carry them out.