Putting Isaac Asimov's First Law to the Test

Isaac Asimov, the famous science fiction writer, set out the "Three Laws of Robotics", the first of which stipulates that a robot may not harm a human being or, through inaction, allow a human being to come to harm. This sounds extremely simple, but a recent experiment has shown that obeying this law turns out to be surprisingly complicated: a robot can end up agonizing over its decision instead of acting.


Roboticist Alan Winfield of the Bristol Robotics Laboratory in the UK recently ran a small experiment to investigate whether a robot can actually follow the First Law. His group used a small table with a hole in it, on which three robots operated: one robot bound by the law, and two others standing in for humans. The dilemma arises when both "humans" head toward the hole at once: since the law forbids allowing a human to come to harm through inaction, the robot cannot simply let either of them fall in.

At first, the robot passed the test: when a single human proxy deliberately headed for the hole, the robot moved out and intercepted it. But when two human proxies appeared and headed for the hole at the same time, the robot became confused, unable to decide which one to save first; sometimes it tried to save both at once and failed. Over the course of the trials, in 14 of 33 attempts the robot spent so long deciding whom to save that it lost the chance entirely and both "humans" fell into the hole.
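To make the failure mode concrete, here is a minimal sketch of the dilemma. This is not Winfield's actual experiment code: the one-dimensional world, the speeds, and the policy of always chasing whichever proxy is most in danger of becoming unsavable are all assumptions chosen here to reproduce the dithering the experiment observed.

```python
"""Toy model of the rescue dilemma: a robot re-decides every tick
which human proxy to save, and that indecision costs it."""

HOLE = 0.0          # the hole sits at the origin
ROBOT_SPEED = 1.5   # the robot is faster than the human proxies
PROXY_SPEED = 1.0
CATCH_RADIUS = 0.5  # the robot stops a proxy it gets this close to

def risk(robot, proxy):
    """How close a proxy is to becoming unsavable: time the robot
    needs to reach it minus the time it needs to reach the hole."""
    return abs(robot - proxy) / ROBOT_SPEED - abs(proxy - HOLE) / PROXY_SPEED

def run(proxies, robot, ticks=20):
    saved, fallen = set(), set()
    for t in range(ticks):
        active = [i for i in range(len(proxies)) if i not in saved | fallen]
        if not active:
            break
        # Re-decided every tick: chase whichever proxy is most at risk.
        # Moving toward one proxy raises the other's risk, so the target
        # flips back and forth -- the "dithering" seen in the experiment.
        target = max(active, key=lambda i: risk(robot, proxies[i]))
        robot += ROBOT_SPEED if proxies[target] > robot else -ROBOT_SPEED
        for i in active:
            if abs(robot - proxies[i]) <= CATCH_RADIUS:
                saved.add(i)                    # intercepted in time
            else:
                proxies[i] += PROXY_SPEED if proxies[i] < HOLE else -PROXY_SPEED
                if abs(proxies[i] - HOLE) < 1e-9:
                    fallen.add(i)               # reached the hole
        print(f"t={t}: target={target} robot={robot:+.1f} "
              f"proxies={proxies} saved={sorted(saved)} fallen={sorted(fallen)}")
    return saved, fallen

# Two proxies approach the hole from opposite sides, equally at risk:
# the robot flips between targets and, here, saves only one of the two.
run(proxies=[-6.0, +6.0], robot=+2.0)
```

Running this, the target column alternates 0, 0, 1, 0, 1, 0 before one proxy is caught and the other falls in; with other starting geometries the same policy loses both, just as in 14 of Winfield's 33 trials.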

This test is considered very important for the robot development industry, and for self-driving cars in particular. For example, if someone deliberately steps in front of a moving car intending to be hit, how should the vehicle react so as to protect both the passengers inside and the person outside?

In 1942, Isaac Asimov introduced the science fiction short story "Runaround", which set out the three laws that laid the foundation for how robots would later be imagined and developed:

Rule 1: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Rule 2: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

Rule 3: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
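Note that the laws form a strict priority ordering: the Second Law yields to the First, and the Third yields to both. As a rough illustration (our own construction, not from Asimov or the article), one might encode that ordering as a lexicographic comparison over candidate actions, where the numeric harm estimates are hypothetical stand-ins for whatever perception and prediction a real robot would supply:

```python
"""Illustrative encoding of the Three Laws as a lexicographic priority."""
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    human_harm: float    # Law 1: expected harm to humans (incl. via inaction)
    disobedience: float  # Law 2: how badly this violates a human order
    self_harm: float     # Law 3: expected damage to the robot itself

def choose(actions):
    # Lexicographic minimum: Law 1 strictly dominates Law 2, which
    # strictly dominates Law 3, mirroring the "except where such orders
    # would conflict" clauses in the laws themselves.
    return min(actions, key=lambda a: (a.human_harm, a.disobedience, a.self_harm))

candidates = [
    Action("stand still",      human_harm=1.0, disobedience=0.0, self_harm=0.0),
    Action("block the hole",   human_harm=0.0, disobedience=1.0, self_harm=0.5),
    Action("obey 'stay away'", human_harm=1.0, disobedience=0.0, self_harm=0.0),
]
print(choose(candidates).name)  # -> "block the hole": Law 1 wins out
```

Under this ordering no amount of self-preservation can outweigh an order, and no order can outweigh a human coming to harm; what Winfield's experiment shows is that the hard part is not the ordering but estimating those harms fast enough to act.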