Thousands of scientists sign a pledge not to develop AI killer robots
The co-founder of Google DeepMind and the CEO of SpaceX are just two of the more than 2,400 individuals who have signed a pledge to prevent the emergence of lethal autonomous weapons.
According to The Guardian, thousands of scientists have announced that they will not take part in developing or manufacturing robots that can identify and attack people without human oversight.
Google DeepMind co-founder Demis Hassabis and SpaceX CEO Elon Musk are among more than 2,400 individuals who have signed a pledge aimed at deterring military firms and nations from building lethal autonomous weapons systems, known as LAWS.
The signatories also pledge to "neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons".
This is the latest move by scientists and organizations concerned about AI to highlight the dangers of handing life-and-death decisions to advanced AI machines. The pledge against LAWS follows earlier calls for a ban on technology that scientists believe could lead to a new generation of weapons of mass destruction.
Coordinated by the Future of Life Institute, a Boston-based organization, the pledge calls on governments to agree on norms, laws, and regulations that stigmatize and effectively outlaw the development of killer robots. More than 150 AI companies and organizations have added their names to the pledge, which was announced at the International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm, Sweden.
Yoshua Bengio, an AI pioneer at the Montreal Institute for Learning Algorithms (MILA), said that if the pledge could shame the companies and military organizations building autonomous weapons, public opinion could swing against them. "This approach actually worked in the past to stop landmines, thanks to international treaties and public shaming, even though major countries like the US did not sign the treaty banning landmines. American companies have stopped building them," he said. Bengio signed the pledge to express his "deep concern about lethal autonomous weapons".
The military is one of the largest funders and adopters of AI technology, which it seeks to apply to many different purposes. With advanced computer systems, robots can fly missions over hostile terrain, navigate on the ground, and explore the deep sea. More sophisticated weapons systems are currently in development. On Monday, UK defence secretary Gavin Williamson unveiled a £2 billion plan for a new-generation RAF fighter, the Tempest, which will be able to fly without a pilot.
British ministers have stressed that the UK is not developing lethal autonomous weapons systems and that its forces will always retain oversight and control of the weapons they deploy. But scientists warn that rapid advances in AI and other fields mean it is now feasible to build sophisticated weapons that can identify, track, and fire on human targets without an operator's approval. For many researchers, handing machines the decision over who lives and who dies crosses a moral line.
"We need to turn it into international rules, that automation weapons are unacceptable. A human being must always be in a supervisory position" - Toby Walsh, professor of AI at New University South Wales, Sydney, signed a commitment, said.
"We can't stop a person determined to develop automated weapons, just as we can't stop them from developing chemical weapons" - he added - "But if we don't want nations rogue or terrorist forces that are easily accessible to automated weapons, we must ensure they are not sold widely by weapons companies ".
Scientists can choose not to work on autonomous weapons programs, but they cannot control what other individuals and organizations do with their published breakthroughs. Lucy Suchman, another signatory and a professor of anthropology of science and technology at Lancaster University, said that even though researchers cannot fully control how their work is used, they can engage and intervene when needed.
"If I were a machine vision researcher who signed a commitment, I would first promise to monitor how my technology is being applied, and to object to those applications if they relate to "Identify automatic goals, and second, refuse to participate in advice and directly help integrate those technologies into an automated weapon system" - she said.