If these principles are properly implemented, robots will not be able to overthrow humans

The authors of science fiction novels and films have repeatedly asked the question: will robots one day become so intelligent that they could overthrow humans? A group of the world's leading artificial intelligence (AI) experts have worked together to ensure that such an overthrow never happens.

They drafted 20 principles to guide future AI research. These guidelines have been endorsed by hundreds of experts, including Stephen Hawking and SpaceX CEO Elon Musk.

"We hope that these principles will provide material for vigorous discussion, and also serve as ambitious goals for how AI can be used to improve people's lives in the coming years," one of the scientists said.

The experts involved in drafting the principles come from many different backgrounds, from academics and engineers to representatives of technology companies, such as Google co-founder Larry Page.

The principles have been published on the Future of Life Institute's website.

Here are the 20 guidelines to follow when developing AI:

Research issues

AI can become extremely dangerous. (Photo: Columbia Pictures).

  1. Research goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.
  2. Research funding: Investments in AI should be accompanied by funding to ensure its beneficial use, answering questions such as:
    1. How can we make AI systems robust, so that they do what we want without malfunctioning?
    2. How can we prevent the risks that AI may pose?
    3. What values should AI systems be aligned with?
  3. Science-policy link: There should be constructive and healthy exchange between AI researchers and policy-makers.
  4. Research culture: A culture of cooperation, trust, and transparency should be fostered among AI researchers and developers.
  5. Race avoidance: Teams developing AI should cooperate in a healthy way and avoid cutting corners on safety standards.

Ethics and values

  1. Safety: AI systems should be safe throughout their operation, and should remain so when conditions change.
  2. Failure transparency: If an AI system causes harm, it should be possible to ascertain why.
  3. Judicial transparency: Any involvement of an autonomous system in judicial decision-making should come with a clear explanation auditable by a competent human authority.
  4. Responsibility: Those who design and build advanced AI systems are responsible for the systems they create.
  5. Value alignment: AI systems should be designed so that their goals and behaviors align with human values.
  6. Human values: AI systems should be designed and operated to be compatible with human ideals of dignity, rights, freedom, and cultural diversity.
  7. Personal privacy: Everyone should have the right to access, manage, and control the data they generate, given AI systems' ability to analyze and use that data.
  8. Liberty: The application of AI must not unreasonably curtail people's liberty.
  9. Shared benefit: AI technologies should benefit and empower as many people as possible.
  10. Shared prosperity: The economic prosperity created by AI should be shared broadly, benefiting all of humanity.
  11. Human control: Humans should choose how and whether to delegate decisions to AI systems, so that the systems accomplish human-chosen objectives.
  12. Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, social and civic processes.
  13. AI arms race: An arms race in lethal autonomous weapons should be avoided.

AI systems should be safe throughout their operation.

Long-term issues

  1. Risks: The risks that AI systems may pose should be subject to careful planning and mitigation.
  2. Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with care and resources commensurate with that impact.