Google's AI designed its own encryption that humans could not understand

AI can already generate images, make drawings, and beat people at chess; now Google's artificial intelligence can also build an encryption mechanism without any human intervention. The AI was never taught encryption algorithms, yet using machine learning it developed its own security mechanism. Its scheme is still quite basic compared to human-designed cryptography, but Google calls this a big step forward for AI technology.

In the experiment, two Google researchers, Martín Abadi and David G. Andersen, created three artificial neural networks named Alice, Bob and Eve. The experiment was set up so that Alice sent Bob a message and encrypted its contents, while Eve tried to decode it. Round after round, the networks exchanged messages and refined their own methods for protecting the contents. To keep the contents confidential, Alice transformed the text into an obscure ciphertext that no one, including Eve, could understand. Only Bob could recover what Alice sent, because Alice and Bob shared a pre-defined secret key to which Eve had no access.
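The transformation Alice actually learned is opaque, but the shape of the setup can be illustrated with a toy stand-in. The sketch below uses XOR with a shared key purely for illustration; the real networks learned their own, unknown scheme, and only the 16-bit message length comes from the experiment — everything else is an assumption.

```python
import random

BITS = 16  # message length used in the experiment

def random_bits(n):
    return [random.randint(0, 1) for _ in range(n)]

def xor(a, b):
    # Stand-in for Alice's learned transformation: combine bits with a key.
    # The actual networks learned something far less interpretable.
    return [x ^ y for x, y in zip(a, b)]

# Alice and Bob share a key; Eve does not.
key = random_bits(BITS)
message = random_bits(BITS)

ciphertext = xor(message, key)    # Alice encrypts
bob_guess = xor(ciphertext, key)  # Bob decrypts with the shared key
eve_guess = random_bits(BITS)     # Eve, without the key, can only guess

print(bob_guess == message)  # True: Bob recovers the message exactly
```

The asymmetry is the point: Bob succeeds only because he holds the key, while Eve faces the ciphertext alone.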

Google's AI can set up its own encryption mechanism.

At first, the neural networks were quite poor at securing the messages. But over many rounds of practice, the researchers found that Alice gradually developed her own encryption strategy and Bob formed an effective way to decode it. After repeating the experiment about 15,000 times, Bob could turn all of Alice's obscure ciphertexts back into meaningful text. Eve also "learned" to decode and could guess 8 of the 16 bits that make up each message; however, since each bit has only two values, 0 or 1, that success rate is no better than random guessing.

The team says they still do not understand exactly how the encryption technique works. Machine learning can produce solutions, but it is not easy to understand how it arrives at them. As a result, there is no way to guarantee the security of an encryption algorithm created by AI, so the practical applications of this technology remain quite limited. Joe Sturonas, encryption director at PKWARE, said: "Computers with neural networks have only emerged in recent years, and we're only at the beginning."