AI lab backed by Elon Musk did not dare to release its new text-generation software because it was too dangerous

Researchers at the non-profit research organization OpenAI set out only to train their new text-generation software to predict the next word in a sentence.

However, the software exceeded all their expectations: it became so good at imitating human writing that the researchers decided to halt the project and evaluate the damage it could cause if released to the public.

Elon Musk has always made it clear that he believes artificial intelligence is "the greatest threat to human survival". Musk is one of the main sponsors of OpenAI, and although he now plays only a limited role there, the organization's researchers appear to share the billionaire's concerns about what they might unleash by opening Pandora's box. Last week, OpenAI shared a paper on its latest text-generation research, but this time it did not publish the full work as usual, fearing it could be abused by people with malicious intent. Rather than launching the fully trained model, OpenAI will release a smaller model for researchers who want to experiment on their own.

The researchers used 40 GB of data taken from 8 million web pages to train the GPT-2 software, ten times the amount of data used to train the first-generation GPT. The dataset was collected by crawling the social network Reddit and selecting outbound links that had received more than 3 upvotes. When the training process was complete, they discovered that the software could take a piece of text and continue it convincingly to any required length. It has some trouble with "highly technical or esoteric types of content" that few people write about, but for everyday content, roughly half of what it produces reads as quite plausible.
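
To make "continue a piece of text" concrete, here is a minimal sketch of how one might sample a continuation from the smaller released model. It assumes the Hugging Face transformers library and its "gpt2" checkpoint name, neither of which is mentioned in the article:

```python
# Illustrative sketch only: load the small released GPT-2 model and
# sample a continuation of a prompt. The library and checkpoint names
# are assumptions; the article does not name any tooling.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Scientists have discovered a new species of"
inputs = tokenizer(prompt, return_tensors="pt")

# Top-k sampling keeps the continuation coherent while still leaving
# room for the model's "creativity".
outputs = model.generate(
    **inputs,
    max_length=120,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```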

For example, the software was given the following text:

"In a shocking finding, scientist discovered a unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprisingly was the fact that the English unicorns spoke perfect". (In a shocking finding, scientists have found a herd of unicorns living in a remote valley, unseen before, in the Andes Mountains. More surprisingly, researchers are flooding. Unicorns speak English perfectly).

Based on these two sentences, GPT-2 was able to continue this strange story for nine more paragraphs, written in a way that an unsuspecting reader could easily believe came from a human. Here are some of the paragraphs written by the machine:

"The scientist named the population, after their distinctive horn, Ovid's Unicorn. These four-horned, silver-white unicorns were previously unknown to science.

Hiện thời, sau cả hai lần thời gian, người bị gặp lỗi này không có sẵn không thể xác thực là Finally solved.

Dr. Jorge Pérez, an biologist evolutionary from the University of La Paz, and several companions, were explored Andes Mountains when they found a small valley, with no other animals or humans. Không tìm thấy sự cảnh báo Valley đã có có thể là một cơ sở Fountain, chứa nhỏ ở bên trong ảnh và màu ảnh rock ".

(Scientists have named this herd, according to their characteristic horn, Ovid's Unicorn . These silver-white, four-horn unicorns had never been known to science before.

Currently, after nearly 2 centuries, the mystery created this strange phenomenon has finally been answered.

Dr. Jorge Perez, an evolutionary biologist from the University of La Paz and many colleagues at the time were exploring the Andes Range, found a small valley, no animals or people. Perez noticed that the valley had a natural spring, surrounded by two rock peaks and silver snow.

GPT-2 performs remarkably well even when assigned tasks it was not designed to perform.

GPT-2 is particularly good at mimicking the style and content of the text it is given. The Guardian had a chance to try the software and gave it the opening line of George Orwell's Nineteen Eighty-Four: "It was a bright cold day in April, and the clocks were striking thirteen." The program quickly picked up on the tone of the sentence and produced a science-fiction story of its own:

"I was in my car on my way to a new job in Seattle. I put the gas in, put the key in, and then I let it run. I just imagined what the day would be like. A hundred years from now. In 2045, I was a teacher in some schools in a poor part of rural China. I started with Chinese history and history of science " . (I was in the car to the new place to work in Seattle. I hit the gas, plugged the key, and left the car running. I imagine what a working day would be 100 years from now. In 2045, I am a teacher. At some schools in remote, impoverished areas of China, I teach Chinese history and scientific history.

OpenAI researchers discovered that GPT-2 performs remarkably well on tasks it was not designed to perform, such as translation and summarization. In their report, the researchers write that they simply had to prompt the trained model in the right way for it to perform these tasks at a level comparable to specialized models. After reading a short story about an Olympic race, the software was able to correctly answer simple questions such as "What was the length of the race?" and "Where did the race begin?"
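
The "prompting" approach the report describes can be sketched in the same way: append a question to a passage and let the model continue with an answer. The passage and question below are illustrative, and the library and model names are again assumptions rather than anything the article specifies:

```python
# Illustrative sketch of zero-shot question answering: the model is not
# trained for QA, but continuing a "Q: ... A:" prompt often yields an answer.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

story = (
    "The Olympic race began inside the stadium at noon. "
    "Runners covered 400 metres before crossing the finish line."
)
prompt = story + "\nQ: What was the length of the race?\nA:"

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=10,               # answers to such questions are short
    do_sample=False,                 # greedy decoding for a factual reply
    pad_token_id=tokenizer.eos_token_id,
)
# Strip the prompt tokens; what remains is the model's answer.
answer = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:])
print(answer.strip())
```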

The remarkable results described above alarmed the researchers. They are concerned that the technology could be used to write fake news. The Guardian published a fake news item written by the software alongside its coverage of this research. The item is entirely readable and contains fabricated quotes that fit the topic and sound authentic. Its grammar is better than that of many fake news stories you may have seen. And according to Guardian journalist Alex Hern, the software took only 15 seconds to write it.

Other concerns raised by the researchers include the software being abused to automate phishing emails, impersonate users online, and generate harassing content on its own. But they also believe the software has many applications that could benefit people: for example, it could be a powerful tool for developing better speech-recognition software or customer-service bots.

OpenAI intends to discuss its release strategy with the research community, and hopes ethical standards will emerge to guide this type of research in the future. The organization says it will discuss the issue more publicly over the next six months.