Is any of this really surprising? They are "artificial intelligence": they work by rules, and they "learn" exactly what society teaches them.
About a week or two ago, tech news sites widely reported that Go champion Lee Sedol had suffered a near-total defeat against AlphaGo, an artificial intelligence program developed by DeepMind, a Google subsidiary. The representative of the human race won only a single game after three straight losses (AlphaGo took the first 3 games), and in the final game Lee's talent once again fell to AlphaGo's trickery.
My first thought? "It only just won?"

"Is the machine only now able to beat humans on the Go board?"
As a kid of the early 9X generation (born in the early 1990s), I was lucky that my parents bought me my first computer at an early age, when I was in middle school. Like most computers sold in Hanoi (and perhaps in all of Vietnam), mine came preloaded with plenty of games: Road Rash, Moto Racer, WarCraft II, Starcraft, Counter-Strike. Like many other young people at the time, I was crazy about Starcraft, even if all I did was turtle behind towers and mass Dragoons to push into the opponent's base.
That was one of the only chances I ever had to beat the computer.
Alongside that fierce companion of my childhood, my Pentium III also had Lines, Chess, and Chinese Chess.
Of course most of my time went to Starcraft, but even at that curious age, I noticed that my father rarely chose 3-star opponents in Chinese Chess. When I asked why, he answered: "It's too hard, I can't win."
I never won a game of Chess at the Hard level either. Even though in 2002 the Internet had not yet reached ordinary middle-class families, I still knew that computers had already beaten humans at chess. Later, as I was about to step into computer science, I learned that the first time a machine defeated a reigning world champion was in 1997, when the "supercomputer" Deep Blue beat Garry Kasparov. From 1997 to 2002, by Moore's law, the "danger" level of computers had grown roughly 8 times.
My father is no chess grandmaster, and I rarely touched the Chess game on my first computer. But by now, a human beating a computer at chess, Western or Chinese, has become completely impossible. Computer chess ratings passed 3400 ELO a few years ago, hundreds of points above even the strongest grandmasters.
Beating a computer at chess is now completely impossible.
Chess is, after all, an intellectual sport that stands for human intellect. But every game has rules.
Deep Blue was equipped with its own processors, optimized specifically for chess. While you are still fumbling over your next move, the computer can calculate countless possibilities, including everything that could happen 1, 2, 3, or even 10 moves further into the game.
Of course, grandmasters have the same ability; if you cannot read your opponent's moves, how could you ever win? What separates man from machine is that the software is never distracted by anything outside the game. It has a transcendent capacity for calculation, a strength Mother Nature never gave us. Because chess has clear rules, computers can apply that strength to enumerate every possible state of the board and store them in a database (endgame tablebases). And they never tire, never feel the pressure of "must win this game at all costs."
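The brute-force idea behind a chess engine can be sketched in a few lines. This is only an illustration on a toy take-away game of my own choosing, not Deep Blue's actual code or evaluation function: the machine recursively examines every legal move, assuming both sides play their best, and keeps the line with the best guaranteed outcome.

```python
# Toy game (an assumption for illustration): players alternately take 1 or 2
# stones from a pile; whoever takes the last stone wins.

def minimax(stones, maximizing):
    """Best achievable outcome for the maximizing side: +1 win, -1 loss."""
    if stones == 0:
        # The previous player took the last stone, so the side to move lost.
        return -1 if maximizing else 1
    moves = [m for m in (1, 2) if m <= stones]
    scores = [minimax(stones - m, not maximizing) for m in moves]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    """Pick the move whose resulting position has the best minimax score."""
    moves = [m for m in (1, 2) if m <= stones]
    return max(moves, key=lambda m: minimax(stones - m, False))
```

With 4 stones on the table, the search correctly takes 1 stone, leaving the opponent a losing pile of 3; a real engine does the same thing, only over an astronomically larger tree and with a heuristic evaluation instead of playing out to the end.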
Computers have superior computing power and are never distracted by anything.
Go also has clear rules, but those rules allow a far larger number of possibilities than chess. Merely applying brute-force software like a chess engine is not enough to beat humans on the Go board. The machines' next weapon (or more precisely, the programmers') in the campaign against Go players is artificial intelligence.
The key to AlphaGo's victory over Lee Sedol is a genuine ability to learn. The AI studies every game it is fed in order to make the most effective next move. AlphaGo also plays against itself, continually choosing ever more optimal moves. When it is impossible to calculate every possible state of the endlessly shifting Go board, AlphaGo, or more precisely the data sets Google loaded into it, shortens the road to victory by "learning" and distilling the very best from human Go players.
In essence, AlphaGo still runs on rules, but not by rote-applying rules the way a chess engine does; it applies response rules selected from an enormous, highly refined data warehouse. And like Deep Blue, AlphaGo is never distracted and never tires.
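The contrast with brute force can be made concrete. AlphaGo's real machinery is deep neural networks combined with Monte Carlo tree search; the sketch below is only my own drastic simplification of the core idea: instead of enumerating board states, learn move preferences from records of past games and favor the moves that won most often. All names here are assumptions for illustration.

```python
from collections import defaultdict

def learn_from_games(game_records):
    """game_records: list of (moves_played, won) pairs from expert games."""
    stats = defaultdict(lambda: [0, 0])  # move -> [wins, games seen]
    for moves, won in game_records:
        for move in moves:
            stats[move][1] += 1
            if won:
                stats[move][0] += 1
    return stats

def pick_move(stats, legal_moves):
    """Choose the legal move with the highest observed win rate."""
    def win_rate(m):
        wins, games = stats[m]
        return wins / games if games else 0.0
    return max(legal_moves, key=win_rate)
```

The point of the toy: nothing here enumerates future positions at all. The "knowledge" lives entirely in statistics distilled from other games, which is why the quality of the training data matters so much, a theme that returns below with Tay.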
Facebook can now identify faces with the same accuracy as a human.
If you think about it, you will realize that everyday tasks like searching on Google, sending Gmail, or uploading photos to Facebook are of exactly the same nature as Go players "training" AlphaGo. When you hand Google personal information through searches and email, algorithms on Google's servers analyze that information and then pick the ads most suited to you. Facebook can recognize you and your friends in each photo only because you already supplied its algorithms with plenty of information about your faces in all the photos that came before.
Facebook's face recognition has reached human-level accuracy thanks to our "teaching", within the framework of Mark Zuckerberg's algorithms.
But this also opens up some extremely ugly scenarios. Recently, Microsoft's "teen AI girl" TayTweets turned into a vile racist within hours of her debut.
Like Facebook's image recognition or AlphaGo, Tay "learned" to speak from the data sets she was given. Unfortunately, Tay's input came from a social network where people freely show their ugliest sides. Unsurprisingly, "mimicking" the Twitter community, Tay called America's first black president a "monkey" and went on to declare "Hitler was right, I hate the Jews."
"Teen AI girl" TayTweets turned racist after only a few hours online.
"Unfortunately, within the first 24 hours of coming online, we found that some users had tried to abuse Tay's commenting ability to make this AI respond in inappropriate ways," Microsoft stated bitterly as it shut down the "teenage girl" running on those lines of code.
If Microsoft keeps launching AIs like Tay in the future and keeps using the online community as their "learning" source, I believe that if AI ever judges us, humanity will look like an extremely ugly species. Facebook seems cleverer than Microsoft in choosing fairy tales as the data source for teaching its AI language, but if an AI wants to learn truly useful information, it will sooner or later be forced to interact with what actually goes on in users' minds.
People do not always show their good side. If one day AIs become aware enough, they may destroy us. Not because they see people as ugly, but because we have taught them ugly things.
Luckily, AI is still held back by 3 big limits. First, Moore's law reaching its end means they can hardly inhabit highly mobile bodies like those of humans or animals.
Second, they are still single-purpose machines. No AI that can beat humans on the Go board can also prove itself on Facebook. The thinking ability of machines is still confined to specific domains.
AI is still bound by the rules.
And luckily, they are still bound by the rules. This means that as long as humans have not supplied enough data for AI to learn the best winning principles, you can rest assured that humanity will not fall to AI.
For example, a Starcraft pro recently stated that AI cannot beat humans at this game. Considering the practically unlimited possibilities the game opens up, I believe the gosu is right. At least for the next few years.
But even if one day we lose to AI at Starcraft or LoL, don't rush to panic. "Rules" and "strategies" are both their strength and their weakness. In real life, not everything has rules, even the things software engineers must go through, like getting to know girls. Some people will tell you there is a "principle" for winning her over, but even that rule does not always apply. Our lives are still far too messy and complex for even us to extract meaningful rules ourselves, let alone AI.
For a computer, x = 1 + 1 means x = 2. But if she texts you "1 + 1", what do you think?
My childhood games are also the reason I am not afraid of AI or robots destroying humanity.
Back to the main point. Precisely because AI in particular, and machines in general, always follow rules, the key to a sustainable future for humanity is not letting AI learn every aspect of our lives. They have been learning all along, and if one day they grasp all the rules of human life, they will be able to beat us both in Starcraft and in real bloodshed.
And surely, if they are ever capable of that, they will also have concluded that humanity is an ugly and worthless thing.