''Computer intelligence will cause problems. But robots won't enslave us''
Futurist Aleksey Andreyev on the new language of the Facebook bots, the coming ''rise of the machines'', and the threats posed by smart homes
Recently, Facebook's engineers had to shut down one of the company's AI systems because its bots suddenly started speaking a language of their own, unknown to people, instead of English. A little earlier, Facebook founder Mark Zuckerberg publicly clashed with Tesla and SpaceX creator Elon Musk over whether artificial intelligence could bring about the end of the world. Realnoe Vremya spoke with Aleksey Andreyev, an information security expert at Positive Technologies, about whether we should panic over the recent story with the Facebook bots, what chiefly distinguishes AI from human intelligence, and whether humankind should get ready for a ''rise of the machines''.
''Microsoft thought people would teach it something good, but it turned out the other way around''
Aleksey, you know very well that experts reacted differently to the news that Facebook shut down one of its AI systems: some are sure there is nothing terrible about it and that the step was perfectly logical, while others are raising the alarm and speak of a global threat to humankind. Some people believe the story was invented altogether. What is your reaction? What do you make of what happened?
I would say this news is about 40 years old. Chatbots like these have been built for a long time; it is enough to recall ELIZA, the famous language processing program written by Joseph Weizenbaum in 1966. The kind of looping that happened at Facebook has happened to bots many times over the last 50 years.
Such looping happens for a clear reason: these bots have no consciousness whatsoever; they merely imitate plausible replies. The simplest bot imitates answers by fitting our ordinary speech into patterns. For instance, if you tell the bot, ''I worry about my exams'', it picks the word ''exams'' out of your text and builds a phrase around it, say, ''Tell me more about your exams.'' The bot imitates a dialogue by reusing words from the interlocutor's previous phrases.
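A minimal sketch of this pattern trick in Python (my own illustration of the ELIZA idea, not the code of any real bot):

```python
import re

# A toy ELIZA-style bot: no understanding, just keyword patterns.
# Each rule pairs a regular expression with a reply template that
# reuses the fragment matched in the user's own phrase.
RULES = [
    (re.compile(r"my (\w+)", re.I), "Tell me more about your {}."),
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {}?"),
]

def reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1))
    return "Please, go on."  # canned fallback when nothing matches

print(reply("I worry about my exams"))  # -> Tell me more about your exams.
```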
Nowadays bots expand their set of patterns because they can find typical phrases on the Internet. If you ask one, ''How are you?'', it will search the Internet, find all possible dialogues containing those words, choose the most popular answer, for instance, ''Fine, and you?'', and serve up that cliché. When two such bots are set against each other, the following happens: they start ''throwing'' patterns at each other and gradually run out of words. It is like two people reading to each other from newspaper clippings, which inevitably run out. That is what happened in the Facebook case. The talk of them inventing a new language is just good PR; in reality, the same thing has been happening for the last 50 years, and there is even research on what most chatbots do when put into a dialogue with a copy of themselves.
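Pairing two such bots makes the looping easy to reproduce. A toy illustration (the dialogue table is invented and has nothing to do with Facebook's actual system):

```python
# Each bot answers with the "most popular" canned reply to what it heard.
# As soon as a phrase maps to itself, the conversation loops forever.
POPULAR_REPLY = {
    "how are you?": "fine, and you?",
    "fine, and you?": "fine, and you?",
}

phrase = "how are you?"
for turn in range(4):
    speaker = "bot A" if turn % 2 == 0 else "bot B"
    phrase = POPULAR_REPLY.get(phrase, "how are you?")
    print(f"{speaker}: {phrase}")
# bot A: fine, and you?
# bot B: fine, and you?
# ...the bots have run out of words and simply cycle.
```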
More interesting are the cases when bots that learn from conversations with people had to be switched off, because they really did become strange: people taught them strange things. Last spring Microsoft switched off such a bot, called Tay, after users promptly taught it indecent and politically incorrect statements. Again, there was no intelligence there, just a set of patterns. Microsoft apparently expected people to teach it something good, but it turned out the other way around.
So the problem is not superintelligence or chatbots; they are more likely a mirror of our own values in communication. Bots pick up the way people go round in circles when they have nothing to talk about, the way people learn to quarrel. These are patterns, nothing more.
''A hacker passing by your windows just needs to try the factory settings the user never changed''
If everything is so simple, why did Zuckerberg and Musk end up debating the matter?
In this case we are talking about two media figures. What they say in public and what they actually do are somewhat different things, and both reflect tendencies in the businesses they work in. Facebook really does use artificial intelligence and can, to a certain degree, harm people, because the social network collects an enormous amount of personal data while trying to figure out your tastes and sell them. What do such systems do? The answer is obvious: they sell information about us.
If you want to sell car spare parts, Facebook will find you people who drive certain cars and live in Moscow or Kazan. All of this is collected and sold to the advertiser. Such artificial intelligence already exists; it works and influences us invisibly by suggesting all kinds of things. In essence, it is a search service. Paired with a mobile phone it becomes even more knowing, because it learns our routes and purchases, tracks our location, and knows our desires and needs.
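Mechanically, such targeting is not mysterious: it boils down to filtering collected profiles against an advertiser's criteria, roughly like this (the profile records and field names are invented for illustration):

```python
# Toy audience targeting: filter collected user profiles by the
# attributes an advertiser pays for. All records are made up.
profiles = [
    {"id": 1, "city": "Moscow", "car": "Lada Vesta"},
    {"id": 2, "city": "Kazan", "car": "Kia Rio"},
    {"id": 3, "city": "Sochi", "car": "Kia Rio"},
]

def audience(cities: set, car_model: str) -> list:
    """Return the profiles matching the advertiser's criteria."""
    return [p for p in profiles
            if p["city"] in cities and p["car"] == car_model]

# "Find me Kia Rio drivers in Moscow or Kazan" -> profile 2 only
print(audience({"Moscow", "Kazan"}, "Kia Rio"))
```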
Elon Musk also deals with artificial intelligence: Tesla is testing its autopilot system, and there have already been crashes involving it. Musk wants to stay ahead of the coming legislation in this sphere. Indeed, many countries have recently turned to the legal regulation of AI: last year the US administration issued a report on the future of artificial intelligence, the British Standards Institution published recommendations on building ethical robots, and the EU is preparing a similar project. Restrictions are being drawn up now and are likely to arrive soon, because we already have examples of how such systems affect us. For instance, there is research on how the high-frequency trading robots working on exchanges helped provoke the 2008 crisis.
What Musk and Zuckerberg say should be treated as foam on the surface of a big wave. The wave itself is much deeper and more interesting.
Whom do you support – Musk or Zuckerberg?
I would sooner say that I support neither position, because both of them are building artificial intelligence and neither talks seriously about restrictions. As someone who works in security, I understand perfectly well that we already have problems with simple devices, for instance, the modems people switch on at home. A simple modem can become a threat, a weapon. A person simply did not change the password, and a hacking machine crawling the Internet and trying passwords (it can try a million passwords per hour) can string thousands of such hacked home routers or web cameras together (nowadays even security cameras are hacked this way) and assemble botnets from them. It might seem simple: just change the password, don't leave the ''123456'' that was set when you bought the device. But people don't do it, and millions of devices are hacked this way.
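How easily this is automated can be shown with a defensive sketch (the device list and the tiny password dictionary below are invented; real scanners work at a vastly larger scale):

```python
# Toy audit: flag home devices that still answer to factory passwords.
# Everything here is made up for illustration.
FACTORY_DEFAULTS = {"admin", "password", "123456", "12345678"}

devices = {
    "192.168.1.1 (router)": "123456",      # left as shipped -> vulnerable
    "192.168.1.20 (web camera)": "admin",   # left as shipped -> vulnerable
    "192.168.1.30 (NAS)": "h7!Rk2#q9",      # changed -> passes the check
}

for name, password in devices.items():
    if password in FACTORY_DEFAULTS:
        print(f"{name}: still on a factory password, easy botnet material")
```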
Now imagine what can happen when people start using artificial intelligence at home or in their mobile phones. It becomes very difficult to understand what is going on with your device. For instance, you are talking to a bot, and it advises you something or suddenly switches some device on in your house, and you cannot tell whether it is the real bot or whether it has been hacked and another person is now controlling it. A completely new level of danger will appear, one that did not exist before. From a security perspective, I am of course on the opposite side from Zuckerberg and Musk, because they are trying to sell artificial intelligence, and we, the people who deal with security, will have to cope with it.
Aleksey, this reminds me of an episode of the Mr. Robot series in which hackers broke into a smart home system and made the owner's life hell. Is that already possible today?
It is already happening; it is not fantasy at all. Many smart home systems on the market are not secure. Experts at our company do exactly this kind of research. Systems based on Wi-Fi, Zigbee and other old wireless communication protocols are especially unsafe. Unfortunately, people themselves don't bother with the security settings, and vendors don't explain to them how to do it.
The system itself can be perfectly good inside: you can, for instance, properly change the passwords and the keys that encrypt the connection between your mobile phone and all the devices. But in practice people don't do it. A hacker passing by your windows just needs to try the factory settings the user never changed, and then he gets the same access to all those devices as the owner himself. The other point is that smart homes are still a niche product, so there are not many of them yet, but the problem is really big.
Aleksey, what do you see as the downside for humankind of perfecting AI and developing it to the maximum? What will happen when it becomes more powerful, stronger and smarter?
I think the systems that handle our queries can cause us trouble even today. A search engine is the simplest example. Already it can steer us to some extent, or certain people can do so with its help. Some data can be hidden from the search results and other data pushed at you; this is already happening, and there are even specialised professions for it. For now the search engine is passive: it acts only because you ask it. The next step, however, is active systems that, having collected data, offer you something on their own or even take part in public life, like the AI used to process big data in some large companies.
For instance, take a company granting loans: a special system analyses applicants' data to weed out requests from swindlers so that they are not given money. There are programs that try to learn from criminals' faces, or from information about them, to predict who will commit the next crime, as in Minority Report. Such things already exist and already work, and there are no Asimov-style laws to restrict them; by the way, this point appears in the recommendations I mentioned earlier. Already there are systems that start drawing conclusions from their own statistics. For instance, the loan-granting system notices that men repay loans three times less often than women; such a system can gradually stop granting loans to men altogether. Or take the system analysing criminal records: if it finds the district with the greatest number of dark-skinned criminals, it can generalise that data and start flagging dark-skinned people as suspicious more often than others. This can create serious problems, because these systems have no limits.
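A caricature of that drift, with invented numbers: a scorer that sees only group statistics, and has no imposed limits, turns a statistical skew into a blanket refusal.

```python
# Toy credit scorer: it knows only historical default rates per group
# and has no external limits, so a group statistic becomes a blanket
# rule applied to every individual. All numbers are invented.
default_rate = {"men": 0.30, "women": 0.10}  # men default 3x more often

def approve(group: str, acceptable_risk: float = 0.25) -> bool:
    # The individual applicant is never examined, only the group label.
    return default_rate[group] < acceptable_risk

print(approve("men"))    # False -> every man is refused a loan
print(approve("women"))  # True  -> every woman is approved
```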
The development of hacking systems that can be used in large-scale attacks is also an unpleasant thing. Again, they learn very quickly, and they face serious competition, a real evolution: hackers' tools and protection instruments keep improving and driving each other forward. Nowadays even a schoolkid with a hacking tool in his hands can stage quite a big attack, and that schoolkid may know nothing about programming at all. He just needs to download an app and launch it.
''Machines can't rule the world – they perform a certain task, and then they have nothing left to do''
To tell the truth, one now gets the impression that everything is heading towards a full-fledged rollout of driverless vehicles within the next few years: dedicated road lanes are being discussed, a law is being drafted…
They will be introduced, of course. But I think strict legal limits need to be imposed: who will be responsible for accidents involving such cars, and who may use them at all? Otherwise people will have to learn to stay alert while using such systems. I think a system like this works much better as an assistant. For instance, there are programs that can notice the driver falling asleep and warn him. Of course, the system can plan routes and do other things, but the driver can't be switched off completely.
Aleksey, do you believe in the movie scenarios (Skynet, The Matrix)? Could the plots of science fiction films about the conquest of the world come true?
I think all these scenarios contain a kernel of truth. However, we should understand that human history has seen many technological leaps. Before the appearance of cars, people thought cities would drown in dung: ''If there is this much dung in the streets of London now, then in a few years, when there are more horses, the entire city will sink in it.'' Then cars appeared, horses disappeared, and a different level of problems arose.
Of course, computer intelligence will cause problems. But robots won't enslave us. What we should fear instead are fast but global failures: a stock market crash or, say, a blackout in a big city. That could well happen with the help of intelligent systems; big catastrophes are quite possible. But machines can't rule the world: they perform a certain task, and then they have nothing left to do. People who know how to use artificial intelligence as a tool, however, can create plenty of problems for other people.