When will AI learn to reason?

Every day, the most advanced artificial-intelligence systems grow smarter and smarter, gaining new knowledge and skills. AI is already capable of outperforming people in many areas. But behind all this “superiority” hide only lines of code and precise algorithms that do not allow the software to be “free in its thoughts.” In other words, a machine cannot do what has not been built into it. AI can reach logical conclusions, but it cannot reason about a given topic. And it seems this may change soon.


How people learn about the world

We, like all sentient organisms, learn about the world around us gradually. Imagine a year-old baby watching a toy truck roll off a platform and hang in the air. To him there is nothing unusual in this. But run the same experiment just two or three months later, and the little one immediately realizes that something is wrong: by then he already knows how gravity works.

“Nobody tells the child that objects are supposed to fall,” says Yann LeCun, head of AI research at Facebook and a professor at New York University. “Much of what children learn about the world, they learn through observation.”

And, simple as it may sound, this approach can help AI developers create a more advanced version of artificial intelligence.

Why it is so hard to teach AI to reason

Deep machine learning (that is, roughly speaking, acquiring certain skills by trial and error) now allows AI to achieve great success. But the most important thing artificial intelligence still cannot do: it cannot reason and draw conclusions from an analysis of the objective reality in which it exists. In other words, machines do not really understand the world, which leaves them unable to interact with it.


One way to improve AI could be a kind of “shared memory” that would help machines obtain information about the world around them and gradually learn it. But that does not solve every problem.

“Obviously, we’re missing something,” says Professor LeCun. “A child can form an idea of what adult elephants and their calves look like after seeing just two photos, while deep-learning algorithms need to view thousands, if not millions, of images. A teenager can learn to drive a car safely after a few dozen hours of practice and figure out how to avoid accidents, but robots have to log tens of millions of hours.”

How to teach AI to reason

The answer, according to Professor LeCun, lies in an undervalued branch of deep learning known as unsupervised learning. While algorithms based on supervised and reinforcement learning teach AI to reach a goal using data supplied from outside, unsupervised algorithms develop behavior patterns on their own. Simply put, there are two ways to teach a robot to walk: the first is to enter all the parameters into the system based on the robot’s design; the second is to “explain” the principles of what walking is and let the robot learn on its own. The vast majority of existing algorithms take the first path. Yann LeCun believes the focus should shift toward the second.
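The article contains no code, but the contrast between the two approaches can be sketched on a toy task. Below is a minimal, purely illustrative NumPy example (the data and every name in it are my own assumptions, not from the article): a supervised learner is handed labels from outside, while an unsupervised one (a bare-bones k-means) discovers the same grouping on its own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two clusters of 2-D points: one around (0, 0), one around (5, 5).
a = rng.normal(0.0, 0.5, size=(50, 2))
b = rng.normal(5.0, 0.5, size=(50, 2))
points = np.vstack([a, b])

# --- Supervised: the grouping is supplied from outside ----------------
labels = np.array([0] * 50 + [1] * 50)   # "parameters entered into the system"
centroids_sup = np.array([points[labels == k].mean(axis=0) for k in (0, 1)])

# --- Unsupervised: the algorithm finds the pattern itself -------------
# Bare-bones k-means with farthest-point initialisation: seed with one
# point and the point farthest from it, then alternate between assigning
# every point to its nearest centroid and recomputing the centroids.
c0 = points[0]
c1 = points[np.argmax(((points - c0) ** 2).sum(axis=-1))]
centroids = np.stack([c0, c1])
for _ in range(10):
    assign = np.argmin(((points[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
    centroids = np.array([points[assign == k].mean(axis=0) for k in (0, 1)])

# Both end up near the true cluster centres (0,0) and (5,5), but the
# unsupervised learner never saw a single label.
print(np.sort(centroids_sup.sum(axis=1)))
print(np.sort(centroids.sum(axis=1)))
```

Clustering is of course far simpler than learning to walk, but the division of labour is the same: in the first case a human encodes the answer, in the second the algorithm extracts the structure from raw observations.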

“Researchers should start with prediction-based learning algorithms: for example, teach a neural network to predict the second half of a video after watching only the first. Yes, errors are inevitable in this case, but this is how we teach AI to reason, broadening its range of application. Returning to the example with the child and the toy truck: there are two possible outcomes, the truck falls or it hovers. Show neural networks a hundred other examples like this, and they will learn to build logical relationships and eventually learn to reason.”
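The “predict the second half after seeing the first” idea can be demonstrated on something much smaller than video. The sketch below (my own toy example, assuming windows cut from a noisy sine wave stand in for video clips) trains a linear predictor with least squares; the targets come from the data itself, so no human labelling is involved, which is the point of the approach LeCun describes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "clips": sliding windows cut from a noisy sine wave.  The task is
# self-supervised -- the second half of each window is the target.
t = np.arange(0, 200, 0.1)
signal = np.sin(t) + rng.normal(0, 0.05, size=t.shape)

WIN, HALF = 40, 20
windows = np.stack([signal[i:i + WIN] for i in range(len(signal) - WIN)])
first_half, second_half = windows[:, :HALF], windows[:, HALF:]

# Split into training and test windows.
n_train = 1500
Xtr, Ytr = first_half[:n_train], second_half[:n_train]
Xte, Yte = first_half[n_train:], second_half[n_train:]

# Fit a linear predictor W by least squares: second_half ≈ first_half @ W.
W, *_ = np.linalg.lstsq(Xtr, Ytr, rcond=None)

pred = Xte @ W
rmse = np.sqrt(np.mean((pred - Yte) ** 2))
# Trivial baseline: "the future just repeats the last observed frame".
baseline = np.sqrt(np.mean((Xte[:, -1:] - Yte) ** 2))
print(f"prediction RMSE {rmse:.3f} vs baseline {baseline:.3f}")
```

The learned predictor beats the last-frame baseline by a wide margin: having seen many windows, the model has implicitly absorbed the dynamics of the signal, just as LeCun’s hypothetical network absorbs the fact that unsupported trucks fall.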
