Recently, on a clear morning in Palm Springs, California, Vivienne Sze stepped onto a small stage to deliver perhaps the most nerve-racking presentation of her career. The subject was one she knew thoroughly: she was there to tell the audience about the chips being developed in her lab at MIT, which promise to bring powerful artificial intelligence to a wide range of devices with limited power budgets. Today, most of the computational work of artificial intelligence happens in huge data centers. But the event, and the audience, gave Sze reason to be nervous.
Artificial intelligence on a chip
MARS, the venue, is an exclusive, invitation-only conference. Robots roll (or fly) around a luxury resort, and famous scientists mingle with science-fiction writers. Only a few researchers are invited to give technical talks, and those sessions are expected to be both inspiring and instructive. Gathered in the audience were roughly a hundred of the world's most prominent researchers, executives, and entrepreneurs. MARS is hosted by none other than Amazon's founder and chairman, Jeff Bezos, who was sitting in the front row.
“The audience, let's say, was pretty high level,” Sze says with a laugh.
Other speakers at MARS showed off karate-fighting robots, insect-sized drones, and even optimistic blueprints for Martian colonies. Sze's chips might have seemed comparatively modest; to the naked eye, they were indistinguishable from the chips found in any electronic device. But they were arguably more important than anything else shown at the event.
New features of the chips
Newly developed chips, like those from Sze's lab, may be crucial to future progress in artificial intelligence (AI), including in areas such as the drones and robots on display at MARS. Until now, AI has relied mostly on graphics chips, but new hardware could make AI algorithms more powerful, opening up new applications. New AI chips could make warehouse robots more common, or let smartphones create realistic augmented-reality scenery.
Sze's chips are at once extremely efficient and flexible in their design, which matters in a field that is evolving rapidly.
These microchips are designed to squeeze more out of the "deep learning" AI algorithms that have turned the world upside down, and in the process they may inspire those algorithms to evolve. "We need new hardware because Moore's law has slowed," Sze says, referring to the axiom coined by Intel co-founder Gordon Moore, which predicted that the number of transistors on a chip would double roughly every 18 months.
That law is now running up against the physical limits of engineering components at atomic scales, and this is spurring fresh interest in alternative architectures and approaches to computing.
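To make the scale of Moore's prediction concrete, here is a back-of-the-envelope sketch (my own illustration; the starting transistor count is a hypothetical figure, not from the article) of what doubling every 18 months implies:

```python
# Back-of-the-envelope sketch of Moore's prediction: transistor counts
# doubling roughly every 18 months. The starting count is illustrative.
start_count = 2_300            # hypothetical early-1970s-era chip
months_per_doubling = 18

def transistors_after(months, n0=start_count):
    """Projected transistor count after `months` of doubling."""
    return n0 * 2 ** (months / months_per_doubling)

# After 15 years (180 months): 180 / 18 = 10 doublings, a 1024x increase.
print(round(transistors_after(180)))   # 2300 * 1024 = 2355200
```

Ten doublings in fifteen years is a factor of more than a thousand, which is why even a modest slowdown in the doubling period matters so much to the industry.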
The high stakes involved in investing in next-generation AI chips, and in preserving America's dominance in chip-making generally, are not lost on the U.S. government. Sze's microchips are being developed with support from a DARPA program that funds new chip designs for artificial intelligence. That program was, of course, created against the backdrop of China's rapid advances in the same field.
But innovation in chip-making has been spurred mostly by the rise of deep learning, a very powerful method of teaching machines to perform useful tasks. Instead of giving a computer a set of rules to follow, the machine essentially programs itself. Training data is fed into a large simulated artificial neural network, which is then adjusted until it produces the desired result. With enough training, deep learning can find obscure and abstract patterns in data. The technique is being applied to a growing number of practical problems, from face recognition on smartphones to predicting disease from medical images.
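The idea of a network "programming itself" can be shown at its smallest possible scale. The following sketch (my own toy example, not code from Sze's lab) trains a single artificial neuron: rather than hand-writing rules for the logical OR function, we repeatedly nudge the neuron's weights until its output matches the desired result.

```python
import math
import random

# Toy illustration of deep learning's core loop: feed in training data,
# then adjust the weights until the output matches the desired result.
random.seed(0)

# Training data for logical OR: (inputs, desired output).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

# A single "neuron": two weights and a bias, randomly initialized.
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = random.uniform(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

for epoch in range(2000):              # repeated adjustment = "training"
    for x, target in data:
        out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = out - target             # distance from the desired result
        w[0] -= 0.5 * err * x[0]       # nudge each weight downhill
        w[1] -= 0.5 * err * x[1]
        b    -= 0.5 * err

for x, target in data:
    print(x, round(sigmoid(w[0] * x[0] + w[1] * x[1] + b)))
```

A real deep network stacks millions of such units in layers, but the principle is the same: no rules are written by hand; the parameters are adjusted until the outputs are right.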
A new chip race
Deep learning does not depend much on Moore's law. Neural networks perform many mathematical calculations in parallel, so they run far more efficiently on the specialized graphics chips built for video games, which use parallel computation to render three-dimensional imagery. But microchips designed specifically for the calculations that underlie deep learning can be more powerful still.
The potential of new chip architectures to improve artificial intelligence has sparked a level of entrepreneurial activity that the chip industry has not seen in decades.
Large technology companies that hope to harness and commercialize AI (including Google, Microsoft, and Amazon) are all working on their own deep-learning chips. Many smaller companies are developing new chips as well. "It's impossible to keep track of all the companies jumping into the AI chip race," says Mike Demler, a microprocessor analyst at the Linley Group, a research firm. "I'm not kidding: we learn of at least one a week."
The real opportunity, says Sze, is not building the most powerful deep-learning chip possible. Efficiency matters too, because AI also needs to run outside of big data centers, relying only on the energy available from a device's battery.
"AI will be everywhere, and figuring out how to make it all energy-efficient will be extremely important," says Naveen Rao, vice president of artificial-intelligence products at Intel.
Sze's hardware is more efficient partly because it physically shortens the path between where data is stored and where it is analyzed, and partly because it uses clever schemes for reusing data. Before joining MIT, Sze applied a similar approach to improving the efficiency of video compression at Texas Instruments.
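Why does data reuse save energy? On real chips, moving data to and from main memory costs far more energy than the arithmetic itself. The following toy model (my own illustrative sketch, not the actual Eyeriss dataflow) counts memory fetches for a one-dimensional convolution computed two ways: naively, and with the filter weights held in a local buffer and reused.

```python
# Toy model of data reuse: count "main memory" fetches for a 1-D
# convolution. On real hardware, each fetch costs far more energy
# than the multiply it feeds, so fewer fetches means less power.

def conv_naive(x, w):
    """Fetch a weight from memory for every single multiply."""
    fetches = 0
    out = []
    for i in range(len(x) - len(w) + 1):
        acc = 0
        for j in range(len(w)):
            acc += x[i + j] * w[j]   # counted as a fresh weight fetch
            fetches += 1
        out.append(acc)
    return out, fetches

def conv_reuse(x, w):
    """Fetch each weight once into a local buffer, then reuse it."""
    local = list(w)                  # len(w) fetches in total
    fetches = len(local)
    out = [0] * (len(x) - len(w) + 1)
    for j, wj in enumerate(local):   # stream one weight over all outputs
        for i in range(len(out)):
            out[i] += x[i + j] * wj
    return out, fetches

x = list(range(1024))                # hypothetical input activations
w = [1, -2, 1]                       # hypothetical 3-tap filter
out_a, cost_a = conv_naive(x, w)
out_b, cost_b = conv_reuse(x, w)
print(out_a == out_b, cost_a, cost_b)   # same result, far fewer fetches
```

Both versions compute identical outputs, but the reuse version touches memory three times for the weights instead of thousands; the same arithmetic, scheduled differently, consumes far less energy.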
In a field evolving as quickly as deep learning, the challenge for those working on AI chips is to make them flexible enough to be adapted to whatever application comes along. It is easy to design a super-efficient chip that can do only one thing, but such a product quickly becomes obsolete.
Sze's chip is called Eyeriss. Developed in collaboration with Joel Emer, a research scientist at Nvidia and a professor at MIT, it was tested alongside a number of standard processors to see how it handled a range of different deep-learning algorithms. By balancing efficiency with flexibility, the new chip achieves performance 10 or even 1,000 times better than existing hardware, according to a paper published last year.
Simpler AI chips are already having a major impact. High-end smartphones already include chips optimized to run deep-learning algorithms for image and voice recognition. More efficient chips could let these devices run more powerful AI code with better capabilities. Self-driving cars, too, need powerful computer chips, since most current prototypes rely on a trunkful of computers.
Rao says the MIT chips are promising, but many factors will determine whether a new hardware architecture succeeds. One of the most important, he says, is developing the software that lets programmers run code on it. "Making something usable from a compiler point of view is probably the biggest obstacle to adoption," he says.
Sze's lab is also exploring ways to design software so that it better exploits the properties of existing computer chips. And that work extends beyond deep learning.
Together with Sertac Karaman of MIT's Department of Aeronautics and Astronautics, Sze developed a low-power chip called Navion, which performs three-dimensional mapping and navigation for a tiny drone with remarkable efficiency. Navion shows how AI software (deep learning) and hardware (chips) are beginning to evolve together, in symbiosis.
Sze's chips may not draw as much attention as a swooping drone, but the fact that they were shown at MARS speaks to the importance of her technology for the future of AI. Perhaps at the next MARS conference, the robots and drones themselves will have something new inside.