Artificial intelligence is the most critical (in every sense) technology of our time. AI chips are the most critical infrastructure for artificial intelligence. Given those two assumptions, the impact Graphcore plans to have on the world defies description. How will the IPU push the boundaries of Moore's law? What hardware and software should we expect? One thing is certain: Nvidia has reason to worry.
If luck is the ability to be in the right place at the right time, then we are lucky. Graphcore, a well-known name in the world of AI chips, has long been on the radar of major tech publications. ZDNet was able to talk with Graphcore's founders before they announced their latest news.
Graphcore, if you did not know, has just received another $200 million in financing from BMW, Microsoft, and leading financial investors to scale up what it calls the world's most advanced AI chip. Graphcore is now officially a "unicorn" with a valuation of $1.7 billion. The company's partners include Dell, Bosch, and Samsung. It is not hard to guess that something very big is brewing. But first things first.
Learning how the brain works is one thing. Simulating it in chips is another
Graphcore is based in Bristol, UK, and was founded by semiconductor-industry veterans Nigel Toon (CEO) and Simon Knowles (CTO). Toon and Knowles previously worked together at companies such as Element14 and Icera, which reached a combined value in the billions of dollars. Toon is confident that they can, and will, shake up the semiconductor industry more than ever before, breaking Nvidia's practical monopoly.
Nvidia is the major player in the AI field thanks to its GPU chips, and it keeps advancing. There are other players in this space, but Toon believes that only Nvidia has a clear, consistent strategy and an effective product on the market. There is also Google, which is investing in AI chips, but Toon argues that Graphcore has a head start and a fantastic opportunity to build an empire around its IPU (Intelligence Processing Unit) chips. As an example, he cites the success of ARM mobile processors.
To understand the source of his confidence, and that of his partners and investors, we need to understand what Graphcore does and what distinguishes it from its competitors. Machine learning and artificial intelligence are among the fastest-growing and most critical technologies of our time. Machine learning, which underlies most of today's artificial intelligence, is very effective at finding patterns and regularities, and works by combining appropriate algorithms (models) with data (training sets).
Some people describe artificial intelligence as little more than matrix multiplication. While such sweeping statements are questionable, the fact remains: most of machine learning comes down to efficient data operations at scale. That is why GPUs handle machine-learning workloads so well. Their architecture was originally designed for graphics processing, but it proved extremely efficient for data operations too.
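To make the "matrix multiplication" point concrete, here is a toy dense (fully connected) neural-network layer in plain Python. This is a minimal sketch for illustration only, not any framework's actual implementation; the core of the forward pass is just a matrix product plus a bias.

```python
def matmul(a, b):
    """Multiply matrix a (n x k) by matrix b (k x m), row by column."""
    return [[sum(a[i][p] * b[p][j] for p in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

def dense_forward(x, weights, bias):
    """Forward pass of a dense layer: y = x @ W + b."""
    y = matmul(x, weights)
    return [[y[i][j] + bias[j] for j in range(len(bias))]
            for i in range(len(y))]

# Batch of 2 inputs with 3 features each; the layer maps 3 -> 2 units.
x = [[1.0, 2.0, 3.0],
     [0.0, 1.0, 0.0]]
w = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]
b = [0.5, -0.5]
print(dense_forward(x, w, b))  # [[4.5, 4.5], [0.5, 0.5]]
```

Real workloads run thousands of such multiplications over far larger matrices, which is exactly the kind of massively parallel arithmetic GPUs (and, Graphcore argues, IPUs) are built for.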
What did Graphcore do? It invested in a completely new architecture, which is why Toon believes it has an advantage over the alternatives. Toon notes that competitors have effectively built specialized chips (ASICs) that perform certain mathematical operations on data well, optimized for specific tasks. But for tomorrow's workloads, that approach will not do.
What is so special about Graphcore's own architecture? It is sometimes said that Graphcore is creating a neuromorphic AI chip: a processor modeled on the human brain, with its neurons and synapses reflected in the architecture. But Knowles dispels this view:
"The brain is a great exemplar for computer architects in this bold new endeavor of machine intelligence. But the strengths and weaknesses of silicon are very different from the computational properties of wetware. We did not copy nature's designs for aircraft, or for moving over surfaces, or for engines, because our engineering materials are different. The same is true of computing.
For example, most neuromorphic computing projects advocate communicating via electrical spikes, as the brain does. But a basic energy-efficiency analysis immediately shows that an electrical spike (two edges) is half as efficient at transmitting information as a single edge, so following the brain here is not a good idea. I think computer architects should strive to learn how the brain computes, but they should not copy it literally in silicon."
Breaking Moore's law, surpassing the GPU
Energy efficiency is indeed a limiting factor for neuromorphic architectures, but it is not the only one. Commenting on Moore's law, Toon said that we have far exceeded all expectations and still have 10-20 years of progress in reserve. But then we will reach some fundamental limits.
Toon believes we have reached the lowest voltage that can be used in such chips. We can therefore keep adding transistors, but we cannot make them much faster. "Your laptop runs at 2 GHz; it just has more cores. But we need thousands of cores to work with machine learning. We need a different architectural process, designing chips in different ways. The old methods will not work."
Toon says the IPU is a general-purpose machine-intelligence processor, designed specifically for machine intelligence. "One of the advantages of our architecture is that it suits many modern approaches to machine learning, such as CNNs, but it is also highly optimized for other machine-learning approaches, such as reinforcement learning. The IPU architecture allows us to surpass GPUs: it combines massive parallelism, with more than 1,000 independent processor cores per IPU, and built-in memory, so the entire model can fit on the chip."
But how does the IPU compare with Nvidia's GPUs in practice? Nvidia recently released some machine-learning benchmarks in which it appears to have won. But, as Toon notes, the data structures used in machine learning are different: they are more multidimensional and complex, and therefore need to be handled differently. GPUs are very powerful, but not necessarily efficient with these data structures. Models could be made 10 or even 100 times faster.
However, speed is not all you need to succeed in this game. Nvidia, for example, succeeded not only because of its powerful GPUs. A large part of its success lies in software. Libraries that let developers abstract away hardware peculiarities and focus on optimizing their machine-learning algorithms became a key element of the company's success.
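The idea behind such libraries can be sketched in a few lines: user code calls one API, and a backend selected at runtime does the work on whatever hardware is available. This is a hypothetical toy for illustration; the class and function names are invented here, not the API of CUDA, cuDNN, or any real library.

```python
class CPUBackend:
    """Reference backend: plain-Python matrix multiply."""
    name = "cpu"

    def matmul(self, a, b):
        return [[sum(x * y for x, y in zip(row, col))
                 for col in zip(*b)] for row in a]

class FakeAcceleratorBackend(CPUBackend):
    # A real backend would offload to a GPU or IPU; this toy one
    # reuses the CPU math and only changes the reported device name.
    name = "accel"

def get_backend(prefer="accel"):
    """Pick a backend by name, falling back to CPU. The caller's
    model code stays identical whatever hardware is underneath."""
    backends = {"cpu": CPUBackend(), "accel": FakeAcceleratorBackend()}
    return backends.get(prefer, backends["cpu"])

backend = get_backend()
print(backend.name)                          # accel
print(backend.matmul([[1, 2]], [[3], [4]]))  # [[11]]
```

The design point is that the `matmul` call site never mentions hardware; swapping the backend is a one-line change, which is roughly the abstraction that made Nvidia's software stack so sticky.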
The graph revolution is about to begin
Of course, by now you are wondering what kind of graphs these are. What structure, model, and formalism does Graphcore use to represent and work with them? Can we call them knowledge graphs? The good news is that we did not have to wait long for an answer.
"We just call them computational graphs. Every machine-learning model is best expressed as a graph; that is how TensorFlow works, too. It's just that our graphs are orders of magnitude more complex, because we have orders of magnitude more parallelism for working with graphs on our chips," says Toon.
Toon promises that over time Graphcore will give developers full access to the IPU by open-sourcing its optimized graph libraries, so they can see how Graphcore builds applications.
Graphcore is already shipping production hardware to its first early-access customers. Graphcore currently sells PCIe cards, called C2 IPU Processor cards, that plug directly into server platforms; each contains two IPU processors. The company is also working with Dell to reach enterprise and cloud customers.
The product will become widely available next year. The initial focus is on data centers, cloud solutions, and a number of edge applications that demand heavy compute, such as autonomous cars. Graphcore is not yet targeting consumer devices such as mobile phones.