Nvidia Chip Takes Deep Learning to the Extremes
There’s no doubt that GPU-powerhouse Nvidia would like to have a solution for all size scales of AI—from massive data center jobs down to the always-on, low-power neural networks that listen for wakeup words in voice assistants.
Right now, that would take several different technologies, because none of them scale up or down particularly well. It’s clearly preferable to be able to deploy one technology rather than several. So, according to Nvidia chief scientist Bill Dally, the company has been seeking to answer the question: “Can you build something scalable… while still maintaining competitive performance-per-watt across the entire spectrum?”
It looks like the answer is yes. Last month at the VLSI Symposia in Kyoto, Nvidia detailed a tiny test chip that can work on its own to do the low-end jobs or be linked tightly together with up to 36 of its kin in a single module to do deep learning’s heavy lifting. And it does all this while maintaining roughly the same top-class performance per watt.
The individual accelerator chip is designed to perform the execution side of deep learning rather than the training side. Engineers generally measure the performance of such “inferencing” chips in how many operations they can do per joule of energy or per square millimeter of silicon. A single one of Nvidia’s prototype chips peaks at 4.01 tera-operations per second (4.01 trillion operations per second) and 1.29 TOPS per square millimeter. Compared with prior prototypes from other groups using the same precision, the single chip was at least 16 times as area efficient and 1.7 times as energy efficient. Linked together into a 36-chip system, it reached 127.8 TOPS, a 32-fold performance boost. (Admittedly, some of the efficiency comes from not having to handle higher-precision math, certain DRAM issues, and other forms of AI besides convolutional neural nets.)
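As a quick sanity check, the 32-fold figure follows directly from the quoted throughput numbers. The short sketch below redoes that arithmetic and also compares the module against perfectly linear scaling; the 36x “ideal” baseline is our assumption for comparison, not a figure Nvidia reported.

```python
# Sanity check of the scaling figures quoted above.
# Throughput numbers come from Nvidia's VLSI Symposia results;
# the perfectly-linear 36x baseline is an assumed ideal.

SINGLE_CHIP_TOPS = 4.01   # peak throughput of one prototype chip
MODULE_TOPS = 127.8       # throughput of the 36-chip module
NUM_CHIPS = 36

speedup = MODULE_TOPS / SINGLE_CHIP_TOPS    # ~31.9x, i.e. the "32-fold boost"
scaling_efficiency = speedup / NUM_CHIPS    # fraction of perfectly linear scaling

print(f"Speedup over one chip: {speedup:.1f}x")
print(f"Efficiency vs. ideal {NUM_CHIPS}x scaling: {scaling_efficiency:.0%}")
```

Run as written, this reports a speedup of about 31.9x, or roughly 89 percent of perfectly linear scaling across the 36 chips.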
Companies have mainly been tuning their technologies to work best in their particular niches. For example, the Irvine, Calif.-based startup Syntiant uses analog processing in flash memory to boost performance for very-low-power, low-demand applications. Google’s original tensor processing unit, on the other hand, would be wasted on anything other than the high-performance, high-power environment of the data center.