These Might Be the Fastest (and Most Efficient) AI Systems Around

Almost 2,000 entries ranked in MLPerf’s latest inferencing list

The machine learning industry’s effort to measure itself using a standard yardstick has reached a milestone. Forgive the mixed metaphor, but that’s actually what’s happened with the release of MLPerf Inference v1.0 today. Using a suite of benchmark neural networks measured under a standardized set of conditions, 1,994 AI systems battled it out to show how quickly their neural networks can process new data. Separately, MLPerf ran an energy-efficiency benchmark, which drew some 850 entrants.

This contest was the first to follow a set of trial runs in which the AI consortium MLPerf and its parent organization MLCommons worked out the best measurement criteria. But the big winner in this first official version was the same as it had been in those warm-up rounds—Nvidia.

Entries were combinations of software and systems that ranged in scale from Raspberry Pis to supercomputers. They were powered by processors and accelerator chips from AMD, Arm, Centaur Technology, EdgeCortix, Intel, Nvidia, Qualcomm, and Xilinx. And entries came from 17 organizations including Alibaba, Centaur, Dell, Fujitsu, Gigabyte, HPE, Inspur, Krai, Lenovo, Mobilint, Neuchips, and Supermicro.

Despite that diversity, most of the systems used Nvidia GPUs to accelerate their AI functions. There were some other AI accelerators on offer, notably Qualcomm’s AI 100 and EdgeCortix’s DNA. But EdgeCortix was the only one of the many, many AI accelerator startups to jump in. And Intel chose to show off how well its CPUs did instead of offering up something from its US $2-billion acquisition of AI hardware startup Habana.

Before we get into the details of whose what was how fast, you’re going to need some background on how these benchmarks work. MLPerf is nothing like the famously straightforward Top500 list of the supercomputing great and good, where a single value can tell you most of what you need to know. The consortium decided that the demands of machine learning are just too diverse to be boiled down to something like tera-operations per watt, a metric often cited in AI accelerator research.

First, systems were judged on six neural networks. Entrants did not have to compete on all six, however.

  • BERT, for Bidirectional Encoder Representations from Transformers, is a natural language processing AI contributed by Google. Given a question as input, BERT predicts a suitable answer.
  • DLRM, for Deep Learning Recommendation Model, is a recommender system trained to optimize click-through rates. It’s used to recommend items for online shopping and to rank search results and social media content. Facebook was the major contributor of the DLRM code.
  • 3D U-Net is used in medical imaging systems to tell which 3D voxels in an MRI scan are part of a tumor and which are healthy tissue. It’s trained on a dataset of brain tumors.
  • RNN-T, for Recurrent Neural Network Transducer, is a speech recognition model. Given a sequence of speech input, it predicts the corresponding text.
  • ResNet is the granddaddy of image classification algorithms. This round used ResNet-50 version 1.5; a minimal sketch of what inference with it looks like follows this list.
  • SSD, for Single Shot Detector, spots multiple objects within an image. It’s the kind of thing a self-driving car would use to find important things like other cars. This was done using either MobileNet version 1 or ResNet-34 as the backbone, depending on the scale of the system.
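
To make that last point concrete, here is a minimal sketch of what image-classification inference looks like in practice, using PyTorch and torchvision’s pretrained ResNet-50. This is an illustrative stand-in, not MLPerf’s benchmark harness: the “cat.jpg” input file, the preprocessing constants, and the top-5 readout are assumptions made for the example.

```python
# Illustrative sketch only: run one image through a pretrained ResNet-50 with
# PyTorch/torchvision. This is NOT the MLPerf harness; the "cat.jpg" file name
# and the standard ImageNet preprocessing constants are assumptions.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # common ImageNet stats
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(pretrained=True)  # pretrained ImageNet classifier
model.eval()

image = preprocess(Image.open("cat.jpg")).unsqueeze(0)  # shape [1, 3, 224, 224]

with torch.no_grad():
    logits = model(image)                  # 1,000 ImageNet class scores
    top5 = torch.topk(logits, k=5).indices[0].tolist()

print("Top-5 ImageNet class indices:", top5)
```

The benchmark itself is stricter than this toy: MLPerf specifies a reference model and an accuracy target for each network, so that different submissions are directly comparable.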

Competitors were divided into systems meant to run in a datacenter and those designed for operation at the “edge”—in a store, embedded in a security camera, etc.

Datacenter entrants were tested under two conditions. The first, called “offline,” was a situation where all the data was available in a single database, so the system could just hoover it up as fast as it could. The second, called “server,” more closely simulated the real life of a datacenter server, where data arrives in bursts and the system has to complete its work quickly and accurately enough to handle the next burst.

Edge entrants tackled the offline scenario as well. But they also had to handle a single-stream test, where they were fed one stream of data (say, a single conversation for language processing), and a multistream test, like the situation a self-driving car faces with data arriving from its multiple cameras.
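
Conceptually, those four scenarios differ mainly in how queries arrive at the system under test. The sketch below simulates that difference around a toy infer() function; the arrival rate, stream count, and timing numbers are made up for illustration, and this is not MLPerf’s actual LoadGen code.

```python
# Conceptual sketch of the four MLPerf inference scenarios, with made-up numbers.
# This is NOT the real MLPerf LoadGen; it only illustrates how queries arrive.
import random
import time

def infer(sample):
    """Stand-in for a real model's forward pass."""
    time.sleep(0.002)  # pretend inference takes about 2 ms
    return f"result-for-{sample}"

def offline(samples):
    # Offline: the whole dataset is available at once; the metric is throughput.
    start = time.time()
    for s in samples:
        infer(s)
    return len(samples) / (time.time() - start)  # samples per second

def server(samples, queries_per_second=100):
    # Server: queries arrive at random (Poisson-like) intervals, and each one
    # must finish within the scenario's latency bound.
    latencies = []
    for s in samples:
        time.sleep(random.expovariate(queries_per_second))  # wait for next query
        t0 = time.time()
        infer(s)
        latencies.append(time.time() - t0)
    return max(latencies)  # compare against the latency limit

def single_stream(samples):
    # Single-stream: the next query is sent only after the previous one returns;
    # the reported metric is a high-percentile latency.
    latencies = []
    for s in samples:
        t0 = time.time()
        infer(s)
        latencies.append(time.time() - t0)
    return sorted(latencies)[int(0.9 * len(latencies))]  # roughly 90th percentile

def multi_stream(samples, streams=8):
    # Multi-stream: queries come as fixed-size batches (one sample per camera,
    # say). Here we simply time each batch and report the slowest one.
    worst = 0.0
    for i in range(0, len(samples), streams):
        t0 = time.time()
        for s in samples[i:i + streams]:
            infer(s)
        worst = max(worst, time.time() - t0)
    return worst

if __name__ == "__main__":
    data = list(range(200))
    print("offline throughput (samples/s):", offline(data))
    print("server worst-case latency (s):", server(data))
    print("single-stream ~p90 latency (s):", single_stream(data))
    print("multi-stream worst batch time (s):", multi_stream(data))
```

In the real benchmark, MLPerf’s LoadGen tool generates the query traffic and logs the latencies; submitters plug their own hardware and software in underneath it.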

Got all that? No? Well, Nvidia summed it up in a handy slide.