To Speed Up AI, Mix Memory and Processing

New computing architectures aim to extend artificial intelligence from the cloud to smartphones

If John von Neumann were designing a computer today, there’s no way he would build a thick wall between processing and memory. At least, that’s what computer engineer Naresh Shanbhag of the University of Illinois at Urbana-Champaign believes. The eponymous von Neumann architecture was published in 1945. It enabled the first stored-program, reprogrammable computers, and it has been the backbone of the industry ever since.

Now, Shanbhag thinks it’s time to switch to a design that’s better suited for today’s data-intensive tasks. In February, at the International Solid-State Circuits Conference (ISSCC), in San Francisco, he and others made their case for a new architecture that brings computing and memory closer together. The idea is not to replace the processor altogether but to add new functions to the memory that will make devices smarter without requiring more power.
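The general idea is easiest to see in a toy model. The sketch below is not Shanbhag’s design; it is a minimal Python illustration, with hypothetical class and method names, of a memory array that can reduce a stored row to a single result internally instead of streaming every word out to the processor.

```python
import numpy as np

# Toy model of "adding functions to the memory" (illustrative only).
# In a conventional access, every stored word crosses the memory wall
# to the processor; with an in-memory function, only the result does.

class ComputeInMemory:
    """Hypothetical memory array that can perform a dot product in place."""

    def __init__(self, rows: np.ndarray):
        self.rows = rows          # data resident in the "array"
        self.words_moved = 0      # traffic across the memory wall

    def read_row(self, i: int) -> np.ndarray:
        # Conventional von Neumann access: the whole row is shipped out.
        self.words_moved += self.rows.shape[1]
        return self.rows[i]

    def in_memory_dot(self, i: int, query: np.ndarray) -> float:
        # In-memory function: the reduction happens where the data lives,
        # so only one scalar result leaves the array.
        self.words_moved += 1
        return float(self.rows[i] @ query)

mem = ComputeInMemory(np.ones((8, 1024)))
q = np.ones(1024)

_ = mem.read_row(0) @ q        # processor-side compute: 1,024 words moved
_ = mem.in_memory_dot(1, q)    # in-memory compute: 1 word moved
print("words moved:", mem.words_moved)
```

The point of the toy model is the traffic count: the same arithmetic is done either way, but the in-memory version moves three orders of magnitude fewer words for a row of this size.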

Industry must adopt such designs, these engineers believe, in order to bring artificial intelligence out of the cloud and into consumer electronics. Consider a simple problem like determining whether your grandma is in a photo. Artificial intelligence built with deep neural networks excels at such tasks: A computer compares her photo with the image in question and determines whether they are similar, usually by performing some simple arithmetic. So simple, in fact, that moving the image data from memory to the processor consumes 10 to 100 times as much energy as running the computation itself.
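To make that energy argument concrete, here is a rough back-of-the-envelope sketch in Python. The feature vectors and energy constants are invented for illustration; only the 10-to-100x movement-to-compute ratio comes from the text.

```python
import numpy as np

# Illustrative energy accounting for a similarity check on a von Neumann
# machine. The "simple arithmetic" is essentially a dot product between
# feature vectors; the dominant cost is moving the operands to the CPU.
# Energy values are arbitrary units chosen so only the ratio matters.

E_MAC = 1.0    # assumed energy of one multiply-accumulate
E_MOVE = 25.0  # assumed energy to move one operand word from memory

def similarity_energy(stored: np.ndarray, query: np.ndarray):
    """Cosine similarity plus a rough energy tally."""
    n = stored.size
    score = float(stored @ query) / (np.linalg.norm(stored) * np.linalg.norm(query))
    compute_energy = n * E_MAC        # one MAC per vector element
    movement_energy = 2 * n * E_MOVE  # both operand vectors cross the memory wall
    return score, compute_energy, movement_energy

rng = np.random.default_rng(0)
grandma = rng.standard_normal(4096)                 # hypothetical stored features
photo = grandma + 0.1 * rng.standard_normal(4096)   # query resembling them

score, e_compute, e_move = similarity_energy(grandma, photo)
print(f"similarity: {score:.3f}")
print(f"movement/compute energy ratio: {e_move / e_compute:.0f}x")
```

With these assumed constants the data movement costs 50 times the arithmetic, squarely in the article’s 10-to-100x range, which is why computing inside the memory pays off.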

That’s the case for most artificial intelligence running on von Neumann architectures today. As a result, artificial intelligence is power hungry, neural networks are stuck in data centers, and computing is a major energy drain on new technologies such as self-driving cars.

“The world is gradually realizing it needs to get out of this mess,” says Subhasish Mitra, an electrical engineer at Stanford University. “Compute has to come close to memory. The question is, how close?”

Mitra’s group uses an unusual architecture and new materials, layering carbon-nanotube integrated circuits directly on top of resistive RAM, which puts logic and memory far closer together than building them on separate chips. In a demo at ISSCC, their system could efficiently classify the language of a sentence.