Printing Circuits on Nanomagnets Yields a New Breed of AI

A rare form of matter known as “spin glass” may be opening up a new direction in artificial intelligence in which algorithms can be printed as physical hardware. Researchers at Los Alamos National Laboratory (LANL) have for the first time made an artificial spin glass consisting of nanomagnets arranged in a way that mimics a neural network.

Theoretical models of spin glass have long been used to describe complex systems, such as brain function and stock-market dynamics.

In brief, spin glass is a magnetic state of matter characterized by randomness. In the spin glass the LANL researchers developed—described in a paper published 17 March in the journal Nature Physics—the magnets settle into a complex configuration of mutual alignment and anti-alignment that minimizes the system’s energy. The magnets themselves are thin layers of iron-nickel alloy that act like microscopic bar magnets, flipping their north and south poles in search of a low-energy state. Their positions and orientations were chosen to match the interaction structure of an artificial neural network.
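To make that picture concrete, the tug-of-war among the magnets can be written as an Ising-style energy function. The Python sketch below is a minimal illustration, not the LANL system: the number of spins and the random couplings J are invented for the example.

```python
import numpy as np

# Illustrative Ising-style energy for N two-state magnets (not the LANL
# device). J[i, j] encodes how strongly magnets i and j prefer to align
# (J > 0) or anti-align (J < 0); the system settles toward spin
# configurations that make this energy as low as possible.
def energy(spins: np.ndarray, J: np.ndarray) -> float:
    # E = -1/2 * sum_ij J[i, j] * s[i] * s[j]; the factor of 1/2
    # avoids double-counting each pair.
    return -0.5 * spins @ J @ spins

rng = np.random.default_rng(0)
N = 8
J = rng.normal(size=(N, N))           # random couplings: the "glassy" disorder
J = (J + J.T) / 2                     # interactions are symmetric
np.fill_diagonal(J, 0)                # no magnet interacts with itself
spins = rng.choice([-1, 1], size=N)   # each magnet points one of two ways
print(energy(spins, J))
```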

“This is comparable to a supercooled fluid: The molecules want to arrange in a simpler solid—but can’t immediately, because the energy and time to find the ordered configuration aren’t [available],” said Michael Saccone, a postdoctoral researcher in theoretical physics at LANL.

Saccone and his LANL colleagues fabricated and observed the artificial spin glass as a proof-of-principle Hopfield neural network, a mathematical model of associative memory that guides the disorder of the artificial spin system. Hopfield neural networks are mathematical systems made of neurons that are either active (set to 1) or inactive (set to -1). Each neuron reads the values of all the other neurons and, depending on whether its connections to them are inhibitory or excitatory, decides which state to take in the next time step. The parallel with spin glasses is direct: the magnets can likewise be in one of two states, and they rely on their interactions with the other magnets to update those states.
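That update rule can be sketched in a few lines of Python. This is the generic, textbook Hopfield step, not code from the paper; the random weight matrix W stands in for the network’s excitatory and inhibitory connections.

```python
import numpy as np

# Generic synchronous Hopfield update (a textbook sketch, not the
# paper's code). Each neuron sums the weighted states of all the others
# and turns active (+1) or inactive (-1) according to the sign.
def hopfield_step(states: np.ndarray, W: np.ndarray) -> np.ndarray:
    # W[i, j] > 0 is an excitatory connection, W[i, j] < 0 an inhibitory
    # one; this mirrors how each magnet updates based on its neighbors.
    return np.where(W @ states >= 0, 1, -1)

rng = np.random.default_rng(0)
W = rng.normal(size=(6, 6))
W = (W + W.T) / 2                     # connections are symmetric
np.fill_diagonal(W, 0)                # neurons don't connect to themselves
states = rng.choice([-1, 1], size=6)  # random starting pattern
print(hopfield_step(states, W))
```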

This associative memory is a key feature of Hopfield networks because it lets the network link two or more memory patterns related to an object. In practice this means that if, for instance, the network receives a partial image of a face as input, it can recall the complete face. That gives Hopfield networks a distinct advantage over traditional algorithms, which need an input to match a stored memory exactly before they can identify it.
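A toy version of that face-completion behavior takes only a few more lines. The demo below is illustrative, not drawn from the study: it stores two random patterns with the standard Hebbian rule, corrupts one, and lets repeated updates pull the corrupted input back toward the stored memory. The pattern size and noise level are arbitrary choices.

```python
import numpy as np

# Toy associative recall (illustrative): store two patterns, corrupt
# one, and let the network fall back into the nearest stored memory.
rng = np.random.default_rng(1)
patterns = rng.choice([-1, 1], size=(2, 64))   # two stored "memories"

# Standard Hebbian weights: each stored pattern carves out its own
# low-energy basin in the network's energy landscape.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

probe = patterns[0].copy()
flipped = rng.choice(64, size=12, replace=False)
probe[flipped] *= -1                           # the "partial image" input

for _ in range(10):                            # iterate until settled
    probe = np.where(W @ probe >= 0, 1, -1)

print(np.array_equal(probe, patterns[0]))      # typically True: memory recalled
```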

While it seems counterintuitive to picture an AI algorithm taking a physical form—hardware serving as software—Saccone has an explanation. “A classic example of hardware serving as software are slide rule[s],” said Saccone. “The rules of the geometry encode simple arithmetic.”

Saccone further explained that their approach differs slightly from the slide rule, an old-fashioned mechanical analog “computer” whose logarithmic number scales on parallel sliding tracks served as a kind of pre-electronic calculator. “We’re mapping the energy functions onto one another,” said Saccone. “Simply put, the lowest energy state in our physical system represents a solution to another, analogous problem.”
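Saccone’s mapping can be stated compactly: the Hopfield network’s energy and the spin system’s energy share one quadratic form, so a configuration that minimizes one minimizes the other. The note below is a schematic restatement of that correspondence, not code from the study.

```python
import numpy as np

# Schematic version of the mapping (not code from the study). The
# Hopfield energy E = -1/2 * x^T W x and the spin-glass energy
# E = -1/2 * s^T J s are the same quadratic form: identify neuron
# states x with magnet orientations s and weights W with couplings J,
# and the magnets' lowest-energy state doubles as the network's answer.
def quadratic_energy(config: np.ndarray, couplings: np.ndarray) -> float:
    # Same function scores both systems: pass (spins, J) or (states, W).
    return -0.5 * config @ couplings @ config
```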