A Neural-Net Based on Light Could Best Digital Computers
Researchers turn to optical computing to carry out neural-network calculations
We now perform mathematical calculations so often and so effortlessly with digital electronic computers that it’s easy to forget that there was ever any other way to compute things. In an earlier era, though, engineers had to devise clever strategies to calculate the solutions they needed using various kinds of analog computers.
Some of those early computers were electronic, but many were mechanical, relying on gears, balls and disks, hydraulic pumps and reservoirs, or the like. For some applications, like the processing of synthetic-aperture radar data in the 1960s, the analog computations were done optically. That approach gave way to digital computations as electronic technology improved.
Curiously, though, some researchers are once again exploring the use of analog optical computers for a modern-day computational challenge: neural-network calculations.
The calculations at the heart of neural networks (matrix multiplications) are conceptually simple—a lot simpler than, say, the Fourier transforms needed to process synthetic-aperture radar data. For readers unfamiliar with matrix multiplication, let me try to demystify it.
A matrix is, well, a matrix of numbers, arrayed into rows and columns. When you multiply two matrices together, the result is another matrix, each of whose elements is computed by pairing a row of the first matrix with a column of the second: you multiply the corresponding entries and sum the products. That is, multiplying matrices just amounts to a lot of multiplying and adding.
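To make that concrete, here is a minimal Python sketch (the two matrices and their sizes are just illustrative) that multiplies two small matrices the long way, with explicit loops, so you can see that each output element is nothing more than a sum of pairwise products:

```python
# Multiply two matrices the long way: each element of the result
# is the sum of pairwise products of a row of A with a column of B.
A = [[1, 2],
     [3, 4]]  # 2x2 matrix (illustrative values)
B = [[5, 6],
     [7, 8]]  # 2x2 matrix (illustrative values)

rows, inner, cols = len(A), len(B), len(B[0])
C = [[0] * cols for _ in range(rows)]

for i in range(rows):           # each row of A
    for j in range(cols):       # each column of B
        for k in range(inner):  # multiply pairs and add them up
            C[i][j] += A[i][k] * B[k][j]

print(C)  # [[19, 22], [43, 50]]
```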
But neural networks can be huge, many-layer affairs, meaning that the arithmetic operations required to run them are so numerous that they can tax the hardware (or energy budget) that’s available. Often graphics processing units (GPUs) are enlisted to help with all the number crunching. Electrical engineers have also been busy designing all sorts of special-purpose chips to serve as neural-network accelerators, Google’s Tensor Processing Unit probably being the most famous. And now optical accelerators are on the horizon.
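To see why the numbers add up so quickly, consider a back-of-the-envelope count of multiply-accumulate operations. The layer widths below are hypothetical, but the arithmetic is generic: a fully connected layer that maps m inputs to n outputs performs roughly m × n multiply-adds per input sample.

```python
# Back-of-the-envelope count of multiply-accumulate (MAC) operations
# for a stack of fully connected layers. Layer widths are hypothetical.
layer_widths = [784, 1024, 1024, 1024, 10]  # e.g., an MNIST-sized net

# Each layer mapping m inputs to n outputs costs about m * n MACs.
macs = sum(m * n for m, n in zip(layer_widths, layer_widths[1:]))
print(f"{macs:,} multiply-adds per input sample")  # 2,910,208
```

Roughly three million multiply-adds for a single, fairly small input; scale that to millions of inputs and far larger networks, and it becomes clear why dedicated accelerators are attractive.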
Two MIT spin-offs—Lightelligence and Lightmatter—are of particular note. These startups grew out of work on an optical-computing chip for neural-network computations that MIT researchers published in 2017.
More recently, yet another set of MIT researchers (including two who had contributed to the 2017 paper) has developed a different approach for carrying out neural-network calculations optically. Although it’s still years away from commercial application, it neatly illustrates how optics (or more properly a combination of optics and electronics) can be used to perform the necessary calculations.