Deep Learning Enables Real-Time 3D Holograms On a Smartphone
New AI technique can rapidly generate holograms with less than 1 megabyte of memory
Using artificial intelligence, scientists can now rapidly generate photorealistic color 3D holograms, even on a smartphone. According to a new study, the technique could find use in virtual reality (VR) and augmented reality (AR) headsets, among other applications.
A hologram is an image that essentially resembles a 2D window looking onto a 3D scene. The pixels of each hologram scatter light waves falling onto them, making these waves interact with each other in ways that generate an illusion of depth.
Holographic video displays create 3D images that people can view without eye strain, unlike conventional 3D displays, which produce the illusion of depth from 2D images. But although companies such as Samsung have recently made strides toward hardware that can display holographic video, actually generating the holographic data for such devices to display remains a major challenge.
Each hologram encodes an extraordinary amount of data in order to create the illusion of depth throughout an image. As such, generating holographic video has often required a supercomputer’s worth of computing power.
To bring holographic video to the masses, scientists have tried a number of strategies to cut down the amount of computation needed, for example by replacing complex physics simulations with simple lookup tables. However, these shortcuts often come at the cost of image quality.
Now researchers at MIT have developed a way to produce holograms nearly instantly: a deep-learning-based method so efficient that it can generate holograms on a laptop in the blink of an eye. They detailed their findings, which were funded in part by Sony, online this week in the journal Nature.
“Everything worked out magically, which really exceeded all of our expectations,” says study lead author Liang Shi, a computer scientist at MIT.
Using physics simulations for computer-generated holography involves calculating the appearance of many chunks of a hologram and then combining them to get the final hologram, Shi notes. Using lookup tables is like memorizing a set of frequently used chunks of hologram, but this sacrifices accuracy and still requires the combination step, he says.
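To make those two strategies concrete, here is a minimal sketch in Python with NumPy. The wavelength, pixel pitch, resolution, and toy scene are illustrative assumptions rather than values from the MIT work; `simulate` is the brute-force physics route, and `simulate_with_lookup` is the memorized-chunks shortcut Shi describes.

```python
import numpy as np

# Illustrative optics parameters (assumptions, not values from the paper)
WAVELENGTH = 532e-9                 # green laser light, in meters
PITCH = 8e-6                        # hologram pixel pitch, in meters
RES = 256                           # hologram resolution (RES x RES pixels)
K = 2 * np.pi / WAVELENGTH          # wavenumber

coords = (np.arange(RES) - RES / 2) * PITCH
XX, YY = np.meshgrid(coords, coords)

def zone_plate(z):
    """One "chunk": the spherical-wave pattern of an on-axis point at depth z."""
    r = np.sqrt(XX**2 + YY**2 + z**2)
    return np.exp(1j * K * r) / r

def simulate(points):
    """Brute-force simulation: recompute every point's wave at every pixel.

    Cost grows as O(points x pixels), which is why dense scenes can demand
    a supercomputer's worth of computation.
    """
    field = np.zeros((RES, RES), dtype=np.complex128)
    for x, y, z, amp in points:
        r = np.sqrt((XX - x)**2 + (YY - y)**2 + z**2)
        field += amp * np.exp(1j * K * r) / r
    return field

def simulate_with_lookup(points, table_depths):
    """Lookup-table shortcut: memorize one chunk per quantized depth, then
    shift each into place. Faster, but quantizing depth costs accuracy,
    and the per-point combination step remains.
    """
    table = {z: zone_plate(z) for z in table_depths}
    field = np.zeros((RES, RES), dtype=np.complex128)
    for x, y, z, amp in points:
        nearest = min(table_depths, key=lambda d: abs(d - z))
        dx, dy = int(round(x / PITCH)), int(round(y / PITCH))
        # np.roll wraps at the edges; a real implementation would pad instead
        field += amp * np.roll(table[nearest], (dy, dx), axis=(0, 1))
    return field

# A toy three-point scene: (x, y, z, amplitude), hologram plane at z = 0
scene = [(0.0, 0.0, 0.05, 1.0), (2e-4, 1e-4, 0.06, 0.8), (-1e-4, -2e-4, 0.07, 0.6)]
exact = simulate(scene)
approx = simulate_with_lookup(scene, table_depths=[0.05, 0.06, 0.07])
```

Even with the table in hand, the loop over points and the full-resolution additions remain; that lingering combination step is exactly what the deep-learning approach does away with.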
In a way, computer-generated holography is a bit like figuring out how to cut a cake, Shi says. Using physics simulations to calculate the appearance of each point in space is a time-consuming process that resembles using eight precise cuts to produce eight slices of cake. Using lookup tables for computer-generated holography is like marking the boundary of each slice before cutting. Although this saves a bit of time by eliminating the step of calculating where to cut, carrying out all eight cuts still takes up a lot of time.
In contrast, the new technique uses deep learning to essentially figure out how to cut a cake into eight slices using just three cuts, Shi says. The convolutional neural network—a system that roughly mimics how the human brain processes visual data—learns shortcuts to generate a complete hologram without needing to separately calculate how each chunk of it appears, “which will reduce total operations by orders of magnitude,” he says.
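The sketch below illustrates that idea with a toy fully convolutional network in PyTorch. It is not the authors' published architecture; the layer count, widths, and the real-plus-imaginary output encoding are assumptions chosen only to show how a single forward pass can map an RGB-plus-depth image straight to a hologram.

```python
import torch
import torch.nn as nn

class HologramCNN(nn.Module):
    """Toy fully convolutional network: RGB-D frame in, complex hologram out.

    A sketch of the deep-learning shortcut described in the article; the
    real network is deeper and carefully tuned.
    """
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, width, kernel_size=3, padding=1),   # 3 color + 1 depth channels in
            nn.ReLU(),
            nn.Conv2d(width, width, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(width, 2, kernel_size=3, padding=1),   # real + imaginary parts out
        )

    def forward(self, rgbd):
        # One pass produces the whole hologram; no per-chunk calculation.
        return self.net(rgbd)

model = HologramCNN()
rgbd = torch.rand(1, 4, 256, 256)               # a single RGB-D frame
out = model(rgbd)                               # shape: (1, 2, 256, 256)
hologram = torch.complex(out[:, 0], out[:, 1])  # complex field at each pixel
```

Because the same convolutional weights slide across the entire image, one forward pass yields the whole hologram at once, with no separate per-chunk calculation or combination step.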
The researchers first built a custom database of 4,000 computer-generated images, each of which included color and depth information for each pixel, along with a corresponding 3D hologram.
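A training loop over such paired data might look like the following sketch, where random tensors stand in for the rendered RGB-D frames and their target holograms; the shapes, network, and hyperparameters are all assumptions made purely for illustration.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-ins for the database: RGB-D frames paired with target holograms.
# Real training would load the 4,000 rendered images and their holograms;
# these random tensors and small shapes are placeholders for illustration.
rgbd = torch.rand(100, 4, 64, 64)      # per-pixel color (3) + depth (1)
targets = torch.rand(100, 2, 64, 64)   # target hologram: real + imaginary
loader = DataLoader(TensorDataset(rgbd, targets), batch_size=8, shuffle=True)

# A tiny stand-in network (see the fuller sketch above)
model = nn.Sequential(
    nn.Conv2d(4, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 2, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for batch_rgbd, batch_target in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(batch_rgbd), batch_target)  # supervised pairing
    loss.backward()
    optimizer.step()
```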