Powerful AI Can Now Be Trained on a Single Computer
New machine learning training approach could help under-resourced academic labs catch up with big tech
The enormous computing resources required to train state-of-the-art artificial intelligence systems mean well-heeled tech firms are leaving academic teams in the dust. But a new approach could help balance the scales, allowing scientists to tackle cutting-edge AI problems on a single computer.
A 2018 report from OpenAI found the processing power used to train the most powerful AI is increasing at an incredibly fast pace, doubling every 3.4 months. One of the most data-hungry approaches is deep reinforcement learning, where AI learns through trial and error by iterating through millions of simulations. Impressive recent advances on videogames like StarCraft and Dota 2 have relied on servers packed with hundreds of CPUs and GPUs.
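To make that trial-and-error loop concrete, here is a minimal, self-contained sketch in Python. The toy "corridor" environment and tabular Q-learning agent are illustrative stand-ins only, not the systems used in the research described in this story; deep RL replaces the lookup table with a neural network and runs the simulations at vastly larger scale.

```python
# Illustrative only: a toy environment and tabular Q-learning agent showing
# the trial-and-error loop that reinforcement learning repeats many times.
import random

NUM_STATES = 10          # positions in a 1-D corridor; the goal is the last one
ACTIONS = (-1, +1)       # step left or step right
EPISODES = 2000
ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1

# Q-table: estimated future reward for each (state, action) pair
q = {(s, a): 0.0 for s in range(NUM_STATES) for a in ACTIONS}

for episode in range(EPISODES):
    state = 0
    done = False
    while not done:
        # Trial: mostly exploit what has been learned, sometimes explore at random
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])

        # The simulator advances one step and returns a reward signal
        next_state = min(max(state + action, 0), NUM_STATES - 1)
        done = next_state == NUM_STATES - 1
        reward = 1.0 if done else 0.0

        # Error correction: nudge the value estimate toward the observed outcome
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = next_state

# Show the action the agent now prefers in each non-goal state
print("Greedy action in each state:",
      [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(NUM_STATES - 1)])
```

Even this toy version needs thousands of simulated episodes before the right behavior emerges; agents learning 3D games from pixels need millions, which is where the racks of CPUs and GPUs come in.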
Specialized hardware such as Cerebras Systems’ Wafer Scale Engine promises to replace these racks of processors with a single large chip optimized for training AI. But with a price tag running into the millions, it’s not much solace for under-funded researchers.
Now a team from the University of Southern California and Intel Labs has created a way to train deep reinforcement learning (RL) algorithms on hardware commonly available in academic labs. In a paper presented at the 2020 International Conference on Machine Learning (ICML) this week, they describe how they used a single high-end workstation to train AI with state-of-the-art performance on the first-person shooter videogame Doom. They also tackled a suite of 30 diverse 3D challenges created by DeepMind, using a fraction of the normal computing power.
“Inventing ways to do deep RL on commodity hardware is a fantastic research goal,” says Peter Stone, a professor at the University of Texas at Austin who specializes in deep RL. As well as leaving smaller research groups behind, the computing resources normally required to carry out this kind of research have a significant carbon footprint, he adds. “Any progress towards democratizing RL and reducing the energy needs for doing research is a step in the right direction,” he says.
The inspiration for the project was a classic case of necessity being the mother of invention, says lead author Aleksei Petrenko, a graduate student at USC. As a summer internship at Intel came to an end, Petrenko lost access to the company’s supercomputing cluster, putting unfinished deep RL projects in jeopardy. So he and colleagues decided to find a way to continue the work on simpler systems.