Intel has done pretty well for itself by consistently figuring out ways of making CPUs faster and more efficient. But with the end of Moore’s Law lurking on the horizon, Intel has been exploring ways of extending computing with innovative new architectures at Intel Labs.

Quantum computing is one of these initiatives, and Intel Labs has been testing its own 49-qubit processors. Beyond that, Intel Labs is exploring neuromorphic computing (emulating the structure and, hopefully, some of the functionality of the human brain with artificial neural networks) as well as probabilistic computing, which is intended to help address the need to quantify uncertainty in artificial intelligence applications.

Rich Uhlig has been the director of Intel Labs only since December 2018, but he has been at Intel since 1996 (most recently as director of Systems and Software Research for Intel Labs), so he is well positioned to hit the ground running. We spoke with Uhlig about quantum, neuromorphic, and probabilistic computing, how these systems will help us manage AI, and what these technologies will make possible that should concern us at least a little bit.

IEEE Spectrum: According to Intel’s timeline of quantum computing, we’re currently in the “system phase.” What does that mean, and how will we transition to the commercial phase?

Rich Uhlig: At Intel, we’re focused on developing a commercially viable quantum computer, which will require more than the qubits themselves. We have successfully manufactured a 49-qubit superconducting chip, which allows us to begin integrating the quantum processing unit (the QPU) into a system where we can build all of the components required to make the qubits work in tandem to improve efficiency and scalability. Instead of focusing on the hype of qubit count, we are working to create a viable quantum system that will scale from 50 qubits to the millions of qubits that will be required for a commercial system.