Bo Ewald—Scientist, Pilot, Action Hero—Sits Down to Talk Quantum Computing With Whurley.
If you’ve never met Bo Ewald, he’s super impressive and an all-around nice guy. He’s President of D-Wave International, which makes him a perfect person to ask a few questions about quantum computing.
Here’s Bo’s bio:
Robert “Bo” Ewald leads D-Wave’s international business as President and is responsible for global customer operations for the company. Mr. Ewald has a long history with other leading technology organizations, government projects, and industry efforts. He has experience in large and startup businesses having been the CEO of visualization and HPC leader Silicon Graphics Inc., President of supercomputing leader Cray Research, President and CEO of Linux pioneer Linux Networx and Executive Chairman of Perceptive Pixel, Inc. He started his career at the Los Alamos National Laboratory where he led the Computing and Communications Division. He has served on the boards of directors of both public and private companies and has participated in numerous government and industry panels and committees. He was appointed to the President’s Information Technology Advisory Council by both the Clinton and Bush administrations.
I was lucky enough to get to ask Bo about the hurdles we’re facing, probabilistic algorithms, and D-Wave’s role in making quantum computing a reality.
Quantum computing is incredibly difficult to explain. Give me your elevator pitch.
Only if it is an incredibly long elevator ride, like the one to the top of the Burj Khalifa in Dubai! But the 50-floor version is that the universe is quantum, we just don’t experience it that way. In traditional digital computing, we’ve fought against quantum effects since they can cause current leakage, bit flips, and other things that disrupt the orderly, deterministic digital computer designs. In quantum computing, we try to harness the quantum effects of superposition, entanglement, coherence, quantum tunneling, and friends to do something computationally useful.
Like different architectures in digital computing (RISC, CISC, etc.) there are two major architectures currently being used/developed in quantum computing. They are the quantum annealing architecture (D-Wave and others) and a gate model architecture (IBM and others). In early work, the annealing architecture seems best suited for particular types of optimization, machine learning, sampling/Monte Carlo, and quantum material modeling since you can think of it as finding the lowest valley in an energy landscape. The early gate model architectures might be best applied to materials science and chemistry applications and enable the user to create a set of quantum gates that implement their algorithm.
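To make the "lowest valley in an energy landscape" intuition concrete, here is a toy *classical* simulated-annealing sketch. It is a classical cousin of quantum annealing, not D-Wave's hardware or API, and the energy function and every parameter are illustrative assumptions:

```python
import math
import random

def energy(x):
    # Toy "rugged landscape": a parabola with sinusoidal bumps,
    # so there are several valleys but one is the deepest.
    return (x - 2.0) ** 2 + 1.5 * math.sin(5.0 * x)

def anneal(steps=5000, temp=2.0, cooling=0.999, seed=None):
    rng = random.Random(seed)
    x = rng.uniform(-10, 10)                # random starting point
    best_x, best_e = x, energy(x)
    for _ in range(steps):
        candidate = x + rng.gauss(0, 0.5)   # small random move
        delta = energy(candidate) - energy(x)
        # Always accept downhill moves; accept uphill moves with
        # probability exp(-delta / temp), which lets the search
        # climb out of shallow local valleys while temp is high.
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = candidate
        if energy(x) < best_e:
            best_x, best_e = x, energy(x)
        temp *= cooling                     # gradually "freeze" the system
    return best_x, best_e

x, e = anneal(seed=42)
print(f"lowest valley found near x = {x:.2f}, energy = {e:.2f}")
```

The quantum version replaces the thermal uphill hops with quantum effects such as tunneling through barriers, but the framing of a problem as "find the lowest point of an energy landscape" is the same.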
What are the biggest engineering challenges in quantum computers?
Both of the current architectures (annealing and gate) are striving for more qubits. D-Wave has ~2000 qubits with plans for ~5000 in the next generation and some of the gate model architectures have ~5-15 with plans for ~50 in the next few months. However, the qubits have different characteristics and are used differently by the two architectures so the pure qubit count is not a great direct comparison.
To make the annealing architectures more useful, the next engineering challenge is to figure out how to dramatically improve the connectivity between qubits (while simultaneously improving precision, reducing noise, etc.). To increase the generality of the gate model systems, in addition to increasing the number of qubits and improving their characteristics, determining how many error-correcting qubits will be needed and adding them will be the next chasm to cross, I believe.
Will they replace classical computers? How does quantum computing compare to classical computing?
For most applications that I can imagine for the next few years, I don’t think quantum computing will replace classical computing, but will rather be used alongside classical computers. I think the history of GPUs is instructive. About 20 years ago, Nvidia introduced its first GPU, which was an accelerator targeted at high-end graphics and games. After a few years the architecture became more general purpose, took on parts of HPC applications, and now is also used in machine learning applications. Perhaps with one or two exceptions, quantum computing will follow a similar path, I believe. Today’s quantum “accelerators” will become more general purpose over time.
What will quantum computers not be suited for?
Following the accelerator analogy, I think that quantum computers, HPC machines, GPUs, and perhaps neuromorphic computers will all be used together, depending on the application. However, at least for a time, I think that numerically intensive applications like computational fluid dynamics, finite element analysis, etc. will probably remain the domain of HPC/GPU systems. But, if there are parts of an application that involve optimization or materials, perhaps a quantum system will handle that part of the calculation. I think the same thing will be demonstrated for some machine learning applications. Quantum computers may be best at helping to understand and train models, HPC/GPU systems best at production runs, and neuromorphic systems best at certain models.
Can quantum computing help address the Moore’s law ceiling?
Yes, in the same way that GPUs sort of leapfrogged the HPC version of Moore’s law, and they eventually converged to be combined with traditional CPUs in HPC systems, the same will probably be true for quantum computing. D-Wave has been able to introduce systems that double the number of qubits every two years or so; that doubling will sound roughly familiar to Moore’s law followers.
What trends do you see in high-performance computing that will interact with quantum computing?
Most of them, and I think your term “interact” is just right. As we discussed, I think today’s quantum systems are generally complementary to HPC systems. We don’t do numerically intensive computing well, but we may be able to help accelerate parts of a code. Sometimes we also think that quantum computers should be able to handle “big data” problems easily. Not yet. Because of their very nature, quantum systems “consider” a large set of possible solutions to a problem, but the volume of input data is quite small. So, traditional computers will handle the big data part and quantum systems will deal with a subset.
How do you get companies and developers to adopt this new technology? What are the building blocks for creating a new marketplace for quantum computing?
More than anything, we need two sets of smart people thinking about and working on quantum computers. One set thinking about how to apply quantum computers to real-world problems, or how to apply them to problems that we can’t even contemplate solving today. The second set working on software tools to enable subject matter experts to use them more easily. With the group of people working on the software environment/tools, I believe that there is a role for some early standards to help speed development of the environment. That will also help accelerate the development of applications so that people don’t have to program in the equivalent of machine or assembly language and can think about their real problem in a higher level way.
Fortran started that for scientific and engineering applications about 50 years ago, C/Unix and friends enabled systems development to be done more easily, etc. Linux enabled an entirely new set of people, and the open source community has made computing accessible to even more people. Quantum computing can and should do the same, and build a new environment and tools on the shoulders of classical computing (which has been developing for a long, long time by comparison).
What are the realities that quantum computers will have to overcome in the marketplace?
This is easy to say, a little harder to do. Quantum computers first have to show that they are relevant to some application domains—basically to be able to run some application or part of an application that is important. Then that application has to run faster, or handle bigger problems, or perhaps do something that you couldn’t do before. And finally, by comparison to existing applications, there are price/performance considerations.
It always comes back to the three questions:
- Can you run my application (or a part of it) or let me do something I couldn’t do another way?
- How fast?
- What does it cost?
While creating synthetic quantum benchmarks to explore the potential of a system is interesting, in the end, to be successful, quantum computers will have to run real applications or parts of them faster than an existing computer or enable new things to be done.
Once the hardware is in place what are the software challenges? How is D-Wave helping get developers on board to solve these software challenges in quantum computing?
First, I don’t think the hardware will be “in place” for a few years. I think the D-Wave architecture and the gate model architectures (and maybe others) will all still evolve. To me, it is sort of like it must have been in about 1955 in traditional computing, as a variety of architectures and technologies were being explored. And with the 1955 idea, there were no off-the-shelf applications, no cell phone apps, no graphical displays, no high-level languages, no math libraries, etc. We are a little further than that in quantum computing, but it still is early in the software world.
No matter the hardware architecture and the technology used to create the qubits, interconnects, etc., quantum computing will require people to think about their applications differently and how they formulate their problem. And we’ll need more and better software tools and an environment to make that easier. We’ve basically been moving up the stack, creating some prototype tools, building on what we’ve learned that works well and what doesn’t. Others who might not be so far along with their hardware maturity have taken steps to define higher level software architectures. And the users are starting to contribute software tools to the environment, many of which are open sourced.
When you add it all up, I think the software environment will mature relatively quickly given the number of different people working on it and many of the things that we learned in traditional computing about software and architectures will be adapted for quantum computing. Open sourcing can help accelerate all of this work. More smart people!
How can developers get involved with quantum computing today?
Both easily and with some difficulty! The “easily” part is that they can run on our systems in the field or with our simulators, as well as on the IBM Quantum Experience and similar systems/simulators for other architectures. The “difficulty” part is that it does take some work to create quantum applications for either major architecture. As we discussed before, one has to think about their application a little differently to be able to map it onto the machines.
What’s unique about probabilistic algorithms?
I’m not sure if it is the algorithms that are probabilistic, or the computers, or both! But, the result is that our machines are not deterministic like traditional computers. If you run a problem 100 times on a traditional computer, you should get the same answer all 100 times. On our machines, you will get a distribution of answers, and how those answers are concentrated depends on the “landscape” of your problem.
If your energy landscape is the equivalent of rugged, mountainous terrain with one low valley, and you run your problem 100 times, most of the answers will be in that low valley. However, it is likely that a few will be in the next lowest valley, a few in the next lowest, etc. But, if your energy landscape is more like the Bonneville Salt Flats, there are low “valleys” (low energy solutions actually) all over the place. You’ll get a wide distribution of answers.
So, the distribution and concentration of your answers is variable depending on the energy landscape and isn’t deterministic. The user (or her program on a host computer) will have to deal with the variation of the answers depending on the problem she is trying to solve. Probably.
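The distribution-of-answers behavior can be mimicked classically: run the same stochastic search 100 times on two made-up landscapes, one rugged with a dominant deep valley and one "salt flats" landscape of many equally low valleys, and count how many distinct answers come back. This is an illustrative classical analogue, not D-Wave's method; both energy functions and all parameters are assumptions:

```python
import math
import random
from collections import Counter

def rugged(x):
    # Rugged terrain: bumps on a steep bowl, one clearly deepest valley.
    return 2.0 * x * x + 3.0 * math.sin(4.0 * x)

def flat(x):
    # "Salt flats": many near-equal low-energy valleys everywhere.
    return math.sin(4.0 * x)

def one_run(energy, rng, steps=2000, temp=1.5, cooling=0.998):
    # A single stochastic search; returns where it "froze".
    x = rng.uniform(-5, 5)
    for _ in range(steps):
        cand = x + rng.gauss(0, 0.3)
        delta = energy(cand) - energy(x)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = cand
        temp *= cooling
    return round(x, 1)   # bucket answers into 0.1-wide bins

rng = random.Random(7)
for name, energy in [("rugged", rugged), ("flat", flat)]:
    answers = Counter(one_run(energy, rng) for _ in range(100))
    print(name, "-> distinct answer bins:", len(answers))
```

On the rugged landscape the 100 answers pile up in a handful of bins around the deepest valleys; on the flat landscape they spread across many bins, which is the non-deterministic behavior described above.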
What quantum computing projects are you most interested in?
Two types. First, practical problems that “regular” people can understand, to show that quantum computers are sort of approachable. Volkswagen’s project to use our systems to optimize the flow of taxis from downtown Beijing to the airport is an example that you could explain to your parents, or someone in the checkout line at the supermarket. On the other hand, I also can’t wait for the first earth-shattering (or maybe multiverse-shattering) applications that use quantum systems to do something important that has never been done before.
You’re open about what the D-Wave machines both are and are not capable of. Have you always been that way? Why?
I’m not sure about “always,” but for as long as I can remember I haven’t been a big surprise person. However, I did manage to learn that good surprises are usually better than bad surprises. And working on early computer graphics and other applications, then operating systems, and eventually running the Computing Division at Los Alamos, there were lots of surprises. I always appreciate knowing the technical facts or reality, since you would encounter or figure it out eventually anyway. I also try to treat people as I like to be treated, so I figure that people would want to know what something can do and can’t do, just like I would.