D-Wave quantum annealing compared to other algorithms, and the dirty-qubit approach versus cleaner qubits

Scott Aaronson discusses the recent Google paper on the D-Wave quantum computer.

In computer science, normally we care about asymptotic speedup: We care about, “What is your running time as a function of the size of the problem? Does it grow linearly? Does it grow quadratically?” The constant that’s in front — Does it take 5N steps? Does it take 10N steps? — we don’t care that much about. We just care that it’s linear in N.
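
To make that distinction concrete (an illustration, not part of the talk): 5N and 10N differ only by a constant factor, which asymptotic analysis deliberately ignores, whereas a genuinely different growth rate eventually swamps any constant:

$$5N = \Theta(N), \qquad 10N = \Theta(N), \qquad \frac{10N}{5N} = 2 \ \text{for all } N, \qquad \text{but} \qquad \frac{N^2}{cN} = \frac{N}{c} \to \infty \ \text{for every constant } c.$$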

In the Google paper, they discuss two classical algorithms that do match the asymptotic performance — and one of them beats the real-world performance — of the D-Wave machine. So besides simulated annealing, there are two more classical algorithms that are actors in this story. One of them is quantum Monte Carlo, which is actually a classical optimization method, but it’s one that’s inspired by quantum mechanics.
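
For readers who have not seen these methods, here is a minimal sketch of the simplest classical baseline, simulated annealing, on a toy Ising cost function. The instance, couplings, cooling schedule, and function names below are purely illustrative and are not the benchmark instances from the Google paper; roughly speaking, path-integral quantum Monte Carlo applies the same kind of Metropolis sampling to several coupled replicas of the spin system that stand in for quantum fluctuations.

```python
import math
import random

def ising_energy(h, J, s):
    """E(s) = sum_i h[i]*s[i] + sum_{(i,j)} J[(i,j)]*s[i]*s[j] over spins s[i] in {-1,+1}."""
    return (sum(hi * s[i] for i, hi in h.items())
            + sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()))

def simulated_annealing(h, J, n, sweeps=2000, t_hot=3.0, t_cold=0.05, seed=0):
    """Single-spin-flip Metropolis annealing with a geometric cooling schedule (illustrative choices)."""
    rng = random.Random(seed)
    s = [rng.choice((-1, 1)) for _ in range(n)]
    # Precompute each spin's neighbours so the local field is cheap to evaluate.
    nbrs = {i: [] for i in range(n)}
    for (i, j), Jij in J.items():
        nbrs[i].append((j, Jij))
        nbrs[j].append((i, Jij))
    for sweep in range(sweeps):
        # Temperature decays geometrically from t_hot to t_cold.
        t = t_hot * (t_cold / t_hot) ** (sweep / max(sweeps - 1, 1))
        for i in range(n):
            # Energy change if spin i is flipped: dE = -2 * s[i] * (h_i + sum_j J_ij * s[j]).
            field = h.get(i, 0.0) + sum(Jij * s[j] for j, Jij in nbrs[i])
            dE = -2.0 * s[i] * field
            # Metropolis rule: always accept downhill moves, sometimes accept uphill ones.
            if dE <= 0 or rng.random() < math.exp(-dE / t):
                s[i] = -s[i]
    return s, ising_energy(h, J, s)

# Tiny made-up instance: a 4-spin antiferromagnetic ring with weak local fields.
h = {0: 0.1, 1: 0.0, 2: -0.1, 3: 0.0}
J = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (0, 3): 1.0}
print(simulated_annealing(h, J, n=4))
```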

In this new Google paper, they say that even though quantum Monte Carlo has the same asymptotic performance, the constant is way, way better for the D-Wave machine. The constant is about 100 million times better.

There are two huge issues that I [Scott Aaronson] would have with that. The first issue is that the problem instances used for the comparison are basically instances of the problem of simulating the D-Wave machine itself. Something like $150 million went into designing this special-purpose hardware for the D-Wave machine and making it as fast as possible. So in some sense, it's no surprise that this special-purpose hardware could get a constant-factor speedup over a classical computer for the problem of simulating itself.

The qubits in their chip are organized in a particular graph topology (the Chimera graph). If you want to solve a practically important optimization problem, you need to map it somehow onto that topology. And there's always a loss when you do that mapping. It seems entirely possible that that loss would kill a constant-factor advantage.
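
One common form of that mapping is minor embedding, the approach D-Wave's own tools use; the expression below is only schematic. Each logical spin is represented by a chain of physical qubits locked together by a strong ferromagnetic coupling $F > 0$, so qubits and coupling strength get spent on the chains rather than on the problem itself:

$$E(s) \;=\; \sum_i h_i \, s_{a_i} \;+\; \sum_{(i,j)} J_{ij}\, s_{a_i} s_{b_j} \;-\; F \sum_i \sum_{(a,b)\,\in\,\mathrm{chain}(i)} s_a s_b .$$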

But the other point is that there’s yet another classical algorithm on the stage, which is Selby’s algorithm. It’s a local-search algorithm, but it’s one that is able to figure out that the qubits are organized into these clusters. What the Google paper finds is that Selby’s algorithm, which runs on a classical computer, totally outperforms the D-Wave machine on all the instances they tested.

The Selby algorithm code and results are on GitHub.

If you know that these eight qubits form a cluster, and you should be thinking of them as one giant variable, then you just find the best setting of that variable. There are only 256 — 2 to the 8th — cases to check. That you can do pretty quickly.
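
As a hedged illustration of that step (not Selby's actual implementation, which is on GitHub and considerably more sophisticated): treating an eight-qubit cluster as one variable just means scoring all 2^8 = 256 joint settings against the cluster's internal couplings and the fixed influence of the spins outside it. The function name and all numbers below are made up for the example.

```python
from itertools import product

def best_cluster_setting(h, J_internal, external_field):
    """Exhaustively check all 2**8 = 256 joint settings of an 8-qubit cluster.

    h[i]              -- local field on cluster qubit i
    J_internal[(i,j)] -- coupling between cluster qubits i and j
    external_field[i] -- effective field on qubit i from spins outside the cluster,
                         held fixed while this one cluster is updated
    """
    best_s, best_e = None, float("inf")
    for s in product((-1, 1), repeat=8):
        e = sum((h[i] + external_field[i]) * s[i] for i in range(8))
        e += sum(Jij * s[i] * s[j] for (i, j), Jij in J_internal.items())
        if e < best_e:
            best_s, best_e = s, e
    return best_s, best_e

# Illustrative cluster shaped like a Chimera K_{4,4} unit cell; the values are invented.
h = [0.0] * 8
J_internal = {(i, j): -1.0 for i in range(4) for j in range(4, 8)}
external_field = [0.2, -0.1, 0.0, 0.0, 0.3, 0.0, -0.2, 0.1]
print(best_cluster_setting(h, J_internal, external_field))
```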

If the clusters were 800 bits, then you wouldn't be able to do this. On the other hand, building 800 qubits that are all talking to each other is a super-hard engineering problem. And even if you did [build those qubit clusters], it's not at all clear that quantum annealing would be able to tunnel through a barrier involving that many qubits.
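
For scale, the exhaustive check that is trivial at eight qubits is hopeless at eight hundred:

$$2^{8} = 256, \qquad 2^{800} = \left(2^{8}\right)^{100} = 256^{100} \approx 6.7 \times 10^{240}.$$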

Quantum annealing does best when there’s a tall, thin potential barrier.
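
A rough way to see why, using a standard WKB-style heuristic rather than any calculation from the paper: the amplitude for tunneling through a barrier of height $\Delta$ and width $w$ falls off roughly as an exponential in $w\sqrt{\Delta}$, while classical thermal hopping over the same barrier is suppressed by the height alone,

$$A_{\text{tunnel}} \sim e^{-c\, w \sqrt{\Delta}}, \qquad P_{\text{thermal}} \sim e^{-\Delta / k_B T},$$

so tunneling pays heavily for width but comparatively little for height, which is why tall, thin barriers are the favorable case and wide ones are not.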

Will D-Wave's dirty-qubit approach win out over efforts to develop cleaner qubits? That remains to be determined.