Geordie Rose has recently discussed computational universality and the D-Wave quantum computers.
There’s another concept that has been recently introduced which is called a universal quantum computer. Such a machine is capable of simulating all quantum physics, which presumably includes as a subset all of classical physics.
D-Wave’s current processor architecture is classically universal (can do anything a classical computer can do), but not quantumly universal, although it’s really easy to make it so (just add a new coupling device) if some day it turns out there’s a good reason to do this.
If someone were to plop down a perfect universal quantum computer with a hundred trillion logical qubits on my desk right now, there’s only one useful thing we would know what to use it for — quantum annealing: faster and better optimization, and lightning-quick Boltzmann machines. (A Boltzmann machine is a type of stochastic recurrent neural network.)
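As a rough classical analogy for the optimization task quantum annealing targets (this is not how D-Wave's hardware works, just an illustration of the energy-minimization problem), here is a minimal simulated-annealing sketch on an Ising energy function. The function names and parameter choices are mine:

```python
import math
import random

def ising_energy(spins, J, h):
    """Ising energy: E = sum_ij J_ij * s_i * s_j + sum_i h_i * s_i."""
    e = sum(h[i] * s for i, s in enumerate(spins))
    for (i, j), coupling in J.items():
        e += coupling * spins[i] * spins[j]
    return e

def simulated_anneal(J, h, n, steps=5000, t_start=2.0, t_end=0.05):
    """Classical simulated annealing: propose single-spin flips and accept
    uphill moves with Boltzmann probability while temperature is lowered."""
    random.seed(0)  # fixed seed so the sketch is reproducible
    spins = [random.choice([-1, 1]) for _ in range(n)]
    energy = ising_energy(spins, J, h)
    for step in range(steps):
        # geometric cooling schedule from t_start down to t_end
        t = t_start * (t_end / t_start) ** (step / steps)
        i = random.randrange(n)
        spins[i] = -spins[i]
        new_energy = ising_energy(spins, J, h)
        if new_energy <= energy or random.random() < math.exp((energy - new_energy) / t):
            energy = new_energy      # accept the flip
        else:
            spins[i] = -spins[i]     # reject: undo the flip
    return spins, energy
```

On a 4-spin ferromagnetic chain (all couplings -1, no fields), this settles into an all-aligned ground state with energy -3; a quantum annealer attacks the same kind of energy landscape in hardware.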
D-Wave is on track to reach ten thousand qubits by about 2017.
A chart appears to show that, as qubits are added, the solving time stays at about 1 second.
So it will be best to feed future quantum computers hard problems whose classical solution times rapidly scale to over an hour, to years, or to outright impossibility. This holds even if quantum computers prove able to solve any problem that can be expressed and loaded into 8000 qubits or so faster than any classical computational system.
There are adiabatic quantum algorithms that are neuromorphic.
* Quantum systems will be useful for breaking down hard problems and providing the proven, solved answers as saved solutions for classical systems
* Once the 2000-10000 qubit systems prove massive speedup versus any supercomputer for certain classes of useful problems, then there will be a lot of sales and a lot more investment in quantum computers. This could mean, say, a few billion dollars to rapidly scale D-Wave's superconducting adiabatic system to a full wafer of qubits using more advanced lithography. This would still likely take a few years, and I think it would be a leap up to about a few million qubits.
There are other approaches to quantum computers using quantum dots which would likely have even greater scaling potential.
The number of qubits will still be a limiting factor in where quantum computers are used. Certain algorithms could have theoretical speedup but may not be useful until there are billions or trillions of qubits.
An example: an adiabatic version of Grover's algorithm could enable database searches in time proportional to the square root of N, where N would be both the number of qubits and the number of items being searched.
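The square-root speedup can be seen in a toy statevector simulation of the standard (circuit-model) Grover algorithm; this is an illustration of the scaling, not D-Wave's adiabatic variant, and the function name is mine:

```python
import math

def grover_search(n_items, marked, iterations=None):
    """Statevector simulation of Grover search over n_items basis states.
    Returns the probability of measuring the marked item."""
    if iterations is None:
        # optimal iteration count is about (pi/4) * sqrt(N)
        iterations = int(round(math.pi / 4 * math.sqrt(n_items)))
    amp = [1 / math.sqrt(n_items)] * n_items   # uniform superposition
    for _ in range(iterations):
        amp[marked] = -amp[marked]             # oracle: flip the marked phase
        mean = sum(amp) / n_items              # diffusion step:
        amp = [2 * mean - a for a in amp]      # inversion about the mean
    return amp[marked] ** 2
```

For N = 256 items, about 13 Grover iterations suffice to find the marked item with high probability, versus roughly 128 lookups on average for classical linear search.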
There are ways to get around limitations in qubits by mathematically breaking a larger problem into sub-problems that need fewer qubits and solving the sub-problems serially.
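A minimal sketch of that decompose-and-solve-serially idea, under the simplifying assumption that the large problem splits into independent QUBO blocks (real decomposition methods must also handle couplings between blocks; all names here are illustrative):

```python
from itertools import product

def solve_block(Q):
    """Brute-force a small QUBO block: minimize sum Q[i,j] * x_i * x_j
    over binary vectors x. Stands in for a size-limited quantum solver."""
    n = max(max(i, j) for i, j in Q) + 1
    best_x, best_e = None, float("inf")
    for x in product([0, 1], repeat=n):
        e = sum(c * x[i] * x[j] for (i, j), c in Q.items())
        if e < best_e:
            best_x, best_e = list(x), e
    return best_x, best_e

def solve_serially(blocks):
    """Feed each independent sub-QUBO to the size-limited solver in turn,
    then concatenate the sub-solutions into one answer."""
    solution, energy = [], 0.0
    for Q in blocks:
        x, e = solve_block(Q)
        solution += x
        energy += e
    return solution, energy
```

Each block only has to fit the solver's qubit budget, so a problem far larger than the hardware can still be worked through piece by piece.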