New Limited Forms of Quantum Computation Could Beat Classical Systems at Certain Calculations

Four separate teams have taken a step toward achieving such a “quantum speed-up” by demonstrating a simpler, more limited form of quantum computing that, if it can be improved, might soon give classical computers a run for their money. But don’t get your hopes up for a full-fledged quantum computer. The gizmos may not be good for much beyond one particular calculation.

This more limited type of quantum computation might be developed much sooner than a universal machine. All four experiments use photons, quantum particles of light, that run through a maze of crisscrossing optical channels. At the intersections, the photons can change paths with certain probabilities. In each experiment, three photons enter and exit a circuit with either five or six ports. The task is to calculate the probabilities for the photons to emerge from the various combinations of output ports.
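To make the task concrete, here is a minimal sketch (mine, not the researchers’ code) of the brute-force classical version of that calculation: for a small circuit described by a unitary matrix U, the probability of each collision-free output pattern is the squared magnitude of the permanent of the submatrix of U selected by the input and output ports. The Haar-random unitary below is just a stand-in for a real optical circuit, and the port choices are arbitrary.

```python
import itertools

import numpy as np
from scipy.stats import unitary_group  # Haar-random unitary as a stand-in circuit


def permanent(a):
    """Permanent of a small square matrix, summed over all permutations."""
    n = a.shape[0]
    return sum(np.prod([a[i, p[i]] for i in range(n)])
               for p in itertools.permutations(range(n)))


def output_probability(U, inputs, outputs):
    """Probability that photons entering 'inputs' leave through 'outputs'
    (collision-free patterns: at most one photon per listed port)."""
    sub = U[np.ix_(outputs, inputs)]
    return abs(permanent(sub)) ** 2


m, n = 6, 3                                  # 6 channels, 3 photons, as in the experiments
U = unitary_group.rvs(m, random_state=0)     # stand-in for a real optical circuit
inputs = (0, 1, 2)
probs = {out: output_probability(U, inputs, out)
         for out in itertools.combinations(range(m), n)}
for out, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(out, round(p, 4))
# The total is below 1 because outcomes with two or more photons in one port are omitted.
print("total (collision-free only):", round(sum(probs.values()), 4))
```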

“The question is, does this give you a first step to doing a hard calculation quantum mechanically, and it looks like it might,” says Scott Aaronson, a theoretical computer scientist at the Massachusetts Institute of Technology (MIT) in Cambridge and an author on one of the papers.

The Computational Complexity of Linear Optics [Theory of Computing journal]

The current experiments use such a small number of photons that it would take a standard laptop only a fraction of a second to make the same calculation; the experiments themselves can still take hours. But if the work can be scaled up to about 25 photons and 400 channels, then the classical computer should start to fall behind the experiment, estimates Philip Walther, a physicist at the University of Vienna and an author of one of the papers. “In 10 years or so you may be able to use existing technology and resources to outperform a conventional computer,” he says.
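A rough back-of-the-envelope sketch (my own arithmetic, not Walther’s estimate) of why numbers like 25 photons come up: computing a single output probability exactly via Ryser’s formula takes on the order of n * 2^n operations, and the number of collision-free output patterns grows combinatorially with the number of channels.

```python
from math import comb


def ryser_ops(n):
    """Rough operation count for one n x n permanent via Ryser's formula."""
    return n * 2 ** n


def collision_free_patterns(m, n):
    """Output patterns with at most one photon per mode: choose n of m ports."""
    return comb(m, n)


for n, m in [(3, 6), (25, 400)]:
    print(f"n={n:2d} photons, m={m:3d} channels: "
          f"~{ryser_ops(n):.1e} ops per output probability, "
          f"{collision_free_patterns(m, n):.1e} collision-free patterns")
```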

However, it’s not clear that such an effort will work, says John Preskill, a theorist at the California Institute of Technology in Pasadena. A bigger optical circuit would be more susceptible to effects such as the absorption of photons within the circuit and optical noise that could distort the results, Preskill notes. Ironically, accounting for those imperfections could make modeling the circuits easier, not harder, and allow the computer to keep up, Preskill says.
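One way to see the scaling problem Preskill is pointing to (a back-of-the-envelope illustration with a made-up per-photon transmission figure, not a number from any of the papers): if each photon independently survives the circuit with probability eta, the chance that all n photons reach the detectors falls off as eta^n, so event rates that are merely slow at 3 photons become vanishingly small at 25.

```python
# Illustrative only: lump coupling, propagation, and detection losses into a single
# per-photon survival probability eta (a made-up value, not from the experiments).
eta = 0.5
for n in (3, 10, 25):
    print(f"n={n:2d}: probability that all photons survive ~ {eta ** n:.1e}")
```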

The Computational Complexity of Linear Optics Abstract: We give new evidence that quantum computers—moreover, rudimentary quantum computers built entirely out of linear-optical elements—cannot be efficiently simulated by classical computers. In particular, we define a model of computation in which identical photons are generated, sent through a linear-optical network, then nonadaptively measured to count the number of photons in each mode. This model is not known or believed to be universal for quantum computation, and indeed, we discuss the prospects for realizing the model using current technology. On the other hand, we prove that the model is able to solve sampling problems and search problems that are classically intractable under plausible assumptions. Our first result says that, if there exists a polynomial-time classical algorithm that samples from the same probability distribution as a linear-optical network, then P^#P = BPP^NP, and hence the polynomial hierarchy collapses to the third level. Unfortunately, this result assumes an extremely accurate simulation. Our main result suggests that even an approximate or noisy classical simulation would already imply a collapse of the polynomial hierarchy. For this, we need two unproven conjectures: the Permanent-of-Gaussians Conjecture, which says that it is #P-hard to approximate the permanent of a matrix A of independent N(0,1) Gaussian entries, with high probability over A; and the Permanent Anti-Concentration Conjecture, which says that |Per(A)| ≥ √(n!)/poly(n) with high probability over A. We present evidence for these conjectures, both of which seem interesting even apart from our application. This paper does not assume knowledge of quantum optics. Indeed, part of its goal is to develop the beautiful theory of noninteracting bosons underlying our model, and its connection to the permanent function, in a self-contained way accessible to theoretical computer scientists.
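For readers who want to see the permanent connection in code, here is a sketch of Ryser’s inclusion-exclusion formula, the standard exponential-time way to compute a permanent exactly, applied to small matrices of independent complex Gaussian entries of the kind the two conjectures concern. The matrix size and the normalization of the Gaussian entries are illustrative choices, not taken from the paper.

```python
import itertools
import math

import numpy as np


def permanent_ryser(a):
    """Permanent via Ryser's inclusion-exclusion formula (exponential in n)."""
    n = a.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in itertools.combinations(range(n), r):
            row_sums = a[:, cols].sum(axis=1)
            total += (-1) ** r * np.prod(row_sums)
    return (-1) ** n * total


# Draw small matrices of i.i.d. complex Gaussian entries, the objects the conjectures
# are about, and compare |Per(A)| with the sqrt(n!) scale that the anti-concentration
# conjecture refers to. The size n = 6 is purely illustrative.
rng = np.random.default_rng(0)
n = 6
scale = math.sqrt(math.factorial(n))
for _ in range(3):
    A = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    print(f"|Per(A)| = {abs(permanent_ryser(A)):8.2f}   sqrt(n!) = {scale:.2f}")
```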

Photonic Boson Sampling in a Tunable Circuit


Quantum computers are unnecessary for exponentially efficient computation or simulation if the Extended Church-Turing thesis is correct. The thesis would be strongly contradicted by physical devices that efficiently perform tasks believed to be intractable for classical computers. Such a task is boson sampling: sampling the output distributions of n bosons scattered by some linear-optical unitary process. Here, we test the central premise of boson sampling, experimentally verifying that 3-photon scattering amplitudes are given by the permanents of submatrices generated from a unitary describing a 6-mode integrated optical circuit. We find the protocol to be robust, working even with the unavoidable effects of photon loss, non-ideal sources, and imperfect detection. Scaling this to large numbers of photons will be a much simpler task than building a universal quantum computer.

Experimental Boson Sampling

Arxiv – Experimental Boson Sampling

Universal quantum computers promise a dramatic speed-up over classical computers but a full-size realization remains challenging. However, intermediate quantum computational models have been proposed that are not universal, but can solve problems that are strongly believed to be classically hard. Aaronson and Arkhipov have shown that interference of single photons in random optical networks can solve the hard problem of sampling the bosonic output distribution, which is directly connected to computing matrix permanents. Remarkably, this computation does not require measurement-based interactions or adaptive feed-forward techniques. Here we demonstrate this model of computation using high-quality laser-written integrated quantum networks that were designed to implement random unitary matrix transformations. We experimentally characterize the integrated devices using an in-situ reconstruction method and observe three-photon interference that leads to the boson-sampling output distribution. Our results set a benchmark for quantum computers that hold the potential of outperforming conventional ones using only a few dozen photons and linear-optical elements.
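Since the task in these experiments is literally sampling, here is a brute-force classical sampler (a sketch under my own choices of mode and photon numbers, not the authors’ method): enumerate the collision-free output patterns of a small random network, weight each by the squared magnitude of the corresponding permanent, and draw from the resulting distribution. At three photons this is trivial for a laptop; at a few dozen photons both the enumeration and the permanents blow up.

```python
import itertools

import numpy as np
from scipy.stats import unitary_group


def permanent(a):
    """Permanent of a small square matrix, summed over all permutations."""
    n = a.shape[0]
    return sum(np.prod([a[i, p[i]] for i in range(n)])
               for p in itertools.permutations(range(n)))


m, n = 5, 3                                   # 5-mode circuit, 3 photons (illustrative)
U = unitary_group.rvs(m, random_state=1)      # stand-in for the fabricated chip
inputs = (0, 1, 2)
patterns = list(itertools.combinations(range(m), n))
weights = np.array([abs(permanent(U[np.ix_(out, inputs)])) ** 2 for out in patterns])
probs = weights / weights.sum()               # restrict to collision-free events

rng = np.random.default_rng(0)
samples = rng.choice(len(patterns), size=10, p=probs)
print([patterns[i] for i in samples])         # ten simulated detection events
```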

Experimental boson sampling in arbitrary integrated photonic circuits


Photons naturally solve the BosonSampling problem: sample the outputs of a multiphoton experiment in a linear-optical interferometer. This is strongly believed to be hard to do on a classical computer, and motivates the development of technologies that enable precise control of multi-photon interference in large interferometers. Here we report multi-photon experiments in a 5-mode integrated interferometer. We use novel three-dimensional manufacturing techniques to achieve simultaneous control of 25 independent parameters that describe an arbitrary interferometer. We characterize the chip using one- and two-photon experiments, and confirm the quantum mechanical predictions for three-photon interference. Scaled-up versions of this setup are the most promising way to demonstrate the computational capability of quantum systems, and may have applications in high-precision measurements and quantum communication.
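The “25 independent parameters” is exactly the dimension of the unitary group U(5): an m-mode linear-optical circuit implements an m-by-m unitary, which takes m² real numbers to specify. The sketch below (my own parametrization, not the one used on the chip) packs 25 real numbers into a Hermitian matrix and exponentiates it to produce an arbitrary 5-mode unitary.

```python
import numpy as np
from scipy.linalg import expm

m = 5
rng = np.random.default_rng(0)
params = rng.standard_normal(m * m)               # 25 real parameters for a 5-mode circuit

# Pack them into a Hermitian matrix: m real diagonal entries plus
# m*(m-1)/2 complex upper-triangle entries (two real numbers each).
H = np.zeros((m, m), dtype=complex)
H[np.diag_indices(m)] = params[:m]
k = m * (m - 1) // 2
H[np.triu_indices(m, k=1)] = params[m:m + k] + 1j * params[m + k:]
H = H + np.conj(np.triu(H, k=1)).T                # mirror the upper triangle: H is Hermitian

U = expm(1j * H)                                  # an arbitrary 5 x 5 unitary
print(np.allclose(U @ U.conj().T, np.eye(m)))     # True: the matrix is unitary
```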
