Google Using D-Wave Systems Quantum Computer as a Binary Classifier of Images

Google is researching quantum computing algorithms, using D-Wave Systems quantum computers to train a binary classifier of images.

At the Neural Information Processing Systems conference (NIPS 2009), we show the progress we have made. We demonstrate a detector that has learned to spot cars by looking at example pictures. It was trained with adiabatic quantum optimization using a D-Wave C4 Chimera chip. There are still many open questions, but in our experiments we observed that this detector performs better than those we had trained using classical solvers running on the computers we have in our data centers today. Besides progress in engineering synthetic intelligence, we hope that improved mastery of quantum computing will also increase our appreciation for the structure of reality as described by the laws of quantum physics.

A new type of machine, a so-called quantum computer, can help here. Quantum computers take advantage of the laws of quantum physics to provide new computational capabilities. While quantum mechanics has been foundational to the theories of physics for about a hundred years, the picture of reality it paints remains enigmatic. This is largely because at the scale of our everyday experience quantum effects are vanishingly small and usually cannot be observed directly. Consequently, quantum computers astonish us with their abilities. Let’s take unstructured search as an example. Assume I hide a ball in a cabinet with a million drawers. How many drawers do you have to open to find the ball? Sometimes you may get lucky and find the ball in the first few drawers, but at other times you have to inspect almost all of them. So on average it will take you 500,000 peeks to find the ball. Now a quantum computer can perform such a search while looking into only about 1,000 drawers. This mind-boggling feat is made possible by Grover’s algorithm.
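
The speedup behind this claim is the square-root scaling of Grover search, which is easy to check numerically. The sketch below (plain Python, my own illustration rather than anything from the original post) compares the classical average of about N/2 inspections with Grover’s roughly (π/4)·√N quantum queries for a million drawers.

```python
import math

N = 1_000_000  # number of drawers

# Classical unstructured search: on average, about half the drawers are inspected.
classical_avg = N // 2                             # 500,000

# Grover's algorithm: roughly (pi/4) * sqrt(N) quantum queries suffice.
grover_queries = math.ceil(math.pi / 4 * math.sqrt(N))   # ~786, i.e. on the order of 1,000

print(f"classical average: {classical_avg}, Grover: ~{grover_queries}")
```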

Over the past three years, a team at Google has studied how problems such as recognizing an object in an image, or learning to make an optimal decision based on example data, can be made amenable to solution by quantum algorithms. The algorithms we employ are the quantum adiabatic algorithms discovered by Edward Farhi and collaborators at MIT. These algorithms promise to find higher-quality solutions for optimization problems than are obtainable with classical solvers.

Training a Large Scale Classifier with the Quantum Adiabatic Algorithm (14-page PDF)

In a previous publication we proposed discrete global optimization as a method to train a strong binary classifier constructed as a thresholded sum over weak classifiers. Our motivation was to cast the training of a classifier into a format amenable to solution by the quantum adiabatic algorithm. Applying adiabatic quantum computing (AQC) promises to yield solutions that are superior to those which can be achieved with classical heuristic solvers. Interestingly, we found that by using heuristic solvers to obtain approximate solutions we could already gain an advantage over the standard method AdaBoost. In this communication we generalize the baseline method to large-scale classifier training. By large scale we mean that either the cardinality of the dictionary of candidate weak classifiers or the number of weak learners used in the strong classifier exceeds the number of variables that can be handled effectively in a single global optimization. For such situations we propose an iterative and piecewise approach in which a subset of weak classifiers is selected in each iteration via global optimization. The strong classifier is then constructed by concatenating the subsets of weak classifiers. We show in numerical studies that the generalized method again successfully competes with AdaBoost. We also provide theoretical arguments as to why the proposed optimization method, which not only minimizes the empirical loss but also adds L0-norm regularization, is superior to versions of boosting that only minimize the empirical loss. By conducting a quantum Monte Carlo simulation we gather evidence that the quantum adiabatic algorithm is able to handle a generic training problem efficiently.
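
To make "training as discrete global optimization" concrete, here is a minimal sketch of how selecting binary weights w_i ∈ {0, 1} over weak classifiers h_i can be written as a QUBO. It assumes a quadratic loss and a uniform per-classifier penalty λ (a simplification for illustration, not code from the paper): minimizing Σ_s (Σ_i w_i h_i(x_s) − y_s)² + λ Σ_i w_i expands into linear and pairwise terms in the binary variables.

```python
import numpy as np

def build_qubo(H, y, lam):
    """Build a QUBO for weak-classifier selection.

    H   : (S, N) array with H[s, i] = h_i(x_s) in {-1, +1}
    y   : (S,) array of labels in {-1, +1}
    lam : sparsity (L0) penalty per selected weak classifier

    Minimizing  sum_s (sum_i w_i H[s, i] - y[s])**2 + lam * sum_i w_i
    over w in {0, 1}^N expands to  w^T Q w + q^T w  (constant term dropped).
    """
    H = np.asarray(H, dtype=float)
    y = np.asarray(y, dtype=float)

    # Pairwise couplings: correlations between weak-classifier outputs.
    Q = H.T @ H
    diag = np.diag(Q).copy()        # sum_s h_i(x_s)^2 terms; moved to the linear part
    np.fill_diagonal(Q, 0.0)

    # Linear terms: self-correlation (w_i^2 == w_i for binary w_i),
    # correlation with the labels, and the sparsity penalty.
    q = diag - 2.0 * (H.T @ y) + lam
    return Q, q

def qubo_energy(w, Q, q):
    """Value of  w^T Q w + q^T w  for a candidate binary weight vector w."""
    w = np.asarray(w, dtype=float)
    return float(w @ Q @ w + q @ w)
```

A classical heuristic solver or the quantum annealer then searches for the minimizing w; the iterative, piecewise variant described in the abstract repeats this selection over manageable subsets of candidate weak classifiers and concatenates the selected subsets into the strong classifier.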

NIPS 2009 Demonstration: Binary Classification using Hardware Implementation of Quantum Annealing (19-page PDF)

Previous work [NDRM08, NDRM09] has sought the development of binary classifiers that exploit the ability to better solve certain discrete optimization problems with quantum annealing. The resultant training algorithm was shown to offer benefits over competing binary classifiers even when the discrete optimization problems were solved with software heuristics. In this progress update we provide first results on training using a physical implementation of quantum annealing for black-box optimization of Ising objectives. We successfully build a classifier for the detection of cars in digital images using quantum annealing in hardware. We describe the learning algorithm and motivate the particular regularization we employ. We provide results on the efficacy of hardware-realized quantum annealing, and compare the final classifier to software-trained variants and to a highly tuned version of AdaBoost.
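
The hardware minimizes Ising objectives over spins s_i ∈ {−1, +1}, while the training objective is most naturally written over binary weights w_i ∈ {0, 1}. The two are related by the standard substitution w_i = (1 + s_i)/2; the small sketch below shows that conversion (my own illustration, assuming the QUBO form w^T Q w + q^T w with linear terms kept in q, as in the earlier sketch).

```python
import numpy as np

def qubo_to_ising(Q, q):
    """Convert a QUBO  w^T Q w + q^T w  (w_i in {0, 1}) into an Ising objective
    sum_{ij} J_ij s_i s_j + sum_i h_i s_i + offset  with spins s_i in {-1, +1},
    via w_i = (1 + s_i) / 2.  Assumes Q has zero diagonal (linear terms in q).
    """
    Q = np.asarray(Q, dtype=float)
    q = np.asarray(q, dtype=float)

    J = Q / 4.0                                            # spin-spin couplings
    h = q / 2.0 + (Q.sum(axis=0) + Q.sum(axis=1)) / 4.0    # local fields
    offset = Q.sum() / 4.0 + q.sum() / 2.0                 # constant shift, irrelevant to argmin
    return J, h, offset
```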

We test QBoost by developing a detector for cars in digital images. The training and test sets consist of 20,000 images, with roughly half the images in each set containing cars and the other half containing city streets and buildings without cars. The images containing cars are human-labeled ground-truth data with tight bounding boxes drawn around each car and grouped by position (e.g. front, back, side, etc.). For demonstration purposes, we trained a single detector channel using only the side-view ground-truth images. The actual data seen by the training system is obtained by randomly sampling subregions of the input training and test data to obtain 100,000 patches for each data set. Before presenting results on the performance of the strong classifier, we characterize the optimization performance of the hardware.
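
For illustration only, randomly sampling fixed-size subregions ("patches") from a set of images could look like the sketch below; the patch size, counts, and helper names are placeholders, not details from the demonstration.

```python
import numpy as np

def sample_patches(images, num_patches, patch_size, seed=None):
    """Randomly crop square patches from a list of 2-D (grayscale) image arrays."""
    rng = np.random.default_rng(seed)
    patches = []
    for _ in range(num_patches):
        img = images[rng.integers(len(images))]       # pick a random source image
        h, w = img.shape[:2]
        top = rng.integers(0, h - patch_size + 1)     # random crop origin
        left = rng.integers(0, w - patch_size + 1)
        patches.append(img[top:top + patch_size, left:left + patch_size])
    return np.stack(patches)

# Example: 100 random 32x32 patches drawn from ten synthetic 128x128 images.
images = [np.random.rand(128, 128) for _ in range(10)]
patches = sample_patches(images, num_patches=100, patch_size=32, seed=0)
```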

Previous experiments on other data sets, using software heuristics for QUBO solving, have shown that performance beyond that of AdaBoost can typically be obtained. Presumably the improvements are due to the explicit regularization that QBoost employs. We would hope that QBoost can be made to outperform even the highly tuned boosting benchmark we have employed here.

We mention that the experiments presented here were not designed to test the quantumness of the hardware. Results of such tests will be reported elsewhere.