Google Quantum A.I. Lab Update on the D-Wave 2 Quantum Computer

Over the last few days, much has been made of Google’s reported results from benchmarking problems run on the D-Wave 2 quantum computer. What is perplexing about many of these articles is their emotional tone. For some reason, it appears that some people have an ax to grind regarding D-Wave, resulting in biased articles.

Today I read Google’s explanation of their tests, and the results are far more nuanced than reported elsewhere. In short, some problems ran faster on the D-Wave 2 and others slower compared with highly optimized algorithms on classical computers. Google has an enormous number of test results that have not yet been completely analyzed. The problems they are running are artificial benchmarks, not messy real-world problems. Google expects the D-Wave 2 to improve even before the number of qubits is expanded. Below is an excerpt from “Where do we stand on benchmarking the D-Wave 2?”. I encourage interested readers to read the entire article.

A portfolio of custom solvers designed to beat the hardware on its own turf is competitive

So what do we get if we pit the hardware against these solvers designed to compete with the D-Wave hardware on its own turf? The following pattern emerges: for each classical solver, there are problems on which it wins or at least achieves similar performance. But the converse is also true: for each classical solver, there are problems on which the hardware does much better.

For example, if you use random problems as a benchmark, the fast simulated annealers take about the same time as the hardware. See Figure 2 in the slideshow.
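
To make the comparison concrete, here is a minimal sketch of classical simulated annealing on a random Ising instance, in the spirit of the solvers used as baselines. This is my toy illustration, not Google’s code: their annealers are heavily optimized, and the real benchmark instances live on the chip’s sparse connectivity graph rather than on the small dense instance generated here.

```python
import math
import random

def random_ising(n, seed=0):
    """Random +/-1 couplings on all pairs of n spins (illustrative only)."""
    rng = random.Random(seed)
    return {(i, j): rng.choice([-1, 1]) for i in range(n) for j in range(i + 1, n)}

def energy(spins, J):
    """Ising energy E = sum over pairs of J_ij * s_i * s_j."""
    return sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())

def simulated_annealing(J, n, sweeps=1000, t_hot=3.0, t_cold=0.05, seed=1):
    rng = random.Random(seed)
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    best, best_e = list(spins), energy(spins, J)
    for sweep in range(sweeps):
        # Linear cooling schedule from t_hot down to t_cold.
        t = t_hot + (t_cold - t_hot) * sweep / (sweeps - 1)
        for i in range(n):
            # Local field on spin i, and the energy change if it is flipped.
            local = sum(J.get((min(i, j), max(i, j)), 0) * spins[j]
                        for j in range(n) if j != i)
            delta = -2 * spins[i] * local
            # Metropolis rule: always accept downhill moves, sometimes uphill.
            if delta <= 0 or rng.random() < math.exp(-delta / t):
                spins[i] = -spins[i]
        e = energy(spins, J)
        if e < best_e:
            best, best_e = list(spins), e
    return best, best_e

J = random_ising(32)
print("best energy found:", simulated_annealing(J, 32)[1])
```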

But importantly, if you move to problems with structure, then the hardware does much better. See Figure 3. This example is intriguing from a physics perspective, since it suggests co-tunneling is helping the hardware figure out that the spins in each unit cell have to be flipped as a block to reach a lower energy state.
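
A toy energy calculation (my own construction, not one of the actual benchmark instances) shows why such block moves matter: with four spins ferromagnetically coupled inside a cell plus a small uniform field, every single-spin flip from the all-up state goes uphill, yet flipping the whole cell as a block reaches a lower energy state.

```python
from itertools import combinations

J = -1.0   # ferromagnetic coupling between every pair in the "cell"
h = 0.5    # small uniform field favoring spin -1

def energy(spins):
    bonds = sum(J * spins[i] * spins[j]
                for i, j in combinations(range(len(spins)), 2))
    field = sum(h * s for s in spins)
    return bonds + field

all_up   = [1, 1, 1, 1]
one_flip = [-1, 1, 1, 1]
all_down = [-1, -1, -1, -1]

print(energy(all_up))    # -4.0: a local minimum
print(energy(one_flip))  # +1.0: any single flip goes uphill
print(energy(all_down))  # -8.0: the global minimum, reachable as a block flip
```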

But if we form a portfolio of the classical solvers and keep the best solution across all of them, then this portfolio is still competitive with the current version of the hardware. Again, a good example is the structured problem in Figure 3 in the slideshow. It slows down the annealers, but Alex Selby’s code has no problem with it and obtains the solution about as fast as the hardware does.[4]
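
The portfolio idea itself is simple. The sketch below assumes a common solve(problem) interface returning an (energy, assignment) pair; the names are placeholders of mine, not Google’s actual framework, and in practice the solvers would run in parallel under a shared time budget.

```python
def portfolio_solve(problem, solvers):
    """Run every solver on the same instance and keep the best result found."""
    results = [solver(problem) for solver in solvers]
    return min(results, key=lambda r: r[0])  # lowest energy wins

# Usage sketch: each entry wraps one classical solver behind the same interface.
# solvers = [simulated_annealing_solve, quantum_mc_solve, selby_style_solve]
# best_energy, best_assignment = portfolio_solve(instance, solvers)
```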

Sparse connectivity is a major limitation

A principal reason the portfolio solver is still competitive right now is actually rather mundane — the qubits in the current chip are still only sparsely connected. As the connectivity in future versions of quantum annealing processors gets denser, approaches such as Alex Selby’s will be much less effective.
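
For scale, assuming the Chimera layout commonly described for this generation of chip (an 8-by-8 grid of eight-qubit K4,4 unit cells), the arithmetic below shows just how sparse the connectivity is.

```python
# Degree count under the assumed Chimera layout: each qubit couples to the
# 4 qubits on the other side of its K4,4 cell, plus at most one partner in
# each of two neighboring cells.
n_qubits = 8 * 8 * 8   # 8x8 grid of 8-qubit unit cells = 512 qubits
max_degree = 4 + 2     # in-cell couplers + inter-cell couplers
print(f"{max_degree} couplers out of {n_qubits - 1} possible neighbors")
```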

One indication that sparse connectivity is a culprit also comes from well-understood examples such as the “Hamming weight function with a barrier” problem — quantum annealing tackles it easily but classical annealing fails.[5] But we haven’t been able to implement such examples as benchmark problems yet because of the sparse connectivity.
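
For readers unfamiliar with that example, one common formulation in the quantum annealing literature makes the cost a function of the bit string’s Hamming weight alone, with a thin spike partway down the slope; the specific constants below are my illustrative choices. A single-bit-flip descent, a stand-in for classical annealing at low temperature, stalls just above the spike, while a quantum annealer can tunnel through the thin barrier.

```python
n = 64          # number of bits
spike_at = 16   # barrier location (illustrative)
spike_height = n

def cost(w):
    """Cost as a function of Hamming weight w; the optimum is at w = 0."""
    return spike_height if w == spike_at else w

# Greedy descent by single-bit flips, starting from the all-ones string:
w = n
while w > 0 and cost(w - 1) < cost(w):
    w -= 1
print("greedy descent stalls at Hamming weight", w)  # 17, not the optimum 0
```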

There’s a list of other hardware aspects still limiting performance that future iterations will need to improve — reduced control errors, longer coherence times, error correction, richer non-stoquastic couplings between qubits, etc.

A big data approach may lead to new conclusions

So will we have to wait for the next generation chip with higher connectivity before we can hope to see the hardware outperform the portfolio solver? Until very recently we thought so. But remember that these latest benchmarking results were obtained from relatively small datasets — just 1000 instances in the ones that got recent attention.

It’s easy to draw premature conclusions from such small sets, as there are not enough data points from the subsets of problem instances that might indicate a speedup. Moreover, as several groups independently discovered, such random problems tend to be too easy and don’t challenge the quantum hardware or the classical solvers.[6]

Ever since the D-Wave 2 machine became operational at NASA Ames, the head of our benchmarking efforts, Sergio Boixo, has made sure we use every second of machine time to take data from running optimization problems. Simultaneously, we have given the same problems to a portfolio of the best classical solvers we’re aware of. We now have data for 400,000 problem instances. This is the largest set collected to date, and it keeps growing.

Eyeballing this treasure trove of data, we’re now trying to identify a class of problems for which the current quantum hardware might outperform all known classical solvers. But it will take us a bit of time to publish firm conclusions, because, as Rønnow et al.’s recent work shows, you have to carefully exclude a number of factors that can mask or fake a speedup.
