
An (IBM Eagle)-Eyed View of Quantum Hype, Tensor Networks, and Corporate Science

As quantum computing enters an era of increasing attention, funding, and growth, it is critical to understand that quantum hype can arise not only from overstating a quantum result but from underestimating a classical method for the same task. It can be challenging to balance quantum-classical comparisons well: in fact, one of the most prestigious scientific journals just published an article from IBM Quantum containing exactly this kind of misleading claim.


The paper has a fairly straightforward argument: the authors computed a problem on one of IBM's quantum devices, specifically an ‘Eagle’ processor, and verified that their answers matched the expected result from standard methods of solving that problem. Then they considered a more complicated version of the problem - so complicated that there was no classical calculation to compare to the quantum result. That they were able to get a sensible-looking result from the quantum device is good evidence that quantum computers could begin doing novel research calculations in the next few years. The issue? Within two weeks of publication, experts who study classical solutions to this type of problem emphatically established that they could easily find classical solutions to the more complicated problem IBM Quantum used.


It is not unusual for experts in quantum hardware and quantum algorithms to have limited expertise with classical solutions. Nor is it wrong to focus on improving one technique for solving a problem while ignoring other approaches. However, when an analysis explicitly tries to compare methods, and is part of a corporate research program that could have allocated more time and resources to consult with experts in alternate methods, then rigorous quantum research is at risk of being contorted into quantum hype.


Understanding why quantum research is especially prone to this 'hype by omission', as exemplified by the ensuing debate around the IBM Quantum paper, gets to the heart of ongoing conversations in quantum computing about managing hype, navigating corporate science, and the importance of interdisciplinary collaboration. And to understand it, you actually need as much many-body physics as quantum computing.


If you only know one academic journal, it's probably Nature, which covers everything from anthropology to theoretical physics. It has a mandate to show off the world’s most innovative and significant discoveries, and an associated prestige. However, most Nature results aren’t ‘new’ to other physicists - before being submitted to a journal, most physics papers are posted on arXiv, a Cornell University-hosted website. arXiv serves a few purposes – it ensures that unofficial 'pre-print' versions of papers are accessible for free; it makes results available to other scientists months or years before a paper has passed peer review; and it acts as a venue for unofficial peer review.


A research team at IBM Quantum did not take this opportunity, but within a fortnight of the paper’s publication, three research groups with contrasting results did - see Tindall et al., Begušić and Chan, and Kechedzhi et al.


Where this gets more complicated is that the arXiv pre-prints don’t affect the central result of the IBM Quantum paper. The Eagle processor really can measure some properties of a comparatively large and common many-body physics model. But the paper also claims the Eagle produces results in a range where classical methods of computing these properties break down, and that the classical calculations it compares against require supercomputers. As the arXiv pre-prints demonstrate, one can actually use more sophisticated tensor network methods to perform these calculations, to better precision than the Eagle processor’s result. On a laptop. In as little as seven minutes. In fact, tensor networks can simulate the Eagle processor itself.


So, what happened? Why make a bold, easily disprovable claim when you already have a respectable result? Why did IBM care so much about outperforming other methods?

Well, right now it is extremely difficult to find any example where a quantum device can perform a calculation that a classical computer cannot. One reason for this is that if a quantum researcher comes up with a carefully engineered problem simple enough to run on contemporary quantum hardware, those same simplifications can often be exploited to speed up a classical algorithm for the same problem. This includes the Google quantum supremacy experiment, which was recently challenged by a classical algorithm - see Aharonov et al. Within both academia and industry, it's hard to overstate just how big of a deal it would be to prove that a quantum device has solved a useful problem that a classical computer can’t. To be fair to quantum devices, classical computer science is a much older and better-developed field, and quantum tech will need a few decades to ‘catch up’. To be fair to classical devices, they are currently just better at everything, including measuring properties of a quantum system.


The Problem at Hand

Here, 'properties of a quantum system' means a quantity like the lowest energy that a system can have or the magnetization within a part of the system. Maybe the system describes a new material you want to understand or a molecule with applications in pharmacology - this is a familiar problem across chemical physics, materials science, and many-body physics, and has both scientific and technological applications.

But before you’re able to pick a strategy to compute your measurement, you need a mathematical model that describes the system. For example, you can assume that your particles sit at some number of ‘sites’ and that each site can be in one of two configurations – the ‘up’ state or the ‘down’ state. Assume all your sites are arranged in a line, a grid, a cube, or a higher-dimensional lattice.


Each particle in your model interacts with an external magnetic field and the particles directly surrounding it, so we include a mathematical operation for each in our model. Different combinations of “up” and “down” states at each site will result in the system having some particular energy, magnetization at a site, and total magnetization. This is called the Ising model, and for the rest of this article when I say ‘measure a system’ I mean ‘determine the magnetization at a site for a system acting as the Ising model’. This is an extremely common model in many-body physics and was the focus of IBM’s experiment.
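For readers who like symbols: a minimal, generic version of this model can be written as follows (a sketch only - sign conventions vary, and IBM's experiment ran a 'kicked' variant adapted to its processor's qubit layout):

```latex
H = -J \sum_{\langle i,j \rangle} Z_i Z_j + h \sum_i X_i
```

The first sum runs over pairs of neighboring sites (the particle-particle interactions), the second couples every site to the external field, and 'measuring the magnetization at a site' amounts to computing the expectation value \langle Z_i \rangle.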


If your model is complicated enough, a trained human with a pen and paper can’t be realistically expected to actually calculate the solution. There are two techniques for solving such problems that matter in this article - you can use a tensor network, or you can (try to) use a quantum computer.


Tensor networks have been around since the 1980s and they remain impressively competitive. If you need to compute measurements of a quantum system and don’t know where to start, start with a tensor network.


At a very superficial level, tensor networks describe the mathematical model in a ‘modular’ way so that it is easier to compute the measurements we want. It's not hard to see that the Ising model can be broken up into a site-by-site description if you keep track of all the sites and the operations that act on them. Because some operators act on more than one site, we say that sites next to each other are ‘correlated’, and we keep track of the correlations too.


We can draw tensor networks as diagrams and use them as a visual shorthand for all the mathematical operations in a model - used this way, a tensor network is an exact description of the system. You may be familiar with other visual counting tools, like using tally marks to keep track of counting, or how elementary school students are taught to draw shapes to visualize multiplication. However, we can also use tensor networks to approximate a system, by setting a numeric limit on the amount of correlation between sites and then working out a simplified model. This is where they become powerful, because those simplifications allow us to compute measurements beyond what could be handled analytically.
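To make 'limiting the correlation' concrete, here is a minimal sketch in Python with NumPy (the function and variable names are my own illustration, not code from IBM or the pre-print authors). It repeatedly splits a state vector site by site using a singular value decomposition, keeps at most `chi` correlation channels across each bond, and tracks the weight it throws away:

```python
import numpy as np

def mps_truncate(psi, n_sites, chi):
    """Split a state vector into site-by-site tensors (an MPS),
    keeping at most `chi` correlation channels across each bond.

    Returns the site tensors and the total discarded weight -- a
    rough proxy for how much of the state the approximation gave up.
    """
    tensors = []
    discarded = 0.0
    rest = psi.reshape(1, -1)  # (left bond, everything to the right)
    left = 1
    for _ in range(n_sites - 1):
        # Group the left bond with this site's up/down index, then
        # SVD to separate this site from the rest of the chain.
        mat = rest.reshape(left * 2, -1)
        u, s, vh = np.linalg.svd(mat, full_matrices=False)
        keep = min(chi, len(s))
        discarded += float(np.sum(s[keep:] ** 2))  # weight thrown away here
        tensors.append(u[:, :keep].reshape(left, 2, keep))
        rest = s[:keep, None] * vh[:keep, :]  # carry the remainder along
        left = keep
    tensors.append(rest.reshape(left, 2, 1))  # final site
    return tensors, discarded

# Toy demo: a random, highly correlated 8-site state.
rng = np.random.default_rng(0)
n = 8
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)

for chi in (2, 4, 16):
    _, err = mps_truncate(psi, n, chi)
    print(f"bond limit {chi:2d}: discarded weight ~ {err:.3e}")
```

Raising `chi` buys accuracy at the cost of memory and time; the pre-print authors made this kind of trade-off with far more sophisticated machinery than this toy, which is how they matched and beat the Eagle results on laptop-scale hardware.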


Intuitively, a good tensor network approximation contains those operations that have the strongest effect on a system, 'throwing away' less meaningful operations. And the error in these methods can usually be bounded, which essentially means we know ‘how wrong’ the tensor network calculation can be.
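Concretely, if a single cut of the system is truncated to its \chi largest singular values s_1 \ge s_2 \ge \dots, a standard result (stated here for one split of a normalized state, without renormalizing what remains) is

```latex
\left\| \, |\psi\rangle - |\psi_\chi\rangle \, \right\|^2 = \sum_{k > \chi} s_k^2
```

so the discarded singular values tell you exactly how much of the state was given up at that cut; adding up such terms across every bond yields, up to constant factors, the usual error bound for a full tensor network.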


But tensor networks aren’t magic; there are some systems ill-suited to tensor network techniques, and if we need a very good approximation for a very large and connected system, then a great deal of computational time and storage can be required.


Back to Quantum Computers

So maybe you could instead build the system you want to study on a quantum device and then measure whatever property you’re interested in. However, quantum devices that exist today are small and accumulate errors, commonly called the device ‘noise’. If we can solve both problems – which are extremely active areas of research in academia and industry – then quantum computers could model physical systems of the size and complexity that tensor networks struggle with. This idea isn’t new – in fact, it's been one of the strongest motivating factors for building quantum computers.


In the meantime, the large noise and small size of quantum devices make it very challenging to perform these measurements and then repeat them often enough to be confident the result is correct. IBM managed to do both on a device with 127 qubits, which is an impressive technological development. The paper is even internally careful to temper its enthusiasm and doesn't explicitly claim quantum advantage. But its comparisons used insufficiently sophisticated tensor networks, and so misrepresented the limits of the classical methods.


From a research perspective, all of this is neutral or good news. Experts can quickly mobilize to detect poorly contextualized results, we have truly impressive tensor network techniques, and a quantum computer did something cool. When we study quantum hardware, we learn more about materials science and engineering. And in the meantime, the methods classical computer scientists develop to compete with quantum ideas can be used for research problems. The authors of IBM's paper definitely recognize this, noting that "these results will motivate and help advance classical approximation methods as both approaches serve as valuable benchmarks of one another", on which point they were extremely correct.


But if you’re investing millions in quantum hardware, and selling access to quantum devices (as IBM does), then you want to be able to demonstrate to clients and investors that the devices can produce new or faster results. In an industry setting, "Hey, we did something that is really novel for a quantum device, but that a fourth grader could compute in their head" - such as demonstrations of Shor's algorithm - is not a compelling reason to purchase access to a quantum computer.

Navigating Corporate Science

As it stands now, IBM gets to keep a Nature headline with an investor-friendly claim, whereas the tensor network pre-prints won’t get much attention outside of academia. For this reason, it is unfortunate that IBM’s paper appeared in Nature before passing the sanity check of an arXiv posting - otherwise, researchers probably would have been alerted to those better tensor network strategies, and the resulting paper would have had a less charismatic result. There are a few reasons a research group might need to avoid pre-prints - for example, they may have opted to go through a double-blind peer review process. However, looking to the future, it is worth emphasizing that skipping the pre-print stage and rushing to publication could become a savvy business move for companies like IBM.


Nature also bears some responsibility for (presumably) not seeking out tensor network expertise in the peer review process, but to date, no correction has been issued by the journal. It is admittedly quite difficult to find reviewers who can evaluate both classical and quantum results, but in the coming decades it will be increasingly important to standardize cross-community review processes for quantum computing papers that compare their results to classical methods.


Corporate science will always have a messy line to navigate between careful and thorough research (which to be clear, IBM's research team did), and industry-friendly, time-sensitive results (which was the resulting publication). Industry will likely continue to have a large role in quantum computing research, including funding Ph.D. projects, directing research programs, and building quantum hardware. So, quantum information researchers had better get very sophisticated at identifying functional ethical boundaries within industry-academic collaborations.


An underrated point is that since we all know titles and abstracts will be the most-read parts of any publication, these elements should carry the least hype. For example, the Eagle experiment did not produce evidence of utility; it produced evidence of accuracy in a carefully limited model. This doesn't really matter if only physicists are reading your paper, but in the context of a high-growth, high-investment field, distinguishing between an accurate method and a practical one becomes more important. When a paper is affiliated with a company with a huge incentive to claim its quantum computer can do something novel, the careful tempering of a publication is not what investors and the public interact with - the paper title, abstract, and popular summaries like this Nature summary are.


But perhaps the most important cultural shift in quantum industry would be to emphasize that classical-quantum competition is not a zero-sum game. Investigating quantum phenomena has been a fruitful way to push the limits of classical computing for decades, and regardless of whether an algorithm runs on a quantum device, or is a classical 'quantum-inspired' algorithm, the end result can still be a useful new tool. As such, collaborations with computer scientists and other specialists will remain crucial to contextualizing quantum results. And as always, it is key to think carefully about who applies and makes strong claims about a computational tool, as well as the hardware and theory which underlie it.


Notes

1) Here, 'many-body physics' means the study of a system with many interacting particles, aka many separate 'bodies'.

2) Aharonov et al. found a classical algorithm that can perform random circuit sampling (the Google supremacy task) in polynomial time, so it is technically efficient. However, the polynomial's degree is extremely high, so the quantum results could still be practically advantageous.

3) In Nature, authors can choose to withhold their names from reviewers to avoid bias. If IBM Quantum chose to do this, then they might have avoided sharing the draft publicly until peer review was complete.
