Anyone with some exposure to quantum computing is likely familiar with quantum computing simulators. These are classical software programs that typically calculate the results a particular quantum algorithm would produce if it could be executed on a small, fault-tolerant quantum computer. Other types of simulators provide additional useful information, such as full statevectors. It is important to note that we still need to build quantum computers, however, because classical memory limits the size of the simulations that can be run.
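To make the idea concrete, here is a minimal statevector-simulator sketch. It is illustrative only (real simulators are far more optimized); the function name and structure are our own, not any particular library's API.

```python
import numpy as np

# A toy statevector simulator: the state of n qubits is a complex
# vector of length 2**n, and gates are small unitary matrices.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def apply_single_qubit_gate(state, gate, qubit, n_qubits):
    """Apply a 2x2 gate to one qubit of an n-qubit statevector."""
    # Reshape the length-2**n vector into an n-index tensor,
    # contract the gate into the chosen axis, then flatten back.
    psi = state.reshape([2] * n_qubits)
    psi = np.tensordot(gate, psi, axes=([1], [qubit]))
    psi = np.moveaxis(psi, 0, qubit)
    return psi.reshape(-1)

# Start two qubits in |00>, put the first into superposition.
state = np.zeros(4, dtype=complex)
state[0] = 1.0
state = apply_single_qubit_gate(state, H, qubit=0, n_qubits=2)
print(np.round(np.abs(state) ** 2, 3))  # measurement probabilities
```

The exponential growth of that `2**n` vector is exactly the classical memory limit mentioned above: each added qubit doubles the storage required.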
Tensor networks are not far removed from this concept, especially when compared with statevector simulators. Both seek a deeper understanding of complex quantum systems before measurements extract classical information from them. But where statevector simulators represent these systems as arrays of complex numbers, tensor networks represent them as graphs, which may be called maps or diagrams in other literature.
In this representation, you have a collection of objects referred to as “tensors”; in other applications you may know them as nodes or vertices. These tensors are connected by “networks,” which you may otherwise know as lines, edges, or arcs. These connections encode the information we seek: they may reveal information about the objects themselves, or about the interactions between them.
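A tiny sketch makes the graph picture concrete: each node is an array, each edge is an index shared between two arrays, and contracting (summing over) the shared index combines the nodes. For two-index tensors, this is simply matrix multiplication. The array values here are arbitrary placeholders.

```python
import numpy as np

# Two "tensors" (nodes) A and B joined by one "network" (edge).
# The edge is a shared index k; contracting it sums over k.
A = np.arange(6).reshape(2, 3)    # indices (i, k)
B = np.arange(12).reshape(3, 4)   # indices (k, j)

C = np.einsum('ik,kj->ij', A, B)  # contract the shared edge k

assert np.array_equal(C, A @ B)   # same as a matrix product
```

Larger networks work the same way, just with more nodes and more shared indices; the order in which the edges are contracted is what determines the cost of the computation.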
For an explanation that gets a little more technical, without getting quite as technical as a paper, feel free to check out an AzoQuantum article titled “Quantum Tensor Networks: Foundations, Algorithms, and Applications.” Beyond the change in style, the article has a few accompanying illustrations.
Quantum computing is not the only application of tensor networks. In particular, they are finding use in the field of artificial intelligence (AI), and more specifically in machine learning, which, like quantum computing, counts linear algebra among its prerequisites.
Quantum tensor networks are a tool researchers use to represent and evolve quantum states. Bra-ket notation and linear algebra matrices are usually introduced early, but tensor networks should not be overlooked. They offer a bridge to quantum computing by solving certain problems more efficiently than other classical methods can, albeit not as efficiently as future fault-tolerant quantum computers will.
The answers to the Stack Exchange Quantum Computing question “What can tensor networks mean for quantum computing?” explain it simply, and here’s an analogy derived from them. Consider linear algebra as the view outside a window. Tensor networks and quantum circuits can be thought of as two different windows looking onto the same linear algebra landscape. The linear operations performed on that landscape can be viewed from both windows. Furthermore, the gardener can be given instructions from either window.
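The two-windows analogy can be demonstrated directly: the same linear operation, applied once as a circuit-style matrix-vector product and once as a tensor contraction, gives identical results. This is a hedged sketch using a CNOT gate, with index conventions of our own choosing.

```python
import numpy as np

# The same linear operation viewed through two "windows".
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

state = np.array([0, 0, 0, 1], dtype=complex)  # the state |11>

# Window 1, circuit view: plain matrix-vector multiplication.
circuit_view = CNOT @ state

# Window 2, tensor-network view: reshape the gate into a 4-index
# tensor, the state into a 2-index tensor, and contract the two
# shared edges.
gate_tensor = CNOT.reshape(2, 2, 2, 2)  # (out1, out2, in1, in2)
state_tensor = state.reshape(2, 2)      # (q1, q2)
network_view = np.einsum('abcd,cd->ab',
                         gate_tensor, state_tensor).reshape(-1)

assert np.allclose(circuit_view, network_view)  # same landscape
```

Both windows report the same result (CNOT flips the target, taking |11> to |10>); only the notation differs.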
And just as in a house, window styles can vary. Quantum circuits, for example, can be compiled and displayed as pulse schedules. That statement, however, refers to the highly recognizable superconducting and ion trap quantum circuits. Photonic quantum circuits look noticeably different, and neutral atom quantum circuits have not been publicly revealed yet. In similar fashion, there is not just one layout of tensor network. More on that is forthcoming.
The types of problems that may be solved with our Julia-language GenericTensorNetworks.jl library include:
As we note on our “Quantum Algorithms vs. Quantum-Inspired Algorithms” page, which links to an article delving deeper into this, it’s always worth stressing that tensor networks are still classical algorithms. While they may push back the problem size at which quantum computers become advantageous, their complexity grows with that problem size, and they still face certain limitations. There will always be classically intractable problems that we’ll need real quantum computers to solve.
As already mentioned, there are multiple types of tensor networks, each with its own advantages, disadvantages, and use cases. A sampling of these tensor network methods includes:
Documentation to get started with tensor networks can be found on our “Getting Started” page. There you will also find links to webinars, documentation, notes, samples, blog posts, and publications that can help you begin performing computation with our 256-qubit “Aquila” neutral atom device.
For any exploration of tensor networks, neutral atoms are a natural modality for upgrading to quantum computation. The tensors are represented by individual rubidium atoms, which are arranged geometrically according to the associated map, and the networks of the map are realized through the Rydberg blockade radius. In other words, a two-dimensional array of rubidium atoms can physically represent a tensor network map, with all its tensors and networks. As the problem size grows and classical memory constraints become prohibitive, the neutral atom quantum processor, which faces no such memory constraint, should prove computationally advantageous.
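The blockade constraint has a simple classical picture: atoms within the blockade radius cannot both be excited, so valid excitation patterns correspond to independent sets of the resulting “unit disk” graph. The brute-force sketch below illustrates that mapping; the atom positions and radius are made up for illustration and are not real device parameters, and this does not use the Aquila or GenericTensorNetworks.jl APIs.

```python
from itertools import combinations

# Toy picture of the Rydberg blockade: atoms closer than the
# blockade radius cannot both be excited, so valid excitation
# patterns are independent sets of the unit disk graph.
positions = [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1)]
blockade_radius = 1.2  # illustrative value only

def blocked(p, q):
    """True if two atoms are close enough to blockade each other."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= blockade_radius ** 2

edges = [(i, j) for i, j in combinations(range(len(positions)), 2)
         if blocked(positions[i], positions[j])]

def is_independent(subset):
    """No two chosen atoms may share a blockade edge."""
    return all(not (i in subset and j in subset) for i, j in edges)

# Brute-force search over all excitation patterns (classical, and
# exponential in the atom count -- exactly why larger instances
# call for the quantum hardware itself).
best = max((s for r in range(len(positions) + 1)
            for s in combinations(range(len(positions)), r)
            if is_independent(set(s))), key=len)
print(best)
```

On this toy layout the largest valid excitation pattern uses three of the five atoms; the exponential cost of the classical search is the memory-and-time wall the paragraph above describes.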
This is not merely theoretical. Tensor networks with up to 256 tensors can be mapped today onto the 256-atom “Aquila” quantum computer. The relationship also runs in reverse: tensor networks are used to verify the fidelity of computation on the Aquila device. This matters as the technology develops and matures, because eventually classical computation of every type will fail to keep up. The tensor networks of today will provide assurances about the fidelity of the quantum computation of the future.
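One simple way such cross-checks can be quantified is the classical fidelity between a simulated bitstring distribution and a measured one. The formula below is the standard classical fidelity; the probability values are toy numbers, not actual Aquila data.

```python
import math

# Classical fidelity F = (sum_x sqrt(p(x) * q(x)))**2 between a
# simulated distribution p and a measured distribution q over
# bitstrings. Toy numbers for illustration only.
p = {'00': 0.50, '01': 0.00, '10': 0.00, '11': 0.50}  # simulated
q = {'00': 0.48, '01': 0.02, '10': 0.01, '11': 0.49}  # "measured"

fidelity = sum(math.sqrt(p[x] * q[x]) for x in p) ** 2
print(round(fidelity, 4))
```

A fidelity near 1 indicates the device's output distribution closely matches the classical prediction; as devices outgrow what tensor networks can simulate, benchmarks like this shift to the sizes where classical verification still reaches.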