Tensor Networks
Anyone who has had some introduction to quantum computing ought to be familiar with quantum computing simulators. These simulators are classical software that calculate the results a particular quantum algorithm would produce if it could be executed on a small, fault-tolerant quantum computer. Other types of simulators provide additional useful information, such as full statevectors. It is important to note that we still need to build quantum computers, though, because classical memory limits the size of the simulations that can be run.
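To make that memory limit concrete, here is a quick back-of-the-envelope calculation in Julia. A dense statevector of n qubits holds 2^n complex amplitudes, so memory doubles with every added qubit:

```julia
# Memory needed for a dense n-qubit statevector with 128-bit complex amplitudes.
statevector_bytes(n) = 2.0^n * 16

statevector_bytes(30) / 2^30   # ≈ 16 GiB: a well-equipped workstation manages 30 qubits
statevector_bytes(50) / 2^50   # ≈ 16 PiB: 50 qubits exceed any machine's RAM
```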
Tensor networks are not far removed from this concept, especially when considering statevector simulators. Both seek a deeper understanding of complex quantum systems before measurements extract classical information from them. But where statevectors represent these systems as long lists of complex amplitudes, tensor networks represent them as graphs, which may be referred to as maps or diagrams in other literature.
So you have a set of objects which, in this context, are referred to as “tensors.” In other applications you may know them as nodes or vertices. These tensors are connected to one another, forming the “network”; you may otherwise know these connections as lines, edges, or arcs. These connections reveal the information we seek: they may encode information about the objects themselves, or about the interactions between the objects.
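A minimal example in plain Julia makes the picture concrete: two tensors (the nodes) joined by one shared index (the edge between them), where contracting that edge means summing over the shared index.

```julia
# Two tensors ("nodes") joined by one shared index k (the "edge" between them).
A = rand(2, 3)   # tensor A with indices (i, k)
B = rand(3, 4)   # tensor B with indices (k, j)

# Contracting the edge means summing over the shared index k.
C = zeros(2, 4)
for i in 1:2, j in 1:4, k in 1:3
    C[i, j] += A[i, k] * B[k, j]
end

C ≈ A * B   # true: for two tensors, contraction is just matrix multiplication
```

Larger networks work the same way, just with more tensors and more shared indices to sum over; the order in which the edges are contracted is what determines the computational cost.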
For an explanation that gets a little more technical, without getting quite as technical as a research paper, feel free to check out an AzoQuantum article titled “Quantum Tensor Networks: Foundations, Algorithms, and Applications.” Beyond the more technical style, the article also includes a few helpful illustrations.
Quantum computing is not the only application of tensor networks, however. Notably, they are proving useful in the field of artificial intelligence (AI), and more specifically in a subset of AI called machine learning, which, like quantum computing, counts knowledge of linear algebra as a prerequisite.
What are Tensor Networks in Quantum Computing?
Quantum tensor networks are a tool researchers use to represent and evolve quantum states. Bra-ket notation and linear algebra matrices are usually introduced early in quantum computing education, but tensor networks should not be forgotten. They offer a sort of bridge to quantum computing by solving certain problems more efficiently than other classical methods could, albeit not as efficiently as future fault-tolerant quantum computers will be able to.
The answers to the Stack Exchange Quantum Computing question “What can tensor networks mean for quantum computing?” explain it simply; here’s an analogy derived from them. Think of linear algebra as the view outside a window. Tensor networks and quantum circuits are two different windows looking onto the same linear algebra landscape. The linear operations performed on that landscape can be viewed from either window, and the gardener can be given instructions from either one.
And just as in a house, window styles can vary. Quantum circuits, for example, can be compiled and displayed as pulse schedules. That statement refers, however, to the highly recognizable circuits of superconducting and ion trap devices. Photonic quantum circuits look noticeably different, and neutral atom quantum circuits have not been publicly revealed yet. In a similar fashion, there’s not just one layout of tensor network; more on that is forthcoming.
The types of problems that may be solved with our Julia-language GenericTensorNetworks.jl library include the following (a code sketch for the independent set problem follows the list):
- Binary paint shop, inspired by the automotive industry, in which each car passes through an assembly line twice and must be painted a different color each time; the goal is to minimize the number of color changes
- Coloring, whether vertex, edge, or face coloring, which assigns labels (“colors”) to graph elements so that given constraints are satisfied, classically that adjacent elements receive different colors
- Cutting, which seeks to partition a graph’s vertices into two groups such that the number (or total weight) of edges crossing between the groups is maximized
- Dominating set, which determines if there is a set within a graph for which every vertex is either in the set or is otherwise adjacent to at least one vertex in the set
- Hyper spin glass, which is the spin glass problem applied to hypergraphs, in which hyperedges may connect any number of vertices, in contrast to ordinary edges, which connect exactly two
- Independent set, which is any set of vertices in a graph for which none of the vertices in the set are connected by an edge; independent sets may also be referred to as stable sets
- Maximal independent set (MIS), which seeks to find independent sets that cannot be extended with additional vertices without violating the constraint on non-adjacent vertices
- Satisfiability, which determines whether there is an assignment of truth values to the variables of a Boolean formula, however complex, such that the formula evaluates as TRUE
- Set covering, which, given a collection of sets whose union contains every element, seeks the minimum number of those sets that still include, aka cover, every element
- Set packing, which seeks the largest number of sets within a collection such that no two of the chosen sets share an element
- Spin glass, which seeks to find the minimum energy of a classical spin system in which the couplings between the spins are fully randomized
- Vertex matching, which seeks to find sets of edges for which no vertex is shared by more than one edge
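As promised above, here is a minimal sketch of the independent set problem solved with GenericTensorNetworks.jl. The graph, property names, and call pattern follow the library’s documentation, though the exact API may vary between versions:

```julia
using GenericTensorNetworks, Graphs

# Build a small test graph: the Petersen graph (10 vertices, 15 edges).
graph = smallgraph(:petersen)

# Formulate the independent set problem as a tensor network; the library
# chooses a contraction order for the network automatically.
problem = GenericTensorNetwork(IndependentSet(graph))

# Contract the network to extract properties of the solution space.
solve(problem, SizeMax())       # size of a maximum independent set (4)
solve(problem, CountingMax())   # number of independent sets of that size (5)
```

The other problems in the list follow the same pattern: swap in the corresponding problem type, and the library builds and contracts the matching tensor network.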
As we note on our “Quantum Algorithms vs. Quantum-Inspired Algorithms” page, which links to an article delving deeper into this topic, it’s always worth stressing that tensor networks are still classical algorithms. While they may push back the problem size at which quantum computers become advantageous, their cost grows rapidly as problems scale up, and they still face certain limitations. There will still be classically intractable problems that we’ll need real quantum computers to solve.
Types of Tensor Networks
As already mentioned, there are multiple types of tensor networks, each with its own advantages, disadvantages, and use cases. A sampling of these tensor network methods includes the following (a small worked example of the first follows the list):
- Matrix Product States (MPS), which represent the ground states of one-dimensional systems; they are called Tensor Trains (TT) in mathematics and computer science
- Matrix Product Operators (MPO), which differ from MPS in that they represent quantum operators acting on one-dimensional systems rather than the states themselves
- Projected Entangled Pair States (PEPS), which differ from MPS in that they represent quantum states in two dimensions instead of only one
- Tree Tensor Networks (TTN), which are a subset of Tensor Network States (TNS) that can be used to simulate systems exceeding 1,000 dimensions; they are called Hierarchical Tucker (H-Tucker) in mathematics
- Multi-scale Entanglement Renormalization Ansatz (MERA), which adds an extra dimension representing scale, or length, allowing it to represent the ground states of spin chains even at criticality
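As promised, here is a small worked example of a Matrix Product State in plain Julia (no libraries beyond the standard ones): the n-qubit GHZ state, whose 2^n amplitudes are captured by just two 2×2 matrices.

```julia
using LinearAlgebra

# MPS for the n-qubit GHZ state (|00...0> + |11...1>) / sqrt(2):
# one 2x2 matrix per local basis value, reused at every site.
A = ([1.0 0.0; 0.0 0.0],   # matrix for a site in state |0>
     [0.0 0.0; 0.0 1.0])   # matrix for a site in state |1>

# The amplitude of a basis state is the trace of a product of matrices,
# one matrix per site; no 2^n-element statevector is ever stored.
amplitude(bits) = tr(prod(A[b + 1] for b in bits)) / sqrt(2)

amplitude([0, 0, 0, 0])   # ≈ 0.707: the all-zeros branch
amplitude([1, 1, 1, 1])   # ≈ 0.707: the all-ones branch
amplitude([0, 1, 1, 0])   # = 0.0: every other basis state has no weight
```

The matrices stay the same size no matter how many qubits are added, which is exactly why MPS scale so well for one-dimensional systems with limited entanglement.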
Documentation to get started with tensor networks can be found on our “Getting Started” page. There you’ll also find links to webinars, documentation, notes, samples, blog posts, and publications that can help you get started performing computation with our 256-qubit “Aquila” neutral atom device.
For any exploration of tensor networks, neutral atoms are a natural modality for upgrading to quantum computation. The tensors are represented by individual rubidium atoms, which are arranged geometrically according to the associated map. The networks of the map are rendered by the Rydberg blockade radius: two atoms within that radius cannot be excited simultaneously, so proximity physically enforces a connection between them. In other words, a two-dimensional array of rubidium atoms can physically represent a tensor network map, with all its tensors and networks. And as the problem size grows and classical memory constraints become prohibitive, the neutral atom quantum processor, which faces no such memory constraints, should prove to be computationally advantageous.
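Here is a minimal sketch of that geometric mapping, with purely illustrative positions and radius: atoms become vertices, and any two atoms within the blockade radius are treated as connected.

```julia
using Graphs

# Illustrative atom positions (microns) and blockade radius; real values
# depend on the device and the chosen Rydberg states.
positions = [(0.0, 0.0), (4.0, 0.0), (8.0, 0.0), (4.0, 4.0)]
blockade_radius = 5.0

# Atoms are vertices; atoms within the blockade radius share an edge.
g = SimpleGraph(length(positions))
for i in 1:length(positions), j in (i + 1):length(positions)
    (xi, yi), (xj, yj) = positions[i], positions[j]
    hypot(xi - xj, yi - yj) <= blockade_radius && add_edge!(g, i, j)
end

ne(g)   # 3 edges here: the geometry alone determines the network
```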
This is not theoretical, by the way. Tensor networks with up to 256 tensors can be mapped today onto the 256-atom “Aquila” quantum computer. Furthermore, the relationship runs in reverse as well: tensor networks are used to verify the fidelity of computation on the Aquila device. This is important as the technology develops and matures, because eventually classical computation of all types will fail to keep up. The tensor networks of today will provide assurances as to the fidelity of the quantum computation of the future.