Constant-Overhead Magic State Distillation
Magic state distillation is a key protocol for fault-tolerant quantum computing. It is deemed mandatory for fault-tolerant computations that involve non-Clifford operations - that is, operations that are hard to simulate classically and necessary for universal computation. Yet it is a resource-intensive protocol, with no strong demonstration on quantum hardware to date. In this work, the authors propose a new protocol that reduces the overhead of magic state distillation, showing that constant overhead is achievable. This marks a concrete step forward in reducing the resource demands of fault-tolerant quantum computing.
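For context, a rough sketch of what "overhead" means here (standard background, not the paper's own construction): it counts the noisy input magic states consumed per output state of a target infidelity. Iterating a conventional distillation round makes that count grow polylogarithmically, whereas the new result keeps it bounded.

```latex
% Hedged background sketch: iterating a d-to-1 protocol that suppresses
% error as  p -> c p^k  until a target infidelity \epsilon is reached
% gives the textbook polylogarithmic overhead
\[
  \text{Overhead}(\epsilon) = O\!\left(\log^{\gamma}(1/\epsilon)\right),
  \qquad \gamma = \frac{\log d}{\log k},
\]
% e.g. the standard 15-to-1 protocol (d = 15, k = 3) gives \gamma \approx 2.46.
% "Constant overhead" means the same count stays bounded as \epsilon -> 0:
\[
  \text{Overhead}(\epsilon) = O(1).
\]
```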
Review of Distributed Quantum Computing. From single QPU to High Performance Quantum Computing
The future of quantum computing is expected to be distributed: nearly all architectures anticipate the need to connect multiple nodes to reach million-qubit scale, and connecting quantum sensors to quantum processors is expected to unlock the real value of techniques such as quantum machine learning. This paper provides a comprehensive review of the field, identifying the necessary components, challenges, and opportunities for distributed quantum computing.
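One concrete example of such a component (a hedged toy sketch, not taken from the review): entanglement distribution. When a circuit is partitioned across two QPUs, every two-qubit gate that straddles the cut must be executed nonlocally, typically consuming one pre-shared Bell pair (ebit) if done by gate teleportation, so counting those gates gives a rough lower bound on the entanglement budget of a partition. The gate list and partition below are made up for illustration.

```python
# Hedged sketch: count the two-qubit gates that cross a cut between two QPUs.
# Each crossing gate needs one shared Bell pair (ebit), plus classical
# communication, if implemented by gate teleportation.
from typing import Iterable, Set, Tuple


def ebit_cost(two_qubit_gates: Iterable[Tuple[int, int]],
              partition_a: Set[int]) -> int:
    """Number of gates acting across the cut defined by `partition_a`."""
    return sum(
        1 for q0, q1 in two_qubit_gates
        if (q0 in partition_a) != (q1 in partition_a)
    )


if __name__ == "__main__":
    # Toy 4-qubit circuit: CNOTs given as (control, target) pairs.
    gates = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)]
    print(ebit_cost(gates, partition_a={0, 1}))  # 3 nonlocal gates -> 3 ebits
```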
ReCon: Reconfiguring Analog Rydberg Atom Quantum Computers for Quantum Generative Adversarial Networks
Aquila’s machine learning capabilities keep expanding! Following up on QuEra’s results in reservoir computing, a new set of authors, this time from Rice University, have demonstrated quantum generative adversarial networks on Aquila. The newly proposed and demonstrated protocol, ReCon, achieves 33% better performance on inference tasks than previous alternatives run on superconducting devices. The capacity to leverage the exponential scaling of the Hilbert space to encode more features is cited as one of the strengths of the protocol.
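To make the Hilbert-space argument concrete (a generic illustration, not ReCon’s analog Rydberg encoding): n qubits carry 2^n amplitudes, so even a plain amplitude encoding fits a d-dimensional feature vector into about log2(d) qubits. A minimal sketch:

```python
# Hedged illustration of why Hilbert-space scaling helps with feature
# encoding: n qubits hold 2**n amplitudes, so a d-dimensional feature
# vector needs only ceil(log2(d)) qubits under amplitude encoding.
import numpy as np


def amplitude_encode(features: np.ndarray) -> tuple[np.ndarray, int]:
    """Return (normalized statevector, number of qubits) for a feature vector."""
    d = len(features)
    n_qubits = max(1, int(np.ceil(np.log2(d))))
    state = np.zeros(2 ** n_qubits)
    state[:d] = features                      # pad up to the next power of two
    norm = np.linalg.norm(state)
    if norm == 0:
        raise ValueError("feature vector must be nonzero")
    return state / norm, n_qubits


if __name__ == "__main__":
    x = np.array([0.2, 0.5, 0.1, 0.7, 0.3])  # 5 classical features
    state, n = amplitude_encode(x)
    print(f"{len(x)} features -> {n} qubits ({2 ** n} amplitudes)")
```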
Compilation of Trotter-Based Time Evolution for Partially Fault-Tolerant Quantum Computing Architecture
The early days of error-corrected quantum computing are beginning and, as such, algorithms are finally starting to appear that bridge the limitations of NISQ devices and the high demands of fully fault-tolerant systems. In this work, the authors propose a methodology to efficiently compile Trotter-based time evolution for a partially fault-tolerant quantum computing architecture, and demonstrate the value of their approach with optimistic overhead estimates for quantum phase estimation on the 2D Hubbard model. This model is relevant for quantum materials simulation, and the authors propose that 65,000 qubits with physical error rates of 0.0001 should suffice to estimate ground-state energies of a sizable system more efficiently than classical computers could.
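For background (standard material, not the paper’s specific compilation scheme): the 2D Hubbard Hamiltonian splits into hopping and on-site interaction terms, and a first-order Trotter step approximates the time evolution whose phases quantum phase estimation then reads out.

```latex
% Hedged background: the Fermi-Hubbard Hamiltonian on a 2D lattice,
% split into hopping and interaction parts,
\[
  H = -t \sum_{\langle i,j\rangle,\sigma}
        \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right)
      + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
    \;\equiv\; H_{\mathrm{hop}} + H_{\mathrm{int}},
\]
% and the first-order Trotter approximation of a small time step,
\[
  e^{-iH\,\Delta t} \;\approx\;
  e^{-iH_{\mathrm{hop}}\,\Delta t}\, e^{-iH_{\mathrm{int}}\,\Delta t},
  \qquad \text{error } O(\Delta t^{2}),
\]
% which turns phase estimation of e^{-iHt} into long sequences of small
% rotations; that gate stream is what such a compilation scheme must optimize.
```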