The Route to Robust Quantum Computing: Interview with Shruti Puri
Published March 23, 2021
Quantum computing is a radically new way to store and process information based on the principles of quantum mechanics. While conventional computers store information in binary “bits” that are either 0 or 1, quantum computers store information in quantum bits, or qubits. A qubit can be both 0 and 1 at the same time, and a register of qubits can represent many different values simultaneously.
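The superposition idea above can be made concrete with a few lines of NumPy. This is only an illustrative sketch, not anyone's production simulator: a qubit is modeled as a two-component complex vector, and joining qubits multiplies the number of amplitudes the register tracks.

```python
import numpy as np

# A qubit is a unit vector in a 2-dimensional complex space.
# |0> and |1> are the two computational basis states.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# An equal superposition: the qubit is "both 0 and 1" until measured.
psi = (ket0 + ket1) / np.sqrt(2)

# Measurement probabilities are the squared amplitudes (Born rule).
p0, p1 = np.abs(psi) ** 2
print(p0, p1)  # 0.5 0.5

# n qubits live in a 2**n-dimensional space: three qubits in
# superposition carry amplitudes for all 8 classical bit strings at once.
register = np.kron(np.kron(psi, psi), psi)
print(register.shape)  # (8,)
```

The exponential growth of that state vector (2 amplitudes per qubit multiplied together) is what the article means by "many different values simultaneously."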
Everyone agrees on the huge computational power this technology may bring about, but why are we still not there yet? To understand the challenges in this field and its potential solutions, we recently interviewed Shruti Puri, PhD, who works at the frontier of this exciting field. Puri is an Assistant Professor in the Department of Applied Physics at Yale University, and a Physical Sciences & Engineering Finalist of the 2020 Blavatnik Regional Awards for Young Scientists, recognized for her remarkable theoretical discoveries in quantum error correction that may pave the way for robust quantum computing technologies.
What is the main challenge you are addressing in quantum computing?
Thanks to recent advances in research and development, small to mid-sized quantum computers are already available from big companies. But these machines have not yet been able to deliver practical applications such as drug and materials discovery. The reason is that today's quantum computers are extremely fragile: even very small amounts of noise from the working environment can quickly destroy the delicate quantum states. Since it is almost impossible to completely isolate quantum states from the environment, we need a way to correct them before they are destroyed.
At first glance, quantum error correction seems impossible. Because of the measurement principle of quantum mechanics, we cannot directly probe a quantum state to check whether it contains an error, since such an operation would destroy the quantum state itself.
Fortunately, in the 1990s, researchers found indirect ways to faithfully detect and correct errors in quantum states. These methods, however, come at the cost of large resource overheads: protecting a single qubit against an arbitrary error requires at least five physical qubits. The more errors we want to correct, the more additional qubits are consumed. A lot of research effort, including my own, is devoted to improving quantum error correction techniques.
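The trick behind these indirect methods is to measure *parities* of qubits rather than the qubits themselves, so the error's location is revealed without disturbing the encoded data. The five-qubit code mentioned above is involved, but its simpler cousin, the textbook 3-qubit bit-flip code, shows the classical logic of syndrome extraction. This is a standard illustration, not Puri's own construction, and it only handles one error type (which is why the general case needs more qubits):

```python
def syndrome(errors):
    """Parity checks on neighbouring qubits (the Z1Z2 and Z2Z3
    stabilizers of the bit-flip code): 1 means 'these two disagree'."""
    e1, e2, e3 = errors
    return (e1 ^ e2, e2 ^ e3)

# Each syndrome pattern points to the single qubit that must be
# flipped back; (0, 0) means no error was detected.
RECOVERY = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def correct(errors):
    """Undo any single bit-flip using only the parity information."""
    flip = RECOVERY[syndrome(errors)]
    corrected = list(errors)
    if flip is not None:
        corrected[flip] ^= 1
    return corrected

print(correct([0, 1, 0]))  # [0, 0, 0] -- the middle flip is undone
```

Note that `syndrome` never looks at the encoded value itself, only at whether pairs of qubits agree; this is what lets a quantum version sidestep the measurement problem described above.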
What is your discovery? How will this discovery help solve the challenge you mention above?
In recent years, I have been interested in new qubit designs that have some in-built protection against noise. In particular, I developed the “Kerr-cat” qubit, in which one type of quantum error is automatically suppressed by design. This cuts the number of quantum errors to be corrected in half! As a result, quantum computers built from Kerr-cat qubits require far fewer physical qubits for error correction than those built from conventional qubits.
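One way to see why suppressing an error type pays off so much: if only one error type remains, a simple repetition code can handle it, and a distance-3 repetition code turns a physical error rate p into a logical rate of roughly 3p² (two simultaneous errors are needed to fool the majority vote). The Monte Carlo below uses illustrative numbers of my choosing, not figures from Puri's work:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.05           # assumed physical rate of the one remaining error type
trials = 200_000

# Distance-3 repetition code: each of the 3 qubits errs independently
# with probability p; the majority vote fails only when 2 or 3 err.
errors = rng.random((trials, 3)) < p
logical_failures = errors.sum(axis=1) >= 2

print(errors.mean())            # close to p = 0.05
print(logical_failures.mean())  # close to 3*p**2, far below p
```

The quadratic suppression (about 0.007 versus 0.05 here) is what makes a qubit with built-in protection against the other error type so attractive: the remaining errors can be driven down cheaply.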
Kerr-cat is not the only qubit with this property, but what makes the Kerr-cat special is that it is possible to maintain this protection while a user modifies the quantum state in certain non-trivial ways. By comparison, for ordinary qubits, the act of modifying the state destroys the protection. Since its discovery, the Kerr-cat has generated a lot of interest in the community and opened up a new direction for quantum error correction.
As a theoretician, do you collaborate with experimentalists? How are these synergized efforts helping you?
Yes, I do collaborate quite closely with experimentalists. The synergy between experiments and theory is crucial for solving the practical challenges facing quantum information science. Sometimes an experimental observation or breakthrough will provide a new tool for a theorist with which they can explore or model new quantum effects. Other times, a new theoretical prediction will drive experimental progress.
At Yale, I have the privilege of working next to the theoretical group of Steve Girvin and the experimental groups of Michel Devoret and Rob Schoelkopf, who are world leaders in superconducting quantum information processing. The theoretical development of the Kerr-cat qubit actually grew out of trying to fix a bug in an experiment. Members of Michel’s group also contributed to the development of this theory. What is more, Michel’s group was the first to experimentally demonstrate the Kerr-cat qubit. It was an amazing feeling to see this theory come to life in the lab!
Are there any other experimental developments that you are excited about?
I am very excited about a new generation of qubits with inherent protection against noise that is being developed in several other academic groups. Kerr-cat is one of them, along with the Gottesman-Kitaev-Preskill qubit, cat codes, binomial codes, the 0−π qubit, and others. Several of these designs were proposed by theorists in the early 2000s but were not considered practical at the time. With experimental progress, they have now been demonstrated and are serious contenders for practical quantum information processing. In the coming years, the field of quantum error correction is going to be strongly influenced by the capabilities these new qubit designs enable. So, I really look forward to seeing how the experiments progress.
Interested in the latest experimental developments in quantum computer design and architecture? Register for the webinar Scaling up: New Advances in Building Quantum Computers, hosted by the New York Academy of Sciences on April 7. Featured speakers of this webinar include Andrew Houck, PhD, Professor of Electrical Engineering at Princeton University and Deputy Director of the Co-design Center for Quantum Advantage, and Christopher Monroe, PhD, Professor of Electrical and Computer Engineering and Physics at Duke University and Director of the Duke Quantum Center.