Galicia Quantum Technologies Hub

CESGA proposes network architectures capable of rapidly scaling up quantum computing capabilities

By the time fully operational quantum computers exist, systems capable of connecting them to tackle tasks of unimaginable size and complexity will have been designed. CESGA aims to make a significant contribution to this challenge through its project to develop rapidly scalable network architectures for distributed quantum computing that make it possible to run and coordinate collective operations among multiple nodes.

The 1960s saw the germ of one of the major milestones in computer technology: distributed computing. The ARPANET, popularly known as the Internet’s precursor, laid the foundations for multiple computers to connect and share resources.


From then on, the qualitative leap brought about by this model spurred the scientific and innovative efforts of the research community and the technology industry to exploit its full potential. Thanks to the distributed model, classical computing reached its highest levels of scalability, fault tolerance, efficiency in the use of resources, parallel processing capacity, flexibility, and cost-benefit. In short, it is the key to processing large volumes of data and performing complex calculations as efficiently as possible.

With this precedent, it is not surprising that one of the main lines of work of those laying the foundations of quantum technologies is the design of a distributed computing model. The promise is clear: the exponential computing power inherent in quantum computing would rise to heights that are still difficult to imagine. And so would its potential applications.

Within the framework of the Quantum Communications Complementary Plan (PCCC), the Galicia Supercomputing Center (CESGA) is working on the development of network architectures for distributed quantum computing that allow collective operations to be executed and coordinated among multiple nodes or quantum processing units (QPUs).

“It is necessary to design how these networks should be configured and what functions the devices that enable communications between the different nodes should have,” explains Iago Fernández Llovo, PhD in physics from the University of Santiago de Compostela and researcher at CESGA’s Department of Communications.

His project aims to contribute to the design of connections between quantum devices so that they work together to solve problems that cannot be tackled by a single computer, using a minimum number of operations, time, and resources. These types of operations allow quantum computers connected by the network to behave as a single system, “scalable, modular and reconfigurable depending on the application,” as Llovo specifies.

Entanglement swapping

“We are developing the means to perform collective operations between several QPUs when you have quantum network devices connecting them,” explains the CESGA researcher, who clarifies that, although the functions of these devices are not yet well established, they could be many more than those of their classical counterparts.

These functions would be more numerous, more complex, and also very different, because connecting and coordinating quantum computers in different physical locations is very different from doing so with classical computers. The construction of the networks that link them and put them to work together is governed by the laws of quantum mechanics, and this poses unprecedented technological and algorithmic challenges.

The most widespread model proposed to make this possible is entanglement swapping, which seeks to distribute pairs of entangled qubits, known as Bell states, between distant QPUs that are not directly connected to each other, using the properties of quantum mechanics. In essence, it makes it possible to extend entanglement through a network to qubits that have never interacted directly before, which is essential to build a distributed quantum computing (DQC) network.
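Entanglement swapping can be illustrated with a few lines of linear algebra. The sketch below (a minimal statevector calculation for intuition, not CESGA's simulation code) prepares two Bell pairs, A-B and C-D, applies an idealized Bell measurement to the middle pair B-C by projecting it onto a Bell state, and checks that the outer qubits A and D end up sharing a Bell pair even though they never interacted:

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>) / sqrt(2)
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)

# Four-qubit state with tensor-factor ordering A, B, C, D:
# qubits A,B share a Bell pair, and so do C,D.
state = np.kron(phi_plus, phi_plus)          # shape (16,)
psi = state.reshape(2, 2, 2, 2)              # indices (A, B, C, D)

# Idealized Bell measurement on the middle pair (B, C):
# project the B and C indices onto <Phi+|.
bell = phi_plus.reshape(2, 2)
ad = np.einsum('abcd,bc->ad', psi, bell.conj())
ad = ad / np.linalg.norm(ad)                 # renormalize after the projection

# A and D, which never interacted, now form a Bell pair.
print(np.round(ad.reshape(4), 3))            # → [0.707 0.    0.    0.707]
```

In a real network the Bell measurement yields one of four outcomes, each requiring a known local correction on A or D; the projection above corresponds to the outcome that needs no correction.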

The researchers at CESGA propose, as a prototypical case, a network involving multiple QPUs connected exclusively by quantum channels to network devices –which, in turn, may or may not be connected to others at a higher layer– forming a kind of tree.

The technology used in quantum channel research is generally photonic and relies on a type of measurement known as a Bell measurement to generate entanglement. “These operations are not perfect, and high-purity quantum states are delicate and difficult to generate, so we need to simulate how these measurements take place and model the physics to calculate fidelity losses,” Llovo clarifies.

Indeed, the generation of high-fidelity Bell pairs is a huge experimental challenge on which research groups from all over the world are working. Their results will generate the different pieces that make up this great puzzle.

The piece being designed at CESGA corresponds to the algorithmic part: unraveling how the operations are implemented once these Bell pairs are available. And this is where the team must contend with the constraints of current technology, which does not always keep pace with scientific advances. “In this case, the major constraint is that our work in DQC is based on simulations on conventional computers. These simulations are very costly in computational terms: a single additional qubit doubles the amount of RAM needed to simulate a quantum system, so it soon becomes impossible to simulate large quantum systems,” says Llovo. And this is not an issue of the Galician FinisTerrae alone: even the entire memory of the world’s most powerful supercomputer is not enough to simulate a quantum system of just 60 qubits.
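The memory blow-up Llovo describes is easy to check. Assuming a simulator that stores the full statevector of an n-qubit system as 2^n double-precision complex amplitudes (16 bytes each), a rough sketch:

```python
# RAM needed by a full-statevector simulator: 2**n amplitudes, 16 bytes each.
def statevector_gib(n_qubits: int) -> float:
    """GiB required to hold the statevector of an n-qubit system."""
    return 16 * 2 ** n_qubits / 2 ** 30

print(statevector_gib(30))            # → 16.0 GiB: a well-equipped server
print(statevector_gib(31))            # → 32.0 GiB: one more qubit doubles it
print(statevector_gib(60) / 2 ** 30)  # → 16.0 EiB: beyond any supercomputer
```

At 60 qubits the statevector alone would occupy 16 exbibytes, far more memory than any existing machine provides.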

The team at CESGA has proposed a network architecture that would enable quantum computing capabilities to scale rapidly. “We have developed a technique to perform collective operations using a router that essentially acts as the glue between different QPUs,” says Llovo. In essence, this will make it possible to enhance the connectivity between units within the network, requiring fewer quantum connections and conserving Bell pairs compared to other existing proposals.
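The saving in connections can be seen with a toy link count (an illustrative comparison under simplifying assumptions, not a description of CESGA's actual proposal): linking N QPUs pairwise requires N(N-1)/2 dedicated quantum channels, whereas routing them all through one central device requires only N.

```python
# Quantum channels needed to connect n QPUs: full mesh vs. a central router.
def mesh_links(n: int) -> int:
    return n * (n - 1) // 2  # one direct channel per pair of QPUs

def star_links(n: int) -> int:
    return n  # one channel from each QPU to the router

for n in (4, 8, 16):
    print(f"{n} QPUs: mesh={mesh_links(n)}, star={star_links(n)}")
```

The gap widens quadratically, which is why a router-based architecture scales more gracefully as QPUs are added.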

Much more than the sum of the parts

The relevance of this project’s results can be measured against the importance of scalability in quantum computing. To understand this, bear in mind that the power of a quantum computer increases exponentially with each additional qubit: every extra qubit doubles the amount of information the system can store and process simultaneously. Therefore, when it becomes a reality, a distributed quantum computer will have all the qubits of the individual nodes, but, thanks to their joint operation, its computing power will be far greater than the sum of the parts.

This will enable the resolution of problems of enormous size and complexity, which are currently beyond the capabilities of even the most advanced supercomputers. However, CESGA researchers caution that the practical applications of these results will not become apparent until the medium to long term. “Our proposal enhances current knowledge and allows us to envision what a quantum data center will look like once Bell pair generation and error correction technologies have matured,” explains Fernández Llovo. While these technologies are not yet available, their development is only a matter of time, with numerous research groups across the globe working on them.

The quantum future

“The average person may never see or use a quantum computer. These devices will not replace conventional computers, but they will accelerate computations to the point of enabling tasks that would take longer than the age of the universe on today’s most powerful supercomputers to be completed on a human time scale, ranging from seconds to months.”

This is Iago F. Llovo’s vision of the future role of quantum computing, acknowledging that its impact will be highly significant in areas that directly affect everyone. He particularly emphasizes its potential in revolutionizing materials science and the development of new drugs, while also highlighting its usefulness in fields like financial market simulation and the coordination and forecasting of electrical systems.

The race to achieve this is both long and complex. While the major players in technological development are making significant strides to bring these devices to market, there remains a critical need to develop a single quantum computer that is either immune to errors or capable of correcting them.

Simultaneously, experimental work is underway to connect different quantum computers. “Until then, we won’t see fully distributed quantum computers, but we are building the foundational elements, the science that will lead to that development. This is why it’s crucial for basic research to receive the time and funding it needs to generate the impact and progress it deserves,” concludes Llovo.


 
