WeeklyWorker

07.01.2021

From copy to overtake

China has taken a technological lead in quantum computing. Yassamine Mather says this could well impact on the military balance of power

In considering changes in the dominance of any particular country, it is necessary to take into account its economic, technological and military power.

There is little doubt that since 1945 the USA has been and remains the global power. But in one respect - technology - China is moving forward at a great pace. After decades of copying and emulating the technology of the hegemon (at times making limited additions and enhancements), it is finally approaching the point where it is likely to overtake the USA in several respects.

The progress could be said to have started in the field of high-performance computing (HPC) and has now extended to quantum computing. HPC is increasingly important in all areas of research, from medical science to artificial intelligence. For decades it was the domain of theoretical scientists, together with computer and software developers. It was used in modelling complex physical phenomena, such as weather, fluid dynamics, molecular interactions, astronomical calculations and engineering design. However, nowadays it is an essential research tool in many, perhaps surprising areas.

The ability to collect big data increases the need to be able to analyse that data, and this is an area where HPC can be a most useful tool. Recently it has been used by researchers in pharmacology, social sciences, semantics, geology, archaeology, urban planning, graphics, genomics, brain imaging, economics, game design and even music. Machine learning and artificial intelligence rely heavily on high-performance computing.

The main benefit comes from speeding up calculations/simulations. Let me give you a simple example. Someone has given you a book of 1,000 pages and has asked you to write a comprehensive summary of each page in one line. It will take you a considerable amount of time to read each page carefully and write a summary of that page, collate all the summaries and put them in a usable format.

Now imagine you could call on 1,000 colleagues with similar skills and capabilities to your own. You send each of them one page, ask them to summarise it in one line and return that summary to you. The process could take far less time, provided you can rely on rapid communication - email or other forms of computer connectivity. This is what parallel processing and HPC do. A mathematical or analytical task, or the simulation of a more complex one, is divided amongst large numbers of nodes, each with many processors and each processor with many cores. A task that would take weeks or months on an ordinary laptop or personal computer can be completed by an HPC system in minutes.
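
For readers who want to see the idea in code, here is a rough sketch in Python of the same 'divide and collect' pattern. The summarise_page function is a made-up stand-in for real work, and on an actual HPC system the pages would be spread across many nodes rather than the cores of a single machine.

```python
# A minimal sketch of the 'divide and collect' idea behind parallel processing.
# summarise_page is a hypothetical stand-in for real work; on an HPC cluster
# the same pattern is spread across many nodes, not just one machine's cores.
from multiprocessing import Pool

def summarise_page(page_text: str) -> str:
    # Placeholder: pretend the first sentence is a one-line summary.
    return page_text.split(".")[0]

if __name__ == "__main__":
    book = [f"Contents of page {i}. More text follows." for i in range(1, 1001)]

    # Hand each 'page' to a pool of workers - like sending one page to each
    # of 1,000 colleagues - then collate the returned summaries in order.
    with Pool() as pool:
        summaries = pool.map(summarise_page, book)

    print(len(summaries), "one-line summaries collected")
```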

In medical science it facilitates the analysis and categorisation of large amounts of complex patient data. For example, in the US the National Program of Cancer Registries collects data pertaining to around 550,000 cases each year. That data is unstructured and previously went through a mostly manual process of formatting and categorisation. This meant that for each set of data there was a significant delay - often up to two years - between collection, analysis and the publication of results. Doctors were therefore dealing with data that was out of date, as it did not reflect results achieved with new treatments. Researchers at Oak Ridge National Laboratory have now explored how to leverage high-performance computing and deep learning to semi-automate the extraction and analysis of this information.

Another example is the 'living heart' project at Stanford, in which researchers used HPC to create a virtual human heart. This ultra-precise digital model let researchers test the cardiovascular implications of surgical procedures and new drugs, taking into account more than 25 million variables.

HPC has played a significant role in genomics too. A genome is the complete set of DNA in an organism. This information determines all biological functions and the myriad of variations that make some of us immune and others susceptible to various diseases. ‘Next-generation sequencing’ technologies in the late 2000s led to a dramatic decrease in the cost of DNA sequencing. This, combined with advances in HPC storage and computing, helped create genomic data banks used extensively in medical and biological sciences.

Covid-19 medical research and data gathering is another example. From the data science of disease propagation and the simulation of antibody protein structures to the social understanding of the human response to the crisis, all have relied on HPC. Supercomputers are used to run complex mathematical models, transforming vast volumes of evolving Covid data into simulations of biological and chemical processes. These simulations advance medical scientists' understanding of the new virus and its interactions with the human body - all of which is helping to speed up the development of new treatments.

Current limits of HPC

Until a few years ago the bulk of HPC relied on Intel processors, themselves subject to the limits of Moore's law. In 1965 Gordon E Moore, an American engineer and co-founder of Intel Corporation, noted:

The complexity for minimum component costs has increased at a rate of roughly a factor of two per year… Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer.

In 1975 Moore revised his prediction: with the rate of increase slowing, he forecast that the number of components would double approximately every two years, rather than every year. As time would tell, chip complexity did indeed come to double roughly every two years - a figure often quoted as every 18 months.
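
To get a feel for what a two-year doubling period implies, here is a back-of-the-envelope calculation in Python. The 1971 baseline of roughly 2,300 transistors (Intel's first microprocessor) is used purely as an illustrative starting point; real chips do not follow the curve exactly.

```python
# Rough illustration of Moore's law as a doubling process.
# Baseline: roughly 2,300 transistors in 1971 (Intel's first microprocessor),
# used here purely as an illustrative starting point.
baseline_year, baseline_transistors = 1971, 2_300
doubling_period_years = 2  # Moore's revised 1975 estimate

for year in (1981, 1991, 2001, 2011, 2021):
    doublings = (year - baseline_year) / doubling_period_years
    projected = baseline_transistors * 2 ** doublings
    print(f"{year}: ~{projected:,.0f} transistors per chip (projected)")
```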

The problem is that Moore's law relies on transistors shrinking in size, and here eventually the laws of physics intervene. In particular, electron tunnelling prevents the length of a gate - the part of a transistor that turns the flow of electrons on or off - from being smaller than around 5nm (nanometres). By 2020 we had reached the limits of this law. As Charles Leiserson - a computer scientist at MIT and a pioneer of parallel computing - said in February 2020 in reference to the limits of Intel chips: “It’s over. This year that became really clear.”

Instead we are seeing the growing use of GPUs (graphics processing units) and research into quantum computing. GPUs are specialised processors originally designed to accelerate ‘graphics rendering’. However, they can also process many pieces of data simultaneously, making them useful for high-performance computing - especially machine learning.

A central processing unit (CPU) works together with a GPU to increase the throughput of data and the number of concurrent calculations within an application. Using the power of parallelism, a GPU can complete far more work in the same amount of time than a CPU can. All this paved the way for the GPU producer, Nvidia, to overtake Intel: in July 2020 Nvidia’s market capitalisation on the Nasdaq surpassed Intel’s. China has been making its own moves here too. By early 2020 it was clear that the Chinese GPU manufacturer, Changsha Jingjia Microelectronics, was at the research stage of a graphics card at the level of Nvidia’s GTX 1080, and by all accounts the company has made major advances in the last 12 months.
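
The pattern is easy to illustrate with an array library that mirrors NumPy on the GPU. The sketch below assumes a CUDA-capable GPU and the CuPy library; it is meant to show the programming pattern, not to serve as a rigorous benchmark.

```python
# Same element-wise computation on CPU (NumPy) and GPU (CuPy).
# Assumes a CUDA-capable GPU with CuPy installed; the point is the pattern,
# not a careful benchmark.
import time
import numpy as np
import cupy as cp

n = 50_000_000
x_cpu = np.random.rand(n).astype(np.float32)

t0 = time.time()
y_cpu = np.sqrt(x_cpu) * 2.0 + 1.0          # runs on a handful of CPU cores
print("CPU:", time.time() - t0, "seconds")

x_gpu = cp.asarray(x_cpu)                    # copy the data to GPU memory
t0 = time.time()
y_gpu = cp.sqrt(x_gpu) * 2.0 + 1.0           # runs across thousands of GPU cores
cp.cuda.Stream.null.synchronize()            # wait for the GPU to finish
print("GPU:", time.time() - t0, "seconds")
```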

Quantum

How is a quantum computer different from a traditional one? Traditional computers use ‘bits’, each of which is either 0 or 1, to represent information. Quantum computers use ‘qubits’ (quantum bits), which can exist in superpositions of 0 and 1, so that n qubits can in effect hold 2^n possible states at once - giving quantum machines the potential to be exponentially more powerful for certain kinds of problem.
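
A rough way to see the scale of the difference: describing the state of n qubits on a classical machine requires keeping track of 2^n complex amplitudes. A few lines of Python make the point (the 16 bytes per amplitude is simply the size of a standard double-precision complex number).

```python
# Why qubits scale so differently from bits: a classical description of an
# n-qubit state needs 2**n complex amplitudes (16 bytes each as complex128).
for n in (10, 30, 50, 300):
    amplitudes = 2 ** n
    gigabytes = amplitudes * 16 / 1e9
    print(f"{n} qubits -> 2^{n} = {amplitudes:.3e} amplitudes "
          f"(~{gigabytes:.3e} GB to store classically)")
```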

Recent reports suggest that China has made great advances in this area and, if they are correct, this marks a significant change. According to a paper published in the journal Science on December 3 2020, a team at the University of Science and Technology of China in Hefei used a photon-based quantum computer to complete a boson-sampling calculation around 100 trillion times quicker than existing supercomputers could manage.1 This was achieved by manipulating particles of light, in complete contrast with Google’s quantum computer, which relied on ultra-cold superconducting chips.

Xinhua, China’s official news agency, claimed the experimental quantum computer - which comprises lasers, mirrors, prisms and photon detectors - could perform the calculation 10 billion times faster than the quantum computer unveiled by Google the previous year.

According to the journal Nature,

Starting from laser pulses, the researchers encoded the information in the spatial position and the polarisation of particular photon states - the orientation of the photons’ electromagnetic fields. These states were then brought together to interfere with one another and generate the photon distribution that represents the output. The team used photodetectors capable of registering single photons to measure that distribution, which in effect encodes the calculations that are so hard to perform classically.2

That said, unlike Google’s quantum computer, the photonic circuit is not yet programmable and therefore cannot be applied to practical problems. However, if the scientists in Hefei succeed in building an efficient programmable chip, this would be a significant development.

Jiuzhang

The Hefei machine has been named Jiuzhang, after an ancient Chinese mathematical text. It can perform an extremely esoteric calculation, called ‘Gaussian boson sampling’, in 200 seconds. The same task would take the world’s fastest classical supercomputer, Fugaku, around 600 million years!
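
The headline comparison is easy to check with a little arithmetic: 600 million years expressed in seconds, divided by Jiuzhang's 200 seconds, gives a speed-up of roughly 10^14 - about 100 trillion.

```python
# Back-of-the-envelope check of the reported speed-up.
seconds_per_year = 365.25 * 24 * 3600
fugaku_estimate = 600e6 * seconds_per_year   # ~600 million years, in seconds
jiuzhang_time = 200                          # seconds

print(f"speed-up ~ {fugaku_estimate / jiuzhang_time:.2e}")  # roughly 1e14
```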

Fabio Sciarrino, a quantum physicist at Sapienza University of Rome, told the US-based outlet Science News that his first impression of the Chinese quantum computer was simply “wow”. Barry Sanders, director of the Institute for Quantum Science and Technology at the University of Calgary, Canada, called the feat “one of the most significant results in the field of quantum computing”.

Anton Zeilinger, a noted quantum physicist and president of the Austrian Academy of Sciences, said that, following this experiment, there is a very good chance that quantum computers may be used very broadly in the near future:

I’m extremely optimistic in that estimate, but we have so many clever people working on these things, including my colleagues in China. So, I am sure we will see quite rapid development.

Quantum machines can take computational shortcuts when simulating extremely complex scenarios, whereas conventional computers have to ‘brute force’ their way to a solution, taking significantly more time in the process. Moreover, the computing power of quantum machines can increase exponentially, as more qubits are added. “The feat cements China’s position in the first echelon of nations in quantum computing,” the University of Science and Technology of China said in a news release.

Applications

In both the natural and social sciences, deep learning and machine learning are offering new ways to train models and classify data. The aim is to teach computers to learn from examples: to absorb the data they are given and use it to classify new inputs.

To teach a computer to classify input, one option is what is referred to as the ‘standard machine learning approach’: we select features of an image - for example, its corners and boundaries - and use them to train the computer. Every object subsequently presented is recognised using the features the computer has learnt, and then classified.

Deep learning uses more advanced techniques. Images of objects or scenes are fed directly into the deep learning algorithm. When there is a large amount of data - tens of thousands of images - it is necessary to use a high-performance GPU to train a model that can recognise objects with reasonable accuracy.
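
The contrast between the two approaches can be sketched in a few lines of Python using scikit-learn. The ‘images’ below are random arrays and the hand-crafted features are purely illustrative; the point is the shape of the two pipelines, not the accuracy of the toy models.

```python
# Toy contrast between the 'standard' approach (hand-crafted features + classifier)
# and a deep-learning-style approach (raw pixels into a neural network).
# Random data is used purely to show the structure of the two pipelines.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
images = rng.random((200, 28, 28))           # 200 fake 28x28 greyscale 'images'
labels = rng.integers(0, 2, size=200)        # two made-up classes

# Standard machine learning: extract simple hand-crafted features first.
def features(img):
    gy, gx = np.gradient(img)
    edges = np.sqrt(gx**2 + gy**2)
    return [img.mean(), img.std(), edges.mean(), edges.max()]

X_features = np.array([features(img) for img in images])
RandomForestClassifier().fit(X_features, labels)

# Deep-learning style: feed the raw pixels straight into a (small) neural network.
X_raw = images.reshape(len(images), -1)
MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300).fit(X_raw, labels)
```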

The time you need to build a model depends on the capability and number of GPUs you have. However, the use of quantum computing will be a game-changer. In a paper published in Physical Review Letters, Vedran Dunjko and his co-authors show how quantum effects can provide quadratic improvements in learning efficiency and exponential improvements in performance, when compared to classical techniques in a wide range of machine learning programs:

The progress in machine learning critically relies on processing power. Moreover, the type of underlying information processing that many aspects of machine learning rely upon is particularly amenable to quantum enhancements. As quantum technologies emerge, quantum machine learning will play an instrumental role in our society - including deepening our understanding of climate change, assisting in the development of new medicine and therapies, and also in settings relying on learning through interaction, which is vital in automated cars and smart factories.

Last year, a team of theorists devised a quantum algorithm that solves this kind of machine learning problem in logarithmic rather than polynomial time. That is a major development. However, the work was entirely theoretical - and it is this algorithm that the Chinese researchers have now implemented on their quantum computer for the first time.

As in any new area of science, quantum computing has its critics. Amongst them is Gil Kalai, a mathematician at the Hebrew University of Jerusalem, who argues that quantum computing is a mirage. According to Kalai, all physical systems are noisy, and qubits kept in highly sensitive ‘superpositions’ will be corrupted by any interaction with the outside world; getting the noise down far enough would violate certain fundamental theorems of computation. When Kalai talks about ‘noise’ he is referring to errors in a process and the likelihood of such errors affecting its outcome.

It should be noted that this is a minority view, originally expressed a few years ago, and there have since been major advances in quantum error-correction protocols addressing this particular problem. One approach is presented in a paper by Benjamin Nachman, Miroslav Urbanek, Wibe A de Jong and Christian W Bauer, entitled ‘Unfolding quantum computer readout noise’, published in npj Quantum Information (a Nature research journal). The paper addresses readout errors - an important class of qubit errors - with ‘matrix inversion’, using a response matrix built from simple operations to probe the rate of transitions from known initial quantum states to readout outcomes.
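
The basic idea can be illustrated in a few lines. What follows is a toy, single-qubit version of the matrix-inversion idea only - not the authors' full unfolding procedure. A response matrix R, of the kind built from calibration measurements, records how often each true state is read out as each outcome; inverting it recovers an estimate of the true distribution from the noisy measured one.

```python
# Toy illustration of readout-error mitigation with a response matrix.
# R[i, j] = probability of *reading* outcome i when the *true* state was j,
# for a single qubit with made-up error rates. Not the paper's full
# unfolding method - just the basic matrix-inversion idea.
import numpy as np

R = np.array([[0.97, 0.05],    # P(read 0 | true 0), P(read 0 | true 1)
              [0.03, 0.95]])   # P(read 1 | true 0), P(read 1 | true 1)

true_probs = np.array([0.70, 0.30])       # the distribution we wish we could see
measured = R @ true_probs                 # what the noisy readout actually gives

recovered = np.linalg.solve(R, measured)  # invert the response matrix
print("measured :", measured)             # distorted by readout errors
print("recovered:", recovered)            # ~[0.70, 0.30]
```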

Of course, we still face a number of major challenges in the development of superfast quantum computers, mainly because they are fragile: even the slightest fault can cause a major error in calculations and simulations. However, a considerable amount of time and effort is going into resolving such errors. For example, IBM has for the first time been able to detect both types of quantum error - bit-flips and phase-flips - that occur in any real quantum computer. In other words, the building blocks needed for quantum error correction are becoming available.

Even before December’s announcements, we knew that China was reaching important milestones in quantum sciences, including the first quantum science satellite and a quantum network connecting Beijing and Shanghai. There is speculation that a “quantum arms race will transform warfare”.3 This assertion is partly based on the assumption that quantum computers will break internet encryption. But we are still a long way from that scenario, as current working quantum computers have only around 50 qubits, which means that air power and nuclear capability remain the determining factors when it comes to deciding who is the global hegemon.

However, there is no doubt that China’s scientific achievements, at a time of aggressive economic and strategic competition, mean we can expect a US attempt to catch up.


  1. Boson sampling refers to calculating the probability distribution of many bosons - a category of fundamental particle that includes photons - whose quantum waves interfere with one another in a way that essentially randomises the positions of the particles. The problem was devised in 2011 by two computer scientists, Scott Aaronson and Alex Arkhipov. The probability of detecting a boson at a given position can be calculated from an equation in many unknowns.↩︎

  2. nature.com/articles/d41586-020-03434-7.↩︎

  3. technologyreview.com/2019/01/03/137969/us-china-quantum-arms-race.↩︎