The Coming Breakthroughs in the Global Supercomputing Race

In our August 31, 2015 article (“The U.S. Needs to Rejuvenate the Global Supercomputing Race”), we expressed our concerns regarding the state of the global supercomputing industry; specifically, from a U.S. perspective, the sustainability of Moore’s Law, as well as increasing competition from the Chinese supercomputing industry. Below is a summary of the concerns we expressed:

  • Technological innovation, along with increasing access to cheap, abundant energy, is the lifeblood of a growing, modern economy. As chronicled by Professor Robert Gordon in “The Rise and Fall of American Growth,” U.S. productivity growth (see Figure 1 below; sources: Professor Gordon & American Enterprise Institute)–with the exception of a brief spurt from 1997-2004–peaked during the period from the late 1920s to the early 1950s; by 1970, most of today’s everyday household conveniences, along with the most important innovations in transportation & medicine, had already been invented and diffused across the U.S. Since 1970, almost all U.S. productivity growth can be attributed to the adoption of and advances in the PC, investments in our fiber-optic and wireless networks, and the accompanying growth of the U.S. software industry (other impactful technologies since the 1970s include the advent of hydraulic fracturing in oil & gas shale, ultra-deepwater drilling in the Gulf of Mexico, and the commercialization of alternative energy and more efficient battery storage systems, as we first discussed in our July 27, 2014 article “How Fracking Saved the U.S. Economy”). This means that a stagnation in the U.S. computing or communications industries would invariably slow U.S./global productivity growth;

Figure 1: U.S. Productivity Growth (sources: Professor Gordon & American Enterprise Institute)

  • The progress of the U.S. supercomputing industry, as measured by the traditional FLOPS (floating-point operations per second) benchmark, had experienced a relative stagnation when we last wrote about the topic in August 2015. For example, in 2011, both Intel and SGI seriously discussed the commercialization of an “exascale” supercomputer (i.e. a system capable of performing 1 x 10^18 calculations per second) by the 2019-2020 time frame. As of today, the U.S. supercomputing community has pushed back its target time frame for building an exascale supercomputer to 2023;
  • At the country-specific level, the U.S. share of global supercomputing systems has been declining. As recently as 2012, the U.S. housed 55% of the world’s top 500 supercomputing systems; Japan was second with 12%, and China third with 8%. By the summer of 2015, the U.S. share of the world’s top 500 supercomputing systems had shrunk to 46%, with Japan and China tied for a distant second at 8% each. Today, the Chinese supercomputing industry has led an unprecedented surge to claim parity with the U.S., as shown in Figure 2 below.
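To put the exascale target in perspective, here is a back-of-the-envelope sketch; the “desktop” figure is our own assumption for illustration, not a number from the article:

```python
# Scale of an exascale machine (1e18 FLOPS), per the definition above.
# The desktop figure below is a rough assumption for illustration only.
EXA = 1e18              # exascale target: 1e18 calculations per second
desktop_flops = 1e11    # hypothetical ~100-gigaflop desktop (assumed)

ratio = EXA / desktop_flops          # exascale-seconds per desktop-second
days = ratio / 86400                 # desktop time to match 1 exascale-second

print(f"1 exascale-second = {ratio:.0e} desktop-seconds (~{days:.0f} days)")
```

In other words, under that assumption, a single second of exascale computation would keep such a desktop busy for roughly four months.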

Figure 2: China – Reaching Parity with the U.S. in the # of Top 500 Supercomputers

Since the invention of the transistor in the late 1940s and the advent of the supercomputing industry in the 1960s, the U.S. has always led the supercomputing industry in terms of innovation, sheer computing power, and building the customized software needed to take advantage of said supercomputing power (e.g. software designed for precision weather forecasting, gene sequencing, airplane and automobile design, protein folding, and now, artificial intelligence). With U.S. economic growth increasingly dependent on innovations in the U.S. computing industry and communications network–and with China now threatening to surpass the U.S. in terms of supercomputing power (caveat: China’s HPC software industry is probably still a decade behind)–it is imperative for both U.S. policymakers and corporations to encourage and provide more resources for the U.S. to stay ahead in the supercomputing race.

Unlike the tone of our August 31, 2015 article, however, we have grown more hopeful, primarily because of the following developments:

  • Moore’s Law is still alive and well: At CES 2017 in Las Vegas, Intel declared that Moore’s Law remains relevant, with a second-half 2017 target release date for its 10-nanometer microprocessor chips. At a subsequent nationally-televised meeting with President Trump earlier this month, Intel CEO Brian Krzanich announced the construction of its $7 billion Fab 42 in Arizona, a pilot plant for its new 7-nanometer chips. Commercial production of the 7nm chips is scheduled to occur in the 2020-2022 time frame, with most analysts expecting the new plant to incorporate more exotic technologies, such as gallium nitride as a semiconductor material. The next iteration is 5nm chips; beyond 5 nanometers, however, a more fundamental solution to extend Moore’s Law will be needed, e.g. commercializing a graphene-based transistor;
  • GPU integration into supercomputing systems: The modern-day era of the GPU (graphics processing unit) began in May 1995, when Nvidia commercialized its first graphics chip, the NV1, the first commercially-available GPU capable of 3D rendering and video acceleration. Unlike a CPU, a GPU contains many parallel processing threads, allowing it to perform many times more simultaneous calculations than a CPU. Historically, the supercomputing industry had been unable to take advantage of the sheer processing power of the GPU, given the lack of programming languages specifically designed for GPUs. When the 1.75-petaflop Jaguar supercomputer was unveiled by Oak Ridge National Laboratory in 2009, it was notable as one of the first supercomputers to be outfitted with Nvidia GPUs. Its direct successor, the 17.59-petaflop Titan, was unveiled in 2012 with over 18,000 GPUs. At the time, skeptics raised two concerns: 1) hosting over 18,000 GPUs within a single system was unprecedented and would doom the project to endless failures and outages, and 2) there was too little code capable of exploiting the sheer processing power of 18,000 GPUs. These concerns have proven to be unfounded; today, GPUs are turning home PCs into supercomputing systems, while Google has just rolled out a GPU cloud service focused on serving AI customers;
  • AI, machine-learning software commercialization: Perhaps one of the most surprising developments in recent years has been the advent of AI, machine-learning software, yielding results that were unthinkable just five years ago. These include: 1) Google DeepMind’s AlphaGo, which defeated three-time European Go champion Fan Hui 5-0 in 2015 and, earlier this year, the world Go champion Ke Jie, 2) Carnegie Mellon’s Libratus, which defeated four of the world’s top poker players over 20 days of playing, and 3) the inevitable commercialization of Level 5 autonomous vehicles on the streets of the U.S., likely by the 2021-2025 time frame. Most recently, Microsoft and the University of Cambridge teamed up to develop a machine learning system capable of writing its own code. The advent of AI in the early 21st century is likely to be a seminal event in the history of supercomputing;
  • Ongoing research into quantum computing: The development of a viable, commercial quantum computer is gaining traction and is probably 10-20 years away from realization. A quantum computer is necessary for the processing of tasks that are regarded as computationally intractable on a classical computer. These include: 1) drug discovery and the ability to customize medical treatments based on the simulation of proteins and how they interact with certain drug combinations, 2) invention of new materials through simulations at the atomic level. This will allow us to build better conductors and denser battery systems, thus transforming the U.S. energy infrastructure almost overnight, and 3) the ability to run simulations of complex societal and economic systems. This will allow us to more efficiently forecast economic growth and design better public policies and urban planning tools.
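The reason GPUs suit supercomputing workloads is worth a concrete illustration. Many scientific kernels apply the same arithmetic independently to every element of a large array, so the work splits cleanly across thousands of threads. A minimal CPU-side sketch of that pattern (the kernel and chunk count are our own illustrative choices, not code from any system named above):

```python
# Toy sketch of data parallelism, the pattern GPUs exploit: the same
# operation applied independently to each element means the work can be
# split into chunks and computed in any order (or all at once on a GPU).

def saxpy(a, xs, ys):
    """SAXPY (a*x + y, element-wise), a classic data-parallel kernel."""
    return [a * x + y for x, y in zip(xs, ys)]

a = 2.0
xs = list(range(1000))
ys = [1.0] * 1000

# Serial result, as a single CPU core would compute it.
serial = saxpy(a, xs, ys)

# Split the work into 4 interleaved chunks, as 4 GPU thread blocks might.
# Each chunk depends on no other, so all 4 could run simultaneously.
parallel = [None] * 1000
for lane in range(4):
    for j, v in enumerate(saxpy(a, xs[lane::4], ys[lane::4])):
        parallel[lane + 4 * j] = v

assert parallel == serial  # identical answers regardless of the split
```

Because each output element needs only its own inputs, the same code scales from 4 chunks to the tens of thousands of threads a GPU schedules; workloads with heavy cross-element dependencies do not partition this neatly, which is why not every program benefited from Titan’s 18,000 GPUs.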

The U.S. Needs to Rejuvenate the Global Supercomputing Race

Technology, along with increasing access to cheap energy, is the lifeblood of a growing, modern economy. As we discussed in our December 2, 2012 article (“The Global Productivity Riddle and the Supercomputing Race“), fully 85% of productivity growth in the 20th century could be attributed to technological progress, as well as increasing accessibility/sharing of cheap energy sources due to innovations in oil and natural gas hydraulic fracturing, ultra-deep water drilling, solar panel productivity, and the commercialization of Generation III+ nuclear power plants and deployment of smart power grids.

Perhaps the most cited example where the combined effects of technological and human capital investments have had the most economic impact is the extreme decline in computing and communication costs. Moore’s Law, the ability of computer engineers to double the amount of computing power in any given space every 2 years, has been in effect since the invention of the transistor in the late 1940s. Parallel to this has been the rise of the supercomputing industry. Started by Seymour Cray at Control Data Corporation in the 1960s, the supercomputing industry has played a paramount role in advancing the sciences, most recently in computationally intensive fields such as weather forecasting, oil and gas exploration, human genome sequencing, molecular modeling, and physical simulations with the purpose of designing more aerodynamic aircraft or better conducting materials. No doubt, breakthroughs in more efficient supercomputing technologies and processes are integral to the ongoing growth in our living standards in the 21st century.
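Taking the doubling rule above at face value, a quick compounding sketch shows why its sustainability matters so much (the function and the rounded dates are ours, for illustration):

```python
# Moore's Law as stated above: computing power in a given space doubles
# roughly every 2 years. Compounding that rule gives the implied growth.
def moore_multiplier(years, doubling_period=2.0):
    """Implied growth factor after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

# From the transistor's invention (late 1940s, ~1947) to 2015 is ~68 years:
print(f"~{moore_multiplier(68):,.0f}x")  # 2**34 = 17,179,869,184x
```

A ten-billion-fold improvement over seven decades is what the rest of the economy has been compounding on; even a modest lengthening of the doubling period cuts deeply into that trajectory.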

Unfortunately, advances in both the U.S. and global supercomputing industry have lagged in the last several years. Every six months, a list of the world’s top 500 most powerful supercomputers is published. The latest list was compiled in June 2015; aside from providing the most up-to-date supercomputing statistics, the semi-annual list also publishes the historical progress of global supercomputing power, each country’s share of global supercomputing power, as well as a reasonably accurate projection of what lies ahead. Figure 1 below is a log chart summarizing the progression of the top 500 list from its inception in 1993.

Figure 1: Historical Performance of the World’s Top 500 Supercomputers

As shown in Figure 1 above, both the sum of the world’s top 500 systems’ computing power and the power of the #1 ranked supercomputer have remained relatively stagnant over the last several years. Just three years ago, there was serious discussion of the commercialization of an “exaflop” supercomputer (i.e. a supercomputer capable of 1 x 10^18 calculations per second) by the 2018-2019 time frame. Today, the world’s top computer scientists are targeting a more distant time frame of 2023.

From the U.S. perspective, the slowdown in the supercomputing industry is even more worrying. Not only has innovation slowed down at the global level, but the U.S. share of global supercomputing power has been declining as well. Three years ago, the U.S. housed 55% of the world’s top 500 supercomputing power; Japan was second, with 12% of the world’s supercomputing power. Rounding out the top five were China (8%), Germany (6%), and France (5%). Today, the U.S. houses only 46% of the world’s supercomputing power, with countries such as the UK, India, Korea, and Russia gaining ground.

Figure 2: Supercomputing Power Distributed by Country


Bottom line: Since the invention of the transistor in the late 1940s and the advent of the supercomputing industry in the 1960s, the U.S. has always led the supercomputing industry in terms of innovation and sheer computing power. With countries such as China and India further industrializing and developing their computer science/engineering expertise (mostly with government funding), U.S. policymakers must encourage and provide more resources to stay ahead of the supercomputing race. To that end, President Obama’s most recent executive order calling for the creation of a National Strategic Computing Initiative–with the goal of building an “exascale” supercomputer–is a step in the right direction. At this point, however, whether the industry can deploy an energy-efficient exascale supercomputer by the less ambitious 2023 time frame is still an open question.