The Coming Breakthroughs in the Global Supercomputing Race

In our August 31, 2015 article (“The U.S. Needs to Rejuvenate the Global Supercomputing Race”), we expressed our concerns regarding the state of the global supercomputing industry; specifically, from a U.S. perspective, the sustainability of Moore’s Law and the increasing competition from the Chinese supercomputing industry. Below is a summary of the concerns we expressed:

  • Technological innovation, along with increasing access to cheap, abundant energy, is the lifeblood of a growing, modern economy. As chronicled by Professor Robert Gordon in “The Rise and Fall of American Growth,” U.S. productivity growth (see Figure 1 below; sources: Professor Gordon & American Enterprise Institute), with the exception of a brief spurt from 1997 to 2004, peaked during the period from the late 1920s to the early 1950s; by 1970, most of today’s everyday household conveniences, along with the most important innovations in transportation and medicine, had already been invented and had diffused across the U.S. Since 1970, almost all U.S. productivity growth can be attributed to the adoption of and advances in the PC, investments in our fiber optic and wireless networks, and the accompanying growth of the U.S. software industry (other impactful technologies since the 1970s include the advent of hydraulic fracturing in oil & gas shale, ultra-deepwater drilling in the Gulf of Mexico, and the commercialization of alternative energy and more efficient battery storage systems, as we first discussed in our July 27, 2014 article “How Fracking Saved the U.S. Economy”). This means that a stagnation in the U.S. computing or communications industries would invariably result in a slowdown in U.S. and global productivity growth;

Figure 1: U.S. Productivity Growth (sources: Professor Gordon & American Enterprise Institute)

  • The progress of the U.S. supercomputing industry, as measured by the traditional FLOPS (floating-point operations per second) benchmark, had experienced a relative stagnation when we last wrote about the topic in August 2015. For example, in 2011 both Intel and SGI were seriously discussing the commercialization of an “exascale” supercomputer (i.e. a system capable of performing 1 x 10^18 calculations per second) by the 2019-2020 time frame. As of today, the U.S. supercomputing community has pushed back its target date for building an exascale supercomputer to 2023 (a short back-of-the-envelope sketch of what the exascale target means appears after Figure 2 below);
  • At the country-specific level, the U.S. share of global supercomputing systems has been declining. As recently as 2012, the U.S. housed 55% of the world’s top 500 supercomputing systems; Japan was second with 12%, and China was third with 8%. By the summer of 2015, the U.S. share of the world’s top 500 supercomputing systems had shrunk to 46%, while Japan and China were tied for a distant second at 8% each. Today, the Chinese supercomputing industry has led an unprecedented surge to claim parity with the U.S., as shown in Figure 2 below.

Figure 2: China – Reaching Parity with the U.S. in the # of Top 500 Supercomputers
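
To put the exascale target in perspective, here is a minimal, illustrative Python sketch (back-of-the-envelope arithmetic only; the 17.59-petaflop figure for Titan comes from the discussion of GPUs later in this article) showing how many Titan-class systems would be needed to reach one exaflop.

```python
# Back-of-the-envelope FLOPS arithmetic (illustrative only).
PETA = 1e15  # 1 petaflop = 1e15 floating-point operations per second
EXA = 1e18   # 1 exaflop  = 1e18 floating-point operations per second

titan_flops = 17.59 * PETA      # Titan's benchmarked performance (2012)
exascale_target = 1.0 * EXA     # the exascale target now slated for 2023

# Roughly how many Titan-class systems equal one exascale machine?
print(round(exascale_target / titan_flops))  # ~57
```

In other words, the 2023 exascale target calls for roughly the computing power of 57 Titans in a single system.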

Since the invention of the transistor in the late 1940s and the advent of the supercomputing industry in the 1960s, the U.S. has always led the supercomputing industry in terms of innovation, sheer computing power, and the customized software needed to take advantage of that computing power (e.g. software designed for precision weather forecasting, gene sequencing, airplane and automobile design, protein folding, and now artificial intelligence). With U.S. economic growth increasingly dependent on innovations in the U.S. computing industry and communications networks, and with China now threatening to surpass the U.S. in terms of supercomputing power (caveat: China’s HPC software industry is probably still a decade behind), it is imperative for both U.S. policymakers and corporations to encourage and provide more resources for the U.S. to stay ahead in the supercomputing race.

In contrast to the tone of our August 31, 2015 article, however, we have grown more hopeful, primarily because of the following developments:

  • Moore’s Law is still alive and well: At CES 2017 in Las Vegas, Intel declared that Moore’s Law remains relevant, with a target release date in the second half of the year for its 10-nanometer microprocessor chips. At a subsequent nationally-televised meeting with President Trump earlier this month, Intel CEO Brian Krzanich announced the construction of its $7 billion Fab 42 in Arizona, a pilot plant for its new 7-nanometer chips. Commercial production of the 7nm chips is scheduled for the 2020-2022 time frame, with most analysts expecting the new plant to incorporate more exotic technologies, such as gallium nitride as a semiconductor material. The next iteration is 5nm chips; beyond 5 nanometers, however, a more fundamental breakthrough will be needed to extend Moore’s Law, e.g. the commercialization of a graphene-based transistor;
  • GPU integration into supercomputing systems: The modern-day era of the GPU (graphics processing unit) began in May 1995, when Nvidia commercialized its first graphics chip, the NV1, the first commercially-available GPU capable of 3D rendering and video acceleration. Unlike a CPU, a GPU is built around a large number of parallel processing threads, allowing it to perform many times more simultaneous calculations than a CPU (the short CPU-versus-GPU timing sketch after this list illustrates the difference). Historically, the supercomputing industry had been unable to take advantage of the sheer processing power of the GPU, given the lack of suitable programming languages specifically designed for GPUs. When the 1.75-petaflop Jaguar supercomputer was unveiled by Oak Ridge National Laboratory in 2009, it was notable as one of the first supercomputers to be outfitted with Nvidia GPUs. Its direct successor, the 17.59-petaflop Titan, was unveiled in 2012 with over 18,000 GPUs. At the time, this raised two concerns: 1) hosting over 18,000 GPUs within a single system was unprecedented and, skeptics argued, could doom the project to endless failures and outages, and 2) too little software existed to take advantage of the sheer processing power of the 18,000 GPUs. These concerns have proven to be unfounded; today, GPUs are turning home PCs into supercomputing systems, while Google has just rolled out a GPU cloud service focused on serving AI customers;
  • AI, machine-learning software commercialization: Perhaps one of the most surprising developments in recent years has been the advent of AI and machine-learning software, yielding results that were unthinkable just five years ago. These include: 1) Google DeepMind’s AlphaGo, which defeated three-time European Go champion Fan Hui 5-0 in 2015 and, finally, the world Go champion Ke Jie earlier this year, 2) Carnegie Mellon’s Libratus, which defeated four of the world’s top poker players over 20 days of play, and 3) the inevitable commercialization of Level 5 autonomous vehicles on the streets of the U.S., likely in the 2021-2025 time frame. Most recently, Microsoft and the University of Cambridge teamed up to develop a machine-learning system capable of writing its own code. The advent of AI in the early 21st century is likely to be a seminal event in the history of supercomputing;
  • Ongoing research into quantum computing: The development of a viable, commercial quantum computer is gaining traction and is probably 10-20 years away from realization. A quantum computer is necessary for tasks that are regarded as computationally intractable on a classical computer (the memory-scaling sketch at the end of this section illustrates why). These include: 1) drug discovery and the ability to customize medical treatments based on the simulation of proteins and how they interact with certain drug combinations, 2) the invention of new materials through simulations at the atomic level, which will allow us to build better conductors and denser battery systems, thus transforming the U.S. energy infrastructure almost overnight, and 3) the ability to run simulations of complex societal and economic systems, which will allow us to more efficiently forecast economic growth and design better public policies and urban planning tools.
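
To make the CPU-versus-GPU contrast above concrete, below is a minimal sketch that times the same matrix multiplication on both devices. It assumes a machine with a CUDA-capable Nvidia GPU, NumPy, and the third-party CuPy library installed; the matrix size is arbitrary and chosen only for illustration.

```python
import time

import numpy as np
import cupy as cp  # assumes a CUDA-capable GPU and the CuPy package are available

n = 4096
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

# CPU matrix multiply: a handful of heavyweight cores.
t0 = time.time()
np.matmul(a, b)
print("CPU seconds:", time.time() - t0)

# GPU matrix multiply: thousands of lightweight threads working in parallel.
a_gpu, b_gpu = cp.asarray(a), cp.asarray(b)
cp.cuda.Stream.null.synchronize()   # wait for the host-to-device copies to finish
t0 = time.time()
cp.matmul(a_gpu, b_gpu)
cp.cuda.Stream.null.synchronize()   # wait for the GPU kernel to complete
print("GPU seconds:", time.time() - t0)
```

On typical hardware the GPU version finishes the multiply many times faster, which is the same parallelism that supercomputers such as Titan exploit at a much larger scale.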
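
On the quantum-computing point, the reason exact simulation of quantum systems is intractable on classical hardware is that an n-qubit state requires tracking 2^n complex amplitudes. The minimal Python sketch below (illustrative only; it assumes 16 bytes per double-precision complex amplitude) shows how quickly that memory requirement outgrows any classical machine.

```python
# Memory needed to store the full state vector of an n-qubit quantum system.
BYTES_PER_AMPLITUDE = 16  # one double-precision complex number (complex128)

for n_qubits in (10, 30, 50, 70):
    amplitudes = 2 ** n_qubits                      # 2^n complex amplitudes
    memory_gb = amplitudes * BYTES_PER_AMPLITUDE / 1e9
    print(f"{n_qubits:>2} qubits -> {amplitudes:.2e} amplitudes, {memory_gb:.2e} GB")

# At roughly 50 qubits the state vector alone requires ~18 petabytes of RAM,
# which is why such simulations are considered intractable on classical computers.
```

This exponential blow-up is precisely what a quantum computer sidesteps by storing the state in the qubits themselves.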