The Global Productivity Riddle and the Supercomputing Race

Neoclassical (mainstream) economists define productivity as economic output (usually GDP) per unit of input, the inputs typically being capital and labor. Analyses of 20th-century economic growth, however, suggest that growth in physical capital per worker accounted for at most 15% of the rise in output per worker. That leaves fully 85% of 20th-century productivity growth unexplained by the standard capital-and-labor framework.
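In growth-accounting terms, the decomposition looks like this (a sketch only; the capital share α of roughly 0.3 is illustrative, and the exact figure varies by study):

```latex
% Growth accounting (Solow) -- sketch with an illustrative capital share alpha of about 0.3
\[
  Y = A\,K^{\alpha}L^{1-\alpha}
  \;\;\Longrightarrow\;\;
  \frac{\dot{Y}}{Y} \;=\; \frac{\dot{A}}{A} \;+\; \alpha\,\frac{\dot{K}}{K} \;+\; (1-\alpha)\,\frac{\dot{L}}{L}
\]
% In per-worker terms (y = Y/L, k = K/L):
\[
  \frac{\dot{y}}{y} \;=\; \underbrace{\frac{\dot{A}}{A}}_{\text{Solow residual}} \;+\; \alpha\,\frac{\dot{k}}{k}
\]
% If capital deepening (the second term) explains only ~15% of the growth in output per
% worker, the residual A -- labeled "technological progress" -- carries the other ~85%.
```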

Nobel Laureate Robert Solow was the first to propose that this unexplained 85% was simply “a measure of our ignorance,” which he labeled technological progress. Robert Ayres, a renowned economist and physicist, asserts that the increasing consumption of energy explains nearly all of the 20th century’s productivity growth. We will explore this issue in a future newsletter, but not surprisingly, the future (indeed, the survival) of the global economy is intrinsically linked to the availability of (cheap) energy sources. With the exception of a few industries (computers, communications, healthcare) and advances over the last decade (solar PV, hybrid vehicles, carbon composites on the Boeing 787, the smart grid, etc.), productivity growth in industrial countries since the end of WWII has relied mostly on the increasing consumption of fossil fuels. While natural gas will act as a legitimate “bridge fuel” for at least the next decade, it is imperative to keep investing in alternative energy technologies, as the “externalities” of burning fossil fuels remain high, especially in parts of China and other densely populated areas.

Speaking of advances in the computing industry, supercomputing and supercomputing research remain an area of U.S. dominance. Last month, the 40th semi-annual edition of the Top 500 list of the world’s most powerful supercomputers was published at the SC2012 supercomputing conference in Salt Lake City. Last year’s biggest surprise (at least to those outside the supercomputing community) was the ascendance of the Japanese in the rankings. At the time, the number one supercomputer was the “K Computer” built by Fujitsu, using its own proprietary SPARC64 VIIIfx CPUs. Powered by 88,128 CPUs (each with eight cores, for a total of 705,024 cores), the “K Computer” is capable of a peak performance of 10.51 petaflops.
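As a quick sanity check on those figures (a back-of-the-envelope sketch using only the numbers quoted above):

```python
# Back-of-the-envelope check on the K Computer figures cited above.
cpus = 88_128            # SPARC64 VIIIfx processors
cores_per_cpu = 8
total_cores = cpus * cores_per_cpu
print(total_cores)       # 705024, matching the reported core count

k_petaflops = 10.51      # reported performance figure
gflops_per_core = k_petaflops * 1e6 / total_cores   # 1 petaflop = 1e6 gigaflops
print(round(gflops_per_core, 1))                     # ~14.9 gigaflops per core
```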

Earlier this year, however, the U.S. regained the supercomputing crown when the IBM Blue Gene-powered “Sequoia” at Lawrence Livermore National Laboratory came online with a staggering peak performance of 16.33 petaflops. The latest November 2012 list ushered in a new number one supercomputer – the AMD CPU/Nvidia GPU-powered monster “Titan” – coming in at 17.59 petaflops. More importantly, Titan is slated for civilian use. One of its first projects is to run simulations designed to improve the efficiency of diesel and biofuel engines.

On the other hand, China, which captured the supercomputing crown in October 2010 with the Tianhe-1A supercomputer at the National Supercomputing Center in Tianjin (rated at 2.57 petaflops), has now sunk to 8th place.

From a geopolitical standpoint, the United States has re-occupied the top spot after ceding it to the Japanese last year, and to the Chinese the year before. On a country basis, the U.S. houses 55% of the Top 500’s combined supercomputing power, up from 43% just 12 months ago (note that the NSA – which houses some of the most powerful systems in the world – stopped reporting in 1998). Japan is second, with 12% of the world’s supercomputing power. Rounding out the top five are China (8%), Germany (6%), and France (5%). The UK, which ranked third just three years ago (with 5.5% of the world’s supercomputing power), is now in 6th place, housing just 4.5%.

Aside from providing the most up-to-date supercomputing statistics, the semi-annual list also publishes the historical progress of global supercomputing power – as well as a reasonably accurate projection of what lies ahead. Following is a log chart summarizing the progression of the Top 500 list since its inception in 1993, along with a ten-year projection:

[Chart: Top 500 supercomputing performance on a log scale, 1993 to present, with a ten-year projection]

Today, a desktop with an Intel Core i7 processor operates at about 100 gigaflops (note that we are excluding the GPU from this calculation) – the equivalent of an “entry-level” supercomputer on the Top 500 list in 2001, or the most powerful supercomputer in the world in 1993. On the highest end, the power of the Titan supercomputer is equivalent to the combined performance of the world’s top 500 supercomputers just four years ago. Moreover, the combined performance of “Sequoia” and “Titan” makes up more than 20% of the combined performance of all the supercomputers on the Top 500 list today. By the 41st semi-annual edition of the Top 500 list next June, the combined performance of the world’s 500 fastest supercomputers should exceed 200 petaflops (compared to 162 petaflops today, and just 74 petaflops a year ago).
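These comparisons are simple ratios and growth rates; here is a quick sketch using only the figures quoted above (the list totals are rounded, so the results are approximate):

```python
# Checking the share and growth claims above with the quoted figures.
sequoia_pf, titan_pf = 16.33, 17.59      # petaflops
list_total_pf = 162.0                    # combined Top 500 performance today
list_total_year_ago_pf = 74.0            # combined performance a year ago

share = (sequoia_pf + titan_pf) / list_total_pf
print(f"{share:.1%}")                    # ~20.9%, i.e. "more than 20%"

# If the list keeps growing at the same annual rate, six months from now:
annual_growth = list_total_pf / list_total_year_ago_pf       # ~2.2x per year
projected_next_june = list_total_pf * annual_growth ** 0.5   # half a year of growth
print(round(projected_next_june))        # ~240 petaflops, comfortably above 200
```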

Simulations that would have required 10 years of computing time on the most powerful supercomputer of two years ago take just a year on Titan (roughly, since Linpack – the benchmark used to measure supercomputing performance – is not exactly representative of real-world workloads). Tasks that take an immense amount of computing time today – such as precision weather forecasts, gene sequencing, airplane and automobile design, protein folding, etc. – will continue to be streamlined as newer and more efficient processors and software are designed.

By 2018-2019, the top supercomputer should reach a sustained performance of an exaflop (i.e. 1,000 petaflops) – this is both SGI’s and Intel’s stated goal (a simple extrapolation is sketched below). IBM believes that such a system is needed to support the “Square Kilometre Array” – a radio telescope in development that will be able to survey the sky 10,000 times faster than ever before and will be 50 times more sensitive than any current radio instrument – and will provide better answers to the origin and evolution of the universe.

The ongoing “democratization” of the supercomputing industry should also result in improvements in solar panel designs, better conductors, more effective drugs, etc. As long as global technology innovation isn’t stifled, the outlook for global productivity growth – and by extension, global economic growth and improvements in living standards – will remain bright for years to come. Advances in materials design should also propel the private sector’s efforts to commercialize space travel and reduce the costs of launching satellites. Should the quantum computer be commercialized soon (and quantum computing advances are coming at a dramatic rate), we should get ready for the next major technological revolution (and secular bull market) by 2015 to 2020. Make no mistake: the impact of the next technological revolution will dwarf that of the first and second industrial revolutions.
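As a footnote to the 2018-2019 exascale projection above, a simple extrapolation of the list’s historical growth rate lands in the same window (a rough sketch; the 1.9x-per-year growth factor is an assumed round number, and real progress arrives in uneven jumps as individual systems come online):

```python
import math

# Rough extrapolation of when the #1 system reaches an exaflop (1,000 petaflops),
# assuming the historical Top 500 trend of roughly 1.9x per year continues.
titan_pf = 17.59                 # Titan's result, November 2012
exaflop_pf = 1_000.0
annual_growth = 1.9              # assumed historical growth factor (illustrative)

years_needed = math.log(exaflop_pf / titan_pf) / math.log(annual_growth)
print(round(years_needed, 1))          # ~6.3 years
print(round(2012 + years_needed))      # ~2018, i.e. the 2018-2019 window
```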
