May 14, 2012
While Thomas Sterling’s interview about the impossibility of reaching zettaflops made a lot of sense, the history of negative predictions about technology is an embarrassing one. Here are three examples:
"I think there is a world market for maybe five computers."
Thomas Watson, chairman of IBM, 1943.
"There is no reason anyone would want a computer in their home."
Ken Olsen, president, chairman and founder of Digital Equipment Corp., 1977.
"Next Christmas the iPod will be dead, finished, gone, kaput."
Sir Alan Sugar, British entrepreneur, 2005.
If we wind back the clock to the days of megaflops, there were no commodity microprocessors (i.e., the killer micros that put paid to many proprietary architectures) and there were no multicore processors. Indeed, the Cray-1 was a single-processor machine. There was no OpenMP, no MPI, and compute accelerators were the size of a fridge and cost tens of thousands of dollars.
Who would have thought that today’s HPC systems would use compute accelerators the size of a paperback book that were millions of times more powerful and cost a small fraction of the price? And I’ve lost count of how many times I’ve been told that the next generation of microprocessors would be the last major advance because the photolithography techniques used to manufacture chips had reached a limit beyond which devices could not shrink any further. The industry has achieved the impossible before, and it will do so again.
Moore’s Law, which states that the number of transistors on an integrated circuit doubles every two years, is often taken to mean that performance will double every two years (some say every 18 months). What started life as an observation has become the target that marketing men guarantee and engineering budgets are set against. And the straight-line graphs that technologists use to predict the future suggest that zettaflops systems will be built around the year 2030.
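To see where that straight line points, here is a minimal back-of-the-envelope sketch in Python. The starting point (a roughly 10 petaflops top system in 2012) and the doubling period (the roughly 14 months the TOP500 lists have historically shown, somewhat faster than Moore's two years) are illustrative assumptions, not figures from the article:

```python
import math

# Illustrative assumptions, not figures from the article:
start_year = 2012
start_flops = 10e15        # ~10 petaflops: top system circa 2012
target_flops = 1e21        # 1 zettaflops
doubling_years = 14 / 12   # TOP500 performance has doubled roughly every 14 months

# Doublings needed to reach the target, projected along the straight line.
doublings = math.log2(target_flops / start_flops)
arrival_year = start_year + doublings * doubling_years
print(f"{doublings:.1f} doublings -> zettaflops around {arrival_year:.0f}")
```

With these assumptions the arithmetic lands at roughly 2031, which is where an "around 2030" projection comes from; stretch the doubling period to Moore's two years and the date slips out towards 2045.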
Professor Sterling pioneered the use of compute clusters and is a Gordon Bell Prize winner. He has excellent credentials in HPC, and I can’t refute a single fact that he put forward in his interview -- indeed, I am generally in full agreement with his insights on the issues the industry faces -- but I am certain that he is wrong in his conclusion.
Arthur C. Clarke, the science fiction writer, identified what he called the "three laws of prediction," reflecting an optimistic view of ingenuity:

1. When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
2. The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
3. Any sufficiently advanced technology is indistinguishable from magic.
I have no idea what a zettaflops system will look like, but it will be magic.
About the author
John Barr covers IT early adoption and innovation in High Performance Computing at 451 Research. He is also responsible for the company's research activities within the European Commission Framework Programme. John has over 30 years of experience in the IT industry, initially writing compilers and development tools for High Performance Computing platforms. The bulk of his career has been spent in a variety of technical roles at HPC systems vendors, delivering training, running benchmarks, and providing pre- and post-sales customer support. John's core technical skill is application performance analysis, optimization and parallelization.