HPC Matters is a joint blog in which contributors from the Tabor Communications team share their observations and insights on HPC matters.
December 14, 2010
Exascale. Say it: "exascale" -- it even sounds fast. Maybe it's the "x," with the multiplicative qualities it denotes. Appropriate, since exascale computers will be 1,000 times faster than today's crop of petascale machines. Never mind that we're barely into petaflop territory; we're always on to the next big thing. And for those in the supercomputing/HPC space, there's one word that conjures up future machines with almost unimaginable capacity, and that's exascale.
But as with all good things, there's a catch, right? Software? Well, that's one: getting software rewritten to take advantage of those manycore beasts. But with enough time and effort, software is doable. So is hardware -- string enough manycore processors together, and voila. An even more pressing concern, however, is energy. Time is money, goes the old saying, but energy is also money. That's especially true since the world still relies on fossil fuels that won't be around forever; at current rates of demand, oil and natural gas won't last another century. As these fuels become more limited, and therefore precious, prices will only increase.
The Institution of Engineering and Technology elucidates the challenge of getting to exascale in a recent article, explaining that it's quite possible to build an exascale supercomputer right now, but you'd need a dozen nuclear stations to power it. That is why there will need to be big changes to the underlying hardware and software of the next generation of supercomputers, or they just won't be economically viable.
There's a certain irony in the fact that the same machines that will be used to help solve the world's energy and environmental problems themselves contribute to the problem.
Martin Curley, senior principal engineer and director of Intel Labs Europe, illustrates the extreme scale of these machines: "An exascale computer has the equivalent power of 50 million laptops. Stacked on top of each other, they would be 1,000 miles high and weigh more than 100,000 tonnes."
Wilfried Verachtert, high-performance computing project manager at Belgian research institute IMEC, says that an exascale computer made from existing technology would require 14 nuclear reactors. "There are a few very hard problems we have to face in building an exascale computer. Energy is number one. Right now we need 7,000MW for exascale performance. We want to get that down to 50MW, and that is still higher than we want."
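To see why "energy is also money," a back-of-the-envelope calculation helps. The 7,000 MW and 50 MW figures below are the ones Verachtert quotes; the $0.07/kWh electricity rate is an illustrative assumption, not a figure from the article:

```python
# Rough annual electricity cost for an exascale machine's power draw.
# The 7,000 MW and 50 MW figures are quoted in the article; the
# $0.07/kWh industrial rate is an illustrative assumption.

HOURS_PER_YEAR = 24 * 365            # 8,760 hours
RATE_USD_PER_KWH = 0.07              # assumed rate, not from the article

def annual_cost_usd(power_mw):
    """Annual electricity cost (USD) for a constant draw of power_mw megawatts."""
    kwh_per_year = power_mw * 1_000 * HOURS_PER_YEAR   # MW -> kW, times hours
    return kwh_per_year * RATE_USD_PER_KWH

print(f"7,000 MW: ${annual_cost_usd(7_000):,.0f}/year")   # roughly $4.3 billion
print(f"   50 MW: ${annual_cost_usd(50):,.0f}/year")      # roughly $31 million
```

Even at the 50 MW target, the power bill alone runs to tens of millions of dollars a year, which is why energy sits at the top of the exascale problem list.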
Different companies are looking at different ways to reduce that power demand. Shrinking process geometries will allow more processors on each chip: by 2018, a 10nm chip fabrication process should be able to fit about 20 times more processors than today's chips can. Intel is working on these smaller designs.
Bill Dally, professor at Stanford University and chief scientist at graphics chipmaker NVIDIA, says that 11nm process technology will enable 5,000 cores on a chip.
SGI is looking at using low-power processors in supercomputers; specifically, it is experimenting with the Atom processors that Intel developed for handheld computers.
SGI is also working with field-programmable gate arrays (FPGAs), chips that can be reconfigured after manufacturing. Steve Teig, president and CTO of FPGA specialist Tabula, explains that FPGAs allow developers to change the way data moves around a computer: instead of moving the data to the processor, you can reconfigure the chip and compute in place. Despite these advantages, FPGAs are still quite power-hungry.
But with all those cores, exploiting that parallelism is yet another challenge. And then there are reliability concerns: the bigger the machine, the more parts will fail.
Despite these challenges, there's little doubt among experts that we will get to exascale, and it's not some far-off goal -- it will happen in just a few years. The rate at which supercomputing advances has been remarkably predictable, with each decade ushering in a thousand-fold increase in power. IMEC's Verachtert sums up: "In 1997, we saw the first terascale machines. A few years ago, petascale appeared. We will hit exascale in around 2018."
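Verachtert's timeline is just that thousand-fold-per-decade trend extrapolated forward. A quick sketch, anchored only on the two milestone years he quotes (the smooth exponential fit is illustrative, not a claim about any particular machine):

```python
# Project peak supercomputer performance from the thousand-fold-per-decade
# trend, anchored on the milestones quoted in the article: terascale in
# 1997 and exascale around 2018.
import math

TERA_YEAR = 1997                 # first terascale machines (article)
EXA_YEAR = 2018                  # projected exascale (article)

# Tera (1e12 FLOPS) to exa (1e18 FLOPS) is a factor of 10^6 over 21 years:
GROWTH_PER_YEAR = 10 ** (6 / (EXA_YEAR - TERA_YEAR))

def projected_flops(year):
    """Peak performance (FLOPS) the trend predicts for a given year."""
    return 1e12 * GROWTH_PER_YEAR ** (year - TERA_YEAR)

# The petascale (1e15 FLOPS) milestone should fall roughly midway:
peta_year = TERA_YEAR + math.log(1e15 / 1e12) / math.log(GROWTH_PER_YEAR)
print(f"trend predicts petascale around {peta_year:.1f}")
```

The trend puts petascale around 2007-2008, which matches when the first petaflop systems actually appeared, so the 2018 exascale projection is at least consistent with history.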
Posted by Tiffany Trader - December 14, 2010 @ 3:30 PM, Pacific Standard Time
Tiffany Trader is the editor of HPC in the Cloud. With a background in HPC publishing, she brings a wealth of knowledge and experience to bear on a range of topics relevant to the technical cloud computing space.