February 27, 2013
What will future supercomputers bring to the world? Personally, we hope that they will be used to finally design George Jetson's flying car. But there are better experts out there making smarter predictions.
The future was, of course, a popular topic at SC12 in Salt Lake City last November. No flying cars, but increased energy efficiency, improved weather forecasting, a better understanding of the universe, and faster discovery of new drugs were all on the agenda.
IEEE has now put together a summary of supercomputing predictions and challenges made by a few of its members at SC12.
Rajeev Thakur, technical program chair of SC12 and deputy director of the Mathematics and Computer Science Division at Argonne National Laboratory, wins the prize for the most predictions. He foresees better batteries: materials science simulations will enable cheaper batteries with greater capacity. Somebody has to replace the Energizer Bunny – and back up datacenters.
Thakur also believes cosmological simulations will answer questions about dark matter and dark energy, the geometry of the universe, and why the universe's expansion rate is accelerating. (IBM's next supercomputer will be named Einstein.) Molecular simulations will create better drugs faster.
Energy was a popular theme. Bronis de Supinski, co-leader of the Advanced Simulation and Computing program's Application Development Environment and Performance Team at Lawrence Livermore National Laboratory, favors the ability to better predict electricity demand on the grid. That means less wasted energy – and, perhaps, the ability to keep your computer from crashing in a power outage.
A few predicted that nuclear fission will solve the world's energy problems (and, presumably, put the oil companies out of business). It's just a matter of convincing people who still remember Three Mile Island.
A few had to put a damper on the rosy future, noting that there are still hurdles ahead. While de Supinski envisions better power grids, he also warns that the need for cheaper power and less dissipation en route will continue to present problems. That exascale computer you're designing may have to wait a few more years before you plug it in. Thakur agrees with that one.
De Supinski also believes memory bandwidth and capacity will continue to fall behind computational power, to the point that applications are severely limited by these bottlenecks. Thinking deep thoughts isn't much good if you can't remember them.
And funding will, of course, be a problem, says Thakur. Perhaps, with luck, the Sequester will be over within a few years.
And, perhaps, at SC13 one of the supercomputers will itself be making the predictions.