November 04, 2010
Prototype systems expected to be complete by 2018
Nov. 4 -- The San Diego Supercomputer Center (SDSC) at the University of California, San Diego, will provide expertise to a multi-year technology investment program to develop the next generation of extreme scale supercomputers.
The project is part of the Ubiquitous High Performance Computing (UHPC) program, run by the Defense Advanced Research Projects Agency (DARPA), part of the U.S. Department of Defense. Intel Corporation leads one of the winning teams in the program, and is working closely with SDSC researchers on applications.
The first two phases of the project extend into 2014 and are expected to culminate in a full system design and simulation. Phases 3 and 4, which have not yet been awarded, are expected to result in a full prototype system sometime in 2018.
During the first phases of the award, SDSC's Performance Modeling and Characterization (PMaC) laboratory will assist the Intel-DARPA project by analyzing and mapping strategic applications to run efficiently on Intel hardware. Applications of interest include rapid processing of real-time sensor data, establishing complex connectivity relationships within graphs (think of determining "six degrees of Kevin Bacon" relationships on Facebook), and complex strategy planning.
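The graph-connectivity workload mentioned above amounts to finding the shortest chain of links between two nodes. A minimal sketch of that idea, using plain breadth-first search on a toy social network (the names and graph are illustrative, not from the project):

```python
from collections import deque

def degrees_of_separation(graph, source, target):
    """Breadth-first search for the length of the shortest
    connection path between two people in a social graph,
    given as an adjacency-list dict. Returns None if unconnected."""
    if source == target:
        return 0
    seen = {source}
    queue = deque([(source, 0)])
    while queue:
        person, dist = queue.popleft()
        for friend in graph.get(person, ()):
            if friend == target:
                return dist + 1
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, dist + 1))
    return None  # no connection found

# Toy network: Alice -> Bob -> Carol -> Kevin Bacon
network = {
    "Alice": ["Bob"],
    "Bob": ["Alice", "Carol"],
    "Carol": ["Bob", "Kevin Bacon"],
    "Kevin Bacon": ["Carol"],
}
print(degrees_of_separation(network, "Alice", "Kevin Bacon"))  # 3
```

At extreme scale the challenge is not the algorithm itself but the irregular, data-dependent memory traffic it generates, which is exactly the data-movement cost the project targets.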
Energy consumption at extreme scales is one of the formidable challenges to be taken on by the Intel team. Today's top supercomputers operate at the petascale level, which means the ability to perform one thousand trillion calculations per second. The next level is exascale, or achieving computing speeds of one million trillion calculations per second -- one thousand times faster than today's machines.
According to Intel, the project will focus on new circuit topologies, new chip and system architectures, and new programming techniques to reduce the amount of energy required per computation by two to three orders of magnitude. In other words, such extreme scale systems will need to consume 100 to 1,000 times less energy per computation than today's most efficient computing systems.
"We are working to build an integrated hardware/software stack that can manage data movement with extreme efficiency," said Allan Snavely, associate director of SDSC and head of the supercomputer center's PMaC lab. "The Intel team includes leading experts in low-power device design, optimizing compilers, expressive program languages, and high-performance applications, which is PMaC's special expertise."
According to Snavely, all these areas must work in a coordinated fashion to ensure that one bit of information is not moved further up or down the memory hierarchy than need be.
"Today's crude and simplistic memory cache and prefetch policies won't work at the exascale level because of the tremendous energy costs associated with that motion," he said. "Today it takes a nano joule (a billionth of a joule, a joule being the amount of energy needed to produce one watt of power for one second) to move a byte even a short distance. Multiply that byte into an exabyte (one quintillion bytes) and one would need a nuclear plant's worth of instantaneous power to move it based on today's technology."
Intel's other partners for the project include top computer science and engineering faculty at the University of Delaware and the University of Illinois at Urbana-Champaign, as well as top industrial researchers at Reservoir Labs and ET International.
DARPA's UHPC program directly addresses major priorities expressed by President Obama's "Strategy for American Innovation," according to a DARPA release issued earlier this month. These priorities include exascale supercomputing as a 21st-century "Grand Challenge," energy-efficient computing, and worker productivity. The resulting UHPC capabilities will provide at least 50 times greater energy, computing, and productivity efficiency, which will slash the time needed to design and develop complex computing applications.
Source: San Diego Supercomputer Center