February 21, 2013
BLOOMINGTON, Ind., Feb. 21 — Indiana University's Center for Research in Extreme Scale Technologies (CREST) is the recipient of a three-year, $1.1 million grant from the Department of Energy (DOE) to develop software that improves the speed and programmability of supercomputers. This funding is part of a $7.05 million grant for the XPRESS (eXascale PRogramming Environment and System Software) project, led by Sandia National Laboratories as part of the DOE Office of Science Advanced Scientific Computing Research X-Stack program.
IU created CREST in 2011 as part of the Pervasive Technology Institute to pioneer research at the frontiers of exascale computing. Two of supercomputing's foremost thinkers, Andrew Lumsdaine and Thomas Sterling, both professors in the School of Informatics and Computing at IU Bloomington, lead CREST as director and associate director, respectively. Sterling also serves as CREST chief scientist.
The grant will fund CREST researchers to create a class of software that enables supercomputers to run intelligently. "We're writing software that moves execution from static to dynamic, allowing supercomputers to use new information as it is being revealed," said Sterling. "By doing so, supercomputers will 'think' about how they use their resources, as well as where and when they schedule various concurrent tasks."
As an analogy, Sterling noted the difference between a cannon and a guided missile: the missile makes minute adjustments during flight in order to hit the target more accurately. "Essentially, we're building a guided computer," said Sterling. "Our goal is to completely redesign the system software in order to produce a revolutionary class of supercomputers. It is exciting that IU will be at the forefront of such research, setting future directions for exascale computing and programming."
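To make the static-versus-dynamic distinction concrete at a much smaller scale than XPRESS targets, the sketch below uses standard OpenMP loop scheduling (not the XPRESS software itself, which the article does not detail): with a static schedule the work assignment is fixed before the loop starts, while a dynamic schedule lets idle threads claim the next chunk as run-time information about task cost is revealed. The workload sizes and names here are illustrative assumptions.

```c
/*
 * Illustrative sketch only: contrasts static and dynamic scheduling of an
 * uneven workload across threads using OpenMP. The XPRESS runtime operates
 * at a vastly larger scale, but the underlying idea is similar: assign work
 * as resources become free rather than fixing the assignment in advance.
 * Compile with: cc -fopenmp sched_demo.c -o sched_demo
 */
#include <stdio.h>
#include <omp.h>

/* Simulated task whose cost varies with its index (an uneven workload). */
static double do_work(int i)
{
    double x = 0.0;
    for (int k = 0; k < (i % 7 + 1) * 100000; k++)
        x += k * 1e-9;
    return x;
}

int main(void)
{
    const int n = 1024;
    double total = 0.0;

    double t0 = omp_get_wtime();
    /* schedule(static): iterations are divided among threads up front. */
    #pragma omp parallel for schedule(static) reduction(+:total)
    for (int i = 0; i < n; i++)
        total += do_work(i);
    double t_static = omp_get_wtime() - t0;

    t0 = omp_get_wtime();
    /* schedule(dynamic): idle threads grab the next chunk at run time,
     * adapting to information (actual task cost) revealed during execution. */
    #pragma omp parallel for schedule(dynamic, 8) reduction(+:total)
    for (int i = 0; i < n; i++)
        total += do_work(i);
    double t_dynamic = omp_get_wtime() - t0;

    printf("static:  %.3f s\ndynamic: %.3f s (checksum %.2f)\n",
           t_static, t_dynamic, total);
    return 0;
}
```

On an unevenly loaded problem like this one, the dynamic schedule typically finishes sooner because no thread sits idle while another works through an expensive chunk; that load-balancing intuition is the small-scale analogue of the "guided computer" Sterling describes.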
How fast is this next generation of supercomputers? Consider this: Today's fastest supercomputers perform about 10 quadrillion (one million billion) operations per second. By 2020, experts predict that exascale computers will perform one quintillion (one billion billion) calculations per second. However, it is not just speed that interests IU researchers—they are ultimately seeking to change how supercomputing works.
"This grant allows us to help scientists and engineers run their software across millions of processors," said Lumsdaine. "It's exciting to be able to advance next-generation supercomputing while supporting research into solutions for civilization's biggest issues."
Led by Sandia in partnership with IU, the XPRESS project involves eight academic and government institutions. Sterling is the project's chief scientist, while Lumsdaine is principal investigator on IU's portion of the grant.
About the Center for Research in Extreme Scale Technologies
Indiana University's Center for Research in Extreme Scale Technologies (CREST) is the newest research center affiliated with the Pervasive Technology Institute. CREST's mission is to transform dynamic data-driven computing through the development and application of revolutionary high performance computing platforms.
About the Pervasive Technology Institute
In 2008, IU established the Pervasive Technology Institute (PTI) through a $15 million grant from the Lilly Endowment. PTI is dedicated to the development and delivery of innovative information technology to advance research, education, industry and society.
About extreme scale and exascale computing
Today's most powerful computers consist of over a million processor cores and deliver performance on the order of 10 petaFLOPS (equivalent to about a million laptop computers). Exascale computers will offer nearly 100 times that performance for real-world applications like climate modeling, microbiology, nuclear reactor design, combustion and mechanical deformation. Extreme scale computing not only includes exascale computing, but also delivers performance gains for problems that today cannot scale anywhere near the maximum system size available.
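As a quick sanity check on the figures above, the ratio between one exaFLOPS (one quintillion operations per second) and today's roughly 10 petaFLOPS (10 quadrillion operations per second) is a factor of 100, matching the "nearly 100 times" claim. A trivial C snippet working through the unit arithmetic (the dates and system projections are the article's, not measurements):

```c
/* Back-of-the-envelope check of the performance figures quoted above. */
#include <stdio.h>

int main(void)
{
    double petaflops_today = 10.0e15; /* ~10 petaFLOPS = 10 quadrillion ops/s */
    double exaflops        = 1.0e18;  /* 1 exaFLOPS = 1 quintillion ops/s     */

    printf("Speedup from ~10 PFLOPS to 1 EFLOPS: %.0fx\n",
           exaflops / petaflops_today); /* prints 100x */
    return 0;
}
```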
Source: Indiana University