October 08, 2010
Staying at the forefront of computational science requires continuously increasing computing power, and the University of Stuttgart's High Performance Computing Center (HLRS) has taken an innovative approach to ensuring it stays in the lead.
First, HLRS forged partnerships with academic, government and commercial research organizations. Then it took advantage of the latest advances in high-performance computing in the form of the powerful and affordable Cray XT5m supercomputer. The combination of shared supercomputing and brain power has enabled researchers to make scientific advances more quickly and at lower cost than ever before.
HLRS is just one part of a triumvirate. The other two partnerships are the High Performance Computing Center for Academia and Industry (HWW) and the Automotive Simulation Center Stuttgart (ASCS). HWW is a partnership of academic, government and business groups that use supercomputing cycles for scientific visualization, computational fluid dynamics, physics and other disciplines. ASCS focuses on common technical and scientific problems in the automotive industry and includes automakers, suppliers, engineering and scientific software developers, and hardware vendors.
The three organizations and their member institutions share varying percentages of a Cray XT5m supercomputer housed at the University of Stuttgart. The system starts at around $500,000 but incorporates the hardware and software advancements of the Cray XT5 supercomputer, the basis of the petascale system currently in use at the U.S. Department of Energy's Oak Ridge National Laboratory. Key technical capabilities that help lower the total cost of ownership include best-of-class standard x86 AMD Opteron processors and Cray's SeaStar interconnect technology, both of which are fully upgradeable.
And the HLRS partners have been maximizing their supercomputing capability by taking a highly collaborative approach.
For example, ASCS partners are collaborating on multidisciplinary optimization. That means they’re correlating the multitude of design and engineering parameters that go into making a car as efficient as possible. Engineers at the industrial partners provide goals and requirements for the project; HLRS and other research partners then create mathematical modeling and algorithms, conduct experiments and validate the models; ISVs implement the code into their software and pass it back to HLRS for validation; and finally, the code goes to the industrial partners for use.
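The article does not describe ASCS's actual models or codes, but the idea of multidisciplinary optimization, trading off competing engineering objectives over shared design parameters, can be sketched in miniature. The following Python example uses entirely hypothetical drag and mass surrogates and a weighted-sum scalarization; real automotive MDO runs far richer simulations on HPC systems like the XT5m.

```python
# Illustrative sketch only: hypothetical surrogate models, not ASCS's methods.
# Two competing objectives (aerodynamic drag vs. structural mass) are combined
# into a single cost via weighted-sum scalarization, then minimized over a
# coarse grid of design parameters.

def drag(length, width):
    # Hypothetical drag surrogate: wider, shorter bodies produce more drag.
    return 0.3 * width / length

def mass(length, width):
    # Hypothetical mass surrogate in kg: larger footprint weighs more.
    return 800 + 120 * length * width

def combined_cost(length, width, w_drag=0.7, w_mass=0.3):
    # Weighted sum scalarizes the objectives; mass is rescaled (tonnes)
    # so the two terms are of comparable magnitude.
    return w_drag * drag(length, width) + w_mass * mass(length, width) / 1000.0

def grid_search(lengths, widths):
    # Exhaustive search over the design grid; production MDO would instead
    # use gradient-based or surrogate-assisted optimizers on many cores.
    return min((combined_cost(L, W), L, W) for L in lengths for W in widths)

if __name__ == "__main__":
    lengths = [3.5 + 0.1 * i for i in range(11)]   # 3.5 m .. 4.5 m
    widths = [1.6 + 0.05 * i for i in range(7)]    # 1.6 m .. 1.9 m
    cost, best_length, best_width = grid_search(lengths, widths)
    print(f"best design: length={best_length:.2f} m, "
          f"width={best_width:.2f} m, cost={cost:.4f}")
```

With these particular weights, the mass term dominates, so the search settles on the smallest body in the grid; changing the weights shifts the trade-off, which is exactly the kind of parameter correlation the partners explore at much larger scale.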
Previously, it would have been too expensive for any one of the participants to work on their part of the cycle, but by working in concert everyone benefits. And a key component of the cost reduction is the Cray XT5m supercomputer.
“Because the system is less expensive than other supercomputing systems, each of the simulations costs less to run,” says Michael Resch, director of HLRS. “Simulation is part of the design phase, and the earlier in the design process you can conduct complex simulations, the easier it is to avoid errors in the manufacturing phase. For the automotive industry here in Stuttgart, that becomes a competitive advantage.”
Ultimately, Resch says, this iterative relationship helps identify potential advances in numerical and computational methods and gets those advances incorporated into ISVs’ code.
For a closer look at HLRS and its Cray XT5m system, download the AMD case study here.