GCS Assigns 753.6 Million Computing Core Hours to National Science Projects

November 13, 2013

BERLIN, Germany, Nov. 13 — Demand from national scientists and researchers for computing time on the high performance computing systems of the Gauss Centre for Supercomputing (GCS) continues unabated. The 10th GCS Call for Large-Scale Projects, which was open from July 30 to August 30, 2013, resulted in a record amount of computing time granted to ambitious German computational science and engineering projects: the total of 753.58 million computing core hours assigned represents the largest grant of computing time ever allocated by the GCS Steering Committee. The projects awarded access to the vast GCS supercomputing resources come from a wide array of scientific fields, including Astrophysics, Chemistry, High Energy Physics, and Scientific Engineering.

Of the 19 applications submitted, a total of 13 national computational science projects met the strict GCS large-scale project qualification criteria and were awarded the highly coveted computing time on the GCS high performance computing (HPC) systems. The five largest individual allotments of computing core hours were granted to the following outstanding projects:

Astrophysics:

• Magneticum – Dr. Klaus Dolag, Ludwig-Maximilians-Universität München; 45M core hours on SuperMUC of Leibniz Supercomputing Centre Garching (LRZ)

Chemistry:

• Mechanochemistry of Covalent Bond Breaking from First Principles Simulations – Prof. Dr. Dominik Marx, Ruhr-Universität Bochum; 64.9M core hours on JUQUEEN of Jülich Supercomputing Centre (JSC)

High Energy Physics:

• Lattice QCD with Wilson Quarks at Zero and Non-Zero Temperature – Prof. Dr. Hartmut Wittig, Johannes Gutenberg-Universität Mainz; 70M core hours on JUQUEEN of Jülich Supercomputing Centre (JSC)

• 2+1+1 Lattice QCD Calculations with Hex Smeared Clover Fermions – Prof. Dr. Zoltan Fodor, Bergische Universität Wuppertal; 65M core hours on JUQUEEN of Jülich Supercomputing Centre (JSC)

Scientific Engineering:

• LAMTUR: Investigation of Laminar-Turbulent Transition and Flow Control in Boundary Layers – Prof. Dr.-Ing. Ulrich Rist, IAG, Universität Stuttgart; 125M core hours on Hermit of High Performance Computing Center Stuttgart (HLRS)

The 13 approved large-scale projects are distributed across the three GCS HPC systems Hermit of HLRS, JUQUEEN of JSC, and SuperMUC of LRZ. All three GCS systems deliver computing performance in the petaflops range (1 petaflops = one quadrillion floating point operations per second, i.e., a 1 followed by 15 zeros) and feature complementary system designs and architectures to respond optimally to the needs of researchers, developers, and engineers. For the large-scale projects of the 10th GCS call, access to computing resources and support is granted for a period of 12 months.

“We are very happy to see a steady rise in the demand for computing time on our HPC systems,” comments Prof. Dr.-Ing. Siegfried Wagner, Chairman of the GCS Steering Committee. “GCS offers world-class HPC resources to aid scientific computing, and this is reflected in the quality of the projects our system infrastructure is being used for. Only a couple of years ago, the projects now supported would have been impossible to accommodate, as they exceeded the GCS resources then available in every respect: the infrastructure, the software, and the HPC expertise. I am proud to say that GCS has meanwhile achieved the favourable position of being able to serve projects of this magnitude,” states Prof. Wagner, who points out that, as with previous calls, GCS unfortunately could not entirely fulfil the research community’s ever-increasing demand for computing power. With the 10th GCS call, almost 1.5 billion computing core hours had been requested, yet only about half of that amount, 753.6 million core hours, could be granted, primarily for lack of computing resources.

Computing time allocations for GCS Large-Scale Projects are made on the basis of scientific merit and technical feasibility, as assessed by independent reviewers in a peer-review process led by the GCS Steering Committee. The complete list of approved GCS Large-Scale Projects (10th Call) can be found at http://www.gauss-centre.eu/gauss-centre/EN/Projects/LargeScaleProjects/10th-call.html

About GCS Large Scale Projects

Per the mission of the Gauss Centre for Supercomputing, all scientists and researchers in Germany have access to the petascale HPC systems of Germany’s leading supercomputing institution. Projects are classified as “large-scale” if they require more than 35 million core-hours in one year on a GCS member centre’s high-end system. Computing time on the GCS systems is allocated by the GCS Steering Committee to scientifically leading, ground-breaking projects that deal with complex, demanding, and innovative simulations which would not be possible without the GCS petascale infrastructure. The projects are evaluated via a strict peer-review process on the basis of their scientific and technical excellence.
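For a rough sense of scale (this arithmetic is illustrative only and not part of the GCS definition), the core-hour figures can be converted into the number of cores that would have to run continuously for a year to consume them. The short Python sketch below performs that back-of-the-envelope conversion for the 35-million-core-hour threshold and for the 753.6 million core hours granted in the 10th call.

# Illustrative only: rough arithmetic behind the 35-million-core-hour
# "large-scale" threshold. The hours-per-year figure and the conversion
# are generic assumptions for illustration, not GCS policy.

HOURS_PER_YEAR = 365 * 24  # roughly 8,760 wall-clock hours in a year


def sustained_cores(core_hours: float, hours: float = HOURS_PER_YEAR) -> float:
    """Cores that must run continuously for `hours` to consume `core_hours`."""
    return core_hours / hours


if __name__ == "__main__":
    threshold = 35e6            # large-scale threshold, core-hours per year
    tenth_call_total = 753.6e6  # total core hours granted in the 10th call

    print(f"35M core-hours/year  ~ {sustained_cores(threshold):,.0f} cores running year-round")
    print(f"753.6M core-hours    ~ {sustained_cores(tenth_call_total):,.0f} cores running year-round")

Run as a plain script, this prints roughly 4,000 cores for the threshold and about 86,000 cores for the full 10th-call grant.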

The application procedure and decision criteria for the GCS Calls for Large-Scale Projects are described in detail at http://www.gauss-centre.eu/gauss-centre/EN/HPCservices/HowToApply/LargeScaleProjects/largeScaleProjects_node.html

About GCS

The Gauss Centre for Supercomputing (GCS) combines the three national supercomputing centres HLRS (High Performance Computing Center Stuttgart), JSC (Jülich Supercomputing Centre), and LRZ (Leibniz Supercomputing Centre, Garching near Munich) into Germany’s Tier-0 supercomputing institution. Together, the three centres provide the largest and most powerful supercomputing infrastructure in all of Europe, serving a wide range of industrial and research activities in various disciplines. They also provide top-class training and education for the national as well as the European High Performance Computing (HPC) community. GCS is the German member of PRACE (Partnership for Advanced Computing in Europe), an international non-profit association consisting of 25 member countries, whose representative organizations create a pan-European supercomputing infrastructure, providing access to computing and data management resources and services for large-scale scientific and engineering applications at the highest performance level.

—–

Source: Gauss Centre for Supercomputing
