IDC Announces Winners of HPC Innovation Excellence Awards

June 24, 2014

LEIPZIG, Germany, June 24 — International Data Corporation (IDC) today announced the seventh round of recipients of the HPC Innovation Excellence Award at the ISC’14 supercomputer industry conference in Leipzig, Germany. Prior winners were announced at the ISC’11, SC11, ISC’12, SC12, ISC’13, and SC13 supercomputing conferences.

The HPC Innovation Excellence Award recognizes noteworthy achievements by users of high performance computing (HPC) technologies. The program’s main goals are to showcase return on investment (ROI) and scientific success stories involving HPC; to help other users better understand the benefits of adopting HPC and justify HPC investments, especially for small and medium-size businesses (SMBs); to demonstrate the value of HPC to funding bodies and politicians; and to expand public support for increased HPC investments.

“IDC research has shown that HPC can accelerate innovation cycles greatly and in many cases can generate ROI. The award program aims to collect a large set of success stories across many research disciplines, industries, and application areas,” said Chirag Dekate, Research Manager, High Performance Computing at IDC. “The winners achieved clear success in applying HPC to greatly improve business ROI, scientific advancement, and/or engineering successes. Many of the achievements also directly benefit society.”

Winners of the first six rounds of awards, announced over the last three years, included 34 organizations from the U.S., three each from the People’s Republic of China and Italy, four from the UK, two from India, and one each from Australia, Canada, Sweden, South Korea, Switzerland, Germany, France, and Spain.

The new award winners and project leaders announced at ISC’14 are as follows (contact IDC for additional details about the projects):

  • University of Wisconsin-Madison (U.S.). University of Wisconsin researchers utilized HPC resources, advanced protein structure prediction algorithms, and deep sequence data mining to construct a highly plausible capsid model for Rhinovirus-C (~600,000 atoms). The simulation model helps researchers explain why existing pharmaceuticals don’t work on this virus. The modeling frameworks developed by the researchers provide angstrom-level predictions for new antivirals and a platform for vaccine development. Lead: Ann C. Palmenberg
  • Argonne National Laboratory, Caterpillar, Convergent Science (U.S.). Researchers from Argonne National Laboratory conducted one of the largest internal combustion engine simulations to date. Predictive internal combustion engine simulations require very fine spatial and temporal resolution along with high-fidelity, robust models for two-phase flow, spray, turbulence, combustion, and emissions. The research has allowed Caterpillar Inc. to shrink its development timescales, resulting in significant cost savings. Caterpillar engineers predict that these HPC developments will reduce the number of multi-cylinder test programs by at least a factor of two, which will translate into savings of $500,000-$750,000 per year. Lead: Sibendu Som
  • CINECA (Italy). Engineers from THESAN srl, an Italian SME active in the renewable energy sector, teamed up with the Italian supercomputing center CINECA to develop simulation-driven engineering of hydroelectric turbines. The research was conducted within the framework of the PRACE SHAPE (SME HPC Adoption Programme in Europe) initiative. The engineers and researchers built an HPC-based workflow to optimize the design of a new class of hydroelectric turbines. Using CFD, THESAN could generate cost savings by reducing or eliminating the production of physical prototypes, better understanding the flaws of earlier design setups, and, critically, shortening the time to market. Leads: Raffaele Ponzini, Roberto Vadori, Giovanni Erbacci, Claudio Arlandini
  • Pipistrel d.o.o. (Slovenia). Engineers and scientists from Pipistrel utilized HPC and technical computing resources to design and develop the Taurus G4 aircraft. The aircraft was conceived, designed, and built in a mere five months, relying heavily on CAD and rapid prototyping techniques, and especially on CFD and other computational aerodynamics tools to evaluate flight performance and handling before committing to building the prototype. The aircraft introduced a unique twin-fuselage configuration, presenting significant challenges in the design of the wings, the high-lift systems, and the overall configuration. HPC-based CFD was used as early as the conceptual design stage to optimize the shape of the engine nacelle and avoid premature flow separation. In later design stages, CFD was used to optimize the slotted high-lift flap geometry and, especially, to determine the lift and stability behavior of the complete aircraft configuration in ground effect. Lead: Prof. Dr. Gregor Veble
  • Culham Centre for Fusion Energy, EPCC at the University of Edinburgh, York Plasma Institute at the University of York, and Lund University. Researchers from CCFE, EPCC, and the Universities of York and Lund have made substantial recent optimizations to the well-known plasma turbulence code GS2. This work included a complete rewrite of the routines that calculate the response matrices required by the code’s implicit algorithm, which has significantly accelerated GS2’s initialization, typically by a factor of more than 10. Taken together, these optimizations have vastly reduced wall time, as illustrated by the factor-of-20 speedup achieved for a benchmark calculation running on 8,192 cores. The optimized code achieves scaling efficiencies close to 50% at 4,096 cores and 30% at 8,192 cores for a typical calculation, compared with efficiencies of 4% and 2%, respectively, before these optimizations; the short sketch following this list illustrates how such efficiencies are computed. Leads: David Dickinson, Adrian Jackson, Colin M Roach and Joachim Hein
  • Westinghouse Electric Company LLC, ORNL (U.S.). Researchers from Westinghouse Electric Company and the Consortium for Advanced Simulation of LWRs (CASL), a U.S. Department of Energy (DOE) Innovation Hub, performed core physics simulations of the AP1000 PWR startup core using CASL’s Virtual Environment for Reactor Applications (VERA). These calculations, performed on the Oak Ridge Leadership Computing Facility (OLCF) “Titan” Cray XK7 system, produced high-fidelity 3D power distributions representing the conditions expected during AP1000 startup. The results provide insights that improve understanding of core conditions, helping to ensure a safe startup of the AP1000 PWR first core. Lead: Fausto Franceschini (Westinghouse)
  • Rolls-Royce, Procter and Gamble, National Center for Supercomputing Applications, Cray Inc., Livermore Software Technology Corporation (U.S.). Researchers from NCSA, Rolls-Royce, Procter and Gamble, Cray Inc., and Livermore Software Technology Corporation scaled the commercial explicit finite element code LS-DYNA to 15,000 cores on Blue Waters. The research has the potential to transform several industries, including aerospace and automotive engine design and consumer product development. The researchers noted that the increased scalability can result in significant cost savings. Leads: Todd Simons, Seid Koric
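For readers curious how strong-scaling figures like those in the GS2 entry are derived, the minimal Python sketch below computes speedup and parallel efficiency relative to a reference run. The 512-core reference and all wall times are invented placeholders (assumptions for illustration, not measurements from the GS2 study), chosen only so the printed efficiencies land near the 50% and 30% figures quoted above.

    # Minimal sketch: strong-scaling speedup and parallel efficiency.
    # The 512-core reference run and all wall times are hypothetical
    # placeholders, not data from the GS2 work described above.

    def strong_scaling(ref_cores: int, ref_time: float,
                       cores: int, time: float) -> tuple[float, float]:
        """Return (speedup, efficiency) of a run on `cores` relative to a
        reference run on `ref_cores` for the same fixed-size problem."""
        speedup = ref_time / time        # how much faster than the reference
        ideal = cores / ref_cores        # perfect linear scaling would give this
        efficiency = speedup / ideal     # fraction of the ideal actually achieved
        return speedup, efficiency

    if __name__ == "__main__":
        runs = [(512, 1000.0), (4096, 250.0), (8192, 208.0)]  # (cores, seconds)
        ref_cores, ref_time = runs[0]
        for cores, time in runs[1:]:
            s, e = strong_scaling(ref_cores, ref_time, cores, time)
            print(f"{cores:5d} cores: speedup {s:4.1f}x, efficiency {e:5.1%}")

Run as-is, the sketch prints a 4.0x speedup (50.0% efficiency) at 4,096 cores and roughly a 4.8x speedup (30.0% efficiency) at 8,192 cores, mirroring the efficiency figures quoted in the award description.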

IDC welcomes award entries from anywhere in the world. Entries may be submitted at any time by completing the brief form available at https://www.hpcuserforum.com/innovationaward/. New winners will be announced multiple times each year. Submissions must contain a clear description of the dollar value or scientific value received in order to qualify. The HPC User Forum Steering Committee performs an initial ranking of the submissions, after which domain and vertical experts are called on, as needed, to evaluate the submissions.

HPC Innovation Excellence Award sponsors include Adaptive Computing, Altair, AMD, Ansys, Cray, Avetec/DICE, the Boeing Company, the Council on Competitiveness, Department of Defense, Department of Energy, Ford Motor Company, Hewlett Packard, HPCwire, insideHPC, Intel, Microsoft, National Science Foundation, NCSA, Platform Computing, Scientific Computing, and SGI.

The next round of HPC Innovation Excellence Award winners will be announced at SC’14 in November 2014.

About IDC

International Data Corporation (IDC) is the premier global provider of market intelligence, advisory services, and events for the information technology, telecommunications, and consumer technology markets. IDC helps IT professionals, business executives, and the investment community to make fact-based decisions on technology purchases and business strategy. More than 1,000 IDC analysts provide global, regional, and local expertise on technology and industry opportunities and trends in over 110 countries. In 2014, IDC celebrates its 50th anniversary of providing strategic insights to help clients achieve their key business objectives. IDC is a subsidiary of IDG, the world’s leading technology media, research, and events company. You can learn more about IDC by visiting www.idc.com.

Source: International Data Corporation
