IDC Announces Winners of HPC Innovation Excellence Awards

November 19, 2013

DENVER, Colo., Nov. 19 — International Data Corporation (IDC) today announced the sixth round of recipients of the HPC Innovation Excellence Award at the SC’13 supercomputer industry conference in Denver, Colorado. Prior winners were announced at the ISC’11, SC’11, ISC’12, SC’12, and ISC’13 supercomputing conferences.

The HPC Innovation Excellence Award recognizes noteworthy achievements by users of high performance computing (HPC) technologies. The program’s main goals are: to showcase return on investment (ROI) and scientific innovation success stories involving HPC; to help other users better understand the benefits of adopting HPC and justify HPC investments, especially for small and medium-size businesses (SMBs); to demonstrate the value of HPC to funding bodies and politicians; and to expand public support for increased HPC investments.

“IDC research has shown that HPC can impact innovation cycles greatly and can potentially generate ROI. The award program aims to collect a large set of success stories across many research disciplines, industries, and application areas,” said Chirag Dekate, Research Manager, High-Performance Systems at IDC. “The winners achieved clear success in applying HPC to greatly improve business ROI, scientific advancement, and/or engineering successes. Many of the achievements also directly benefit society.”

Winners of the first five rounds of awards, announced in 2011, 2012, and at ISC’13, included 29 organizations from the U.S., 3 each from Italy and the People’s Republic of China, 2 each from India and the UK, and 1 each from Australia, Canada, Spain, and Sweden.

The new award winners and project leaders announced at SC’13 are as follows (contact IDC for additional details about the projects):

  • GE Global Research (U.S.) Using a 40 million CPU hour Department of Energy award, GE Global Research has modeled the freezing behavior of water droplets on six different engineered surfaces under six operating conditions on Titan, the hybrid CPU/GPU system at Oak Ridge National Lab (ORNL). Through recent advances in the field, including a joint simulation enhancement effort with ORNL to fully leverage the hardware infrastructure, GE Global Research has been able to accelerate these simulations approximately 200-fold compared with just two years ago. Lead: Masako Yamada
  • The Procter & Gamble Company (U.S.) P&G researchers and collaborators at Temple University developed molecular- and mesoscale-level models to understand the complex molecular interactions of full-formula consumer products such as shampoos, conditioners, facial creams, and laundry detergents. The HPC-driven research shed light on how complete formulations perform, rather than inferring performance from isolated calculations, and led to a better understanding of interfacial phenomena, phase behavior, and the performance of several P&G products. Lead: Kelly L. Anderson
  • National Institute of Supercomputing and Networking, Korea Institute of Science and Technology Information (Korea) The EDISON (EDucation and research Integration through Simulation On the Net) Project, funded by the Ministry of Science, ICT and Future Planning, Korea, established an infrastructure on the Web where users across the country could easily access and utilize various engineering/science simulation tools for educational and research purposes. The EDISON project is accelerating research in five key areas: Computational Fluid Dynamics, Computational Chemistry, Nano Physics, Computational Structural Dynamics, and Multi-disciplinary Optimization. The Project utilizes a novel partnership model between the project and the respective domains to develop area-specific simulation tools that make HPC accessible to domain specialists. Lead: Kumwon Cho
  • GE Global Research (U.S.) GE Global Research’s work on Large Eddy Simulations (LES) leveraged petascale computing to break barriers in accurately characterizing the key flow physics of multi-scale turbulent mixing in boundary layer and shear flows. Findings from this research will significantly improve the prediction and design capabilities for next-generation aircraft engines and wind turbines, both by demonstrating the viability of LES as a characterization tool and by serving as a source of physics guidance. Lead: Umesh Paliath
  • Spectraseis Inc (U.S.) and CADMOS, University of Lausanne (Switzerland) Researchers doubled both acoustic and elastic solver throughput while also improving code size and maintainability by harnessing the massively parallel computing capabilities of Fermi and Kepler GPUs. With the efficiency gained by redesigning the code for GPUs, the time to solution was reduced from hours to seconds. The improved capability allowed Spectraseis to move from 2D to 3D and, in several cases, obtain more than 100x speed-up. Lead: Igor Podladtchikov and Yury Podladchikov
  • Intelligent Light (U.S.) Intelligent Light addressed the challenge of high volumes of CFD data using FieldView 14 data management and process automation tools. Intelligent Light contributed results from approximately 100 cases with more than 10,000 time steps each to deliver a complete response to the workshop objectives. A Cray XE6 was used to generate the CFD solutions and perform much of the post-processing. This project successfully demonstrated the value and practicality of using innovative workflow engineering with automation and data management for complex CFD studies. Lead: Dr. Earl P.N. Duque
  • Facebook (U.S.) Facebook manages a social graph composed of people, their friendships, subscriptions, and other connections. Facebook modified Apache Giraph to allow loading vertex data and edges from separate sources (GIRAPH-155). With appropriate garbage collection and performance tuning, Facebook was able to run an iteration of PageRank on an actual one-trillion-edge social graph formed by various user interactions in fewer than four minutes; a sketch of the vertex-centric pattern behind this result appears after this list. Facebook can now cluster a monthly active user data set of one billion input vectors with 100 features into 10,000 centroids with k-means in less than 10 minutes per iteration. Lead: Avery Ching / Apache Giraph
  • HydrOcean/Ecole Centrale Nantes (France) SPH-flow is an innovative fluid dynamics solver based on a meshless, compressible, and time-explicit approach. The SPH-flow solver has been used in several industrial projects, including: impact forces of aircraft and helicopter ditching; free surface simulations of ship wake and wave fields; multiphase emulsion simulations; extreme wave impacts on structures; simulation of hydroplaning of tires; water film around car bodies; and underwater explosions. The project is led by Dr. Erwan Jacqin, CEO of HydrOcean, a spin-off from the Ecole Centrale Nantes fluid dynamics lab, and Prof. David Le Touze, who heads the SPH-flow research team at Ecole Centrale Nantes.
  • Imperial College London and NAG (UK) HPC experts from NAG and Imperial College London have implemented scientifically valuable new functionality and substantial performance improvements in the Incompact3D application. After the improvements, the simulations can now scale efficiently to 8,000 cores, with a run time of around 3.75 days (wall-clock time), which is over 6x faster. Furthermore, meshes for new high-resolution turbulence mixing and flow control simulations, which use up to 4096 × 4096 × 4096 grid points, can now utilize as many as 16,384 cores. Lead: NAG HECToR CSE Team
  • Queen Mary University of London and NAG (UK) NAG and Queen Mary University of London made significant improvements to CABARET (Compact Accurate Boundary Adjusting high Resolution Technique) code so that it may be used to solve the compressible Navier-Stokes equations and, in the context of this project, for the investigation of aircraft noise. The newly developed code was validated and tested against the serial code and a parallel efficiency of 72% was observed when using 250 cores of the XT4 part of HECToR with the quad core architecture. Lead: NAG HECToR CSE Team
  • Southern California Earthquake Center (U.S.) SCEC has built a special simulation platform, CyberShake, which uses the time-reversal physics of seismic reciprocity to reduce the computational cost by 1000x. Additionally, the production time for a complete regional CyberShake model at seismic frequencies up to 0.5 Hz has been reduced by 10x, and four new hazard models have been run on NCSA Blue Waters and TACC Stampede. SCEC researchers have developed a highly parallel, highly efficient CUDA-optimized wave propagation code, called AWP-ODC-GPU, that achieved a sustained performance of 2.8 Petaflops on ORNL Titan. Lead: Southern California Earthquake Center Community Modeling Environment Collaboration
  • Princeton University/Princeton Plasma Physics Laboratory (U.S.) Using high-end supercomputing resources, advanced simulations of confinement physics for large-scale MFE plasmas have been carried out for the first time with very high phase-space resolution and long temporal duration to deliver important new scientific insights. This research was enabled by the new GTC-P code, developed to use multi-petascale capabilities on world-class systems such as the IBM Blue Gene/Q systems “Mira” at ALCF and “Sequoia” at LLNL. Leads: William Tang, Bei Wang, and Stephane Ethier
  • Oak Ridge Leadership Computing Facility, Oak Ridge National Laboratory (U.S.) Researchers at ORNL have used the Titan supercomputer to perform the first simulations of organic solar cell active layers at scales commensurate with actual devices. By modifying the LAMMPS molecular dynamics software to use GPU acceleration on Titan, the researchers were able to perform simulations to study how different polymer blends can be used to alter the device morphology. The new insights will aid in the rational design of cheap solar cells with higher efficiency. Results are published in the journal Physical Chemistry Chemical Physics. Lead: W. Michael Brown and Jack C. Wells
  • Ford Werke GmbH (Germany) Researchers at Ford Werke GmbH have developed and deployed a new CAE process, which enables the optimization of the airflow through the cooling package of a vehicle using complex 3D CFD analysis. The Ford team also demonstrated it could run these complex simulations fast enough to enable their use within the time constraints of a vehicle development project. The team’s work on Jaguar at Oak Ridge National Lab will help Ford maximize the effectiveness and fuel efficiency of engine bay designs throughout the company. Lead: Dr. Burkhard Hupertz and Alex Akkerman
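For readers curious about the programming model behind the Facebook result above, below is a minimal Java sketch of vertex-centric PageRank in the spirit of Apache Giraph's bundled example. It is illustrative only, not Facebook's production code: the class name, superstep cap, and 0.85 damping factor are assumptions, and the trillion-edge run additionally relied on the multi-source input loading from GIRAPH-155 plus garbage-collection and performance tuning that are not shown here.

    import org.apache.giraph.graph.BasicComputation;
    import org.apache.giraph.graph.Vertex;
    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.FloatWritable;
    import org.apache.hadoop.io.LongWritable;

    // Illustrative sketch: one Giraph superstep = one PageRank iteration.
    // Assumes the input format initializes each vertex value to 1/N.
    public class PageRankSketch extends BasicComputation<
        LongWritable, DoubleWritable, FloatWritable, DoubleWritable> {

      private static final int MAX_SUPERSTEPS = 30; // assumed iteration cap
      private static final double DAMPING = 0.85;   // assumed damping factor

      @Override
      public void compute(
          Vertex<LongWritable, DoubleWritable, FloatWritable> vertex,
          Iterable<DoubleWritable> messages) {
        if (getSuperstep() >= 1) {
          // Sum the rank contributions sent by in-neighbors last superstep.
          double sum = 0;
          for (DoubleWritable msg : messages) {
            sum += msg.get();
          }
          vertex.setValue(new DoubleWritable(
              (1 - DAMPING) / getTotalNumVertices() + DAMPING * sum));
        }
        if (getSuperstep() < MAX_SUPERSTEPS) {
          // Split this vertex's current rank evenly across its out-edges.
          long numEdges = vertex.getNumEdges();
          if (numEdges > 0) {
            sendMessageToAllEdges(vertex,
                new DoubleWritable(vertex.getValue().get() / numEdges));
          }
        } else {
          vertex.voteToHalt(); // the job ends once every vertex has halted
        }
      }
    }

Each superstep is a bulk-synchronous round: messages sent in superstep s are delivered in superstep s+1, so the same few lines of per-vertex logic can be partitioned across thousands of workers, which is what makes this model attractive at trillion-edge scale.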

IDC welcomes award entries from anywhere in the world. Entries may be submitted at any time by completing the brief form available at https://www.hpcuserforum.com/innovationaward/. New winners will be announced multiple times each year. Submissions must contain a clear description of the dollar value or scientific value received in order to qualify. The HPC User Forum Steering Committee performs an initial ranking of the submissions, after which domain and vertical experts are called on, as needed, to evaluate the submissions.

HPC Innovation Excellence Award sponsors include Adaptive Computing, Altair, AMD, Ansys, Cray, Avetec/DICE, the Boeing Company, the Council on Competitiveness, Department of Defense, Department of Energy, Ford Motor Company, Hewlett Packard, HPCwire, insideHPC, Intel, Microsoft, National Science Foundation, NCSA, Platform Computing, Scientific Computing, and SGI.

The next round of HPC Innovation Excellence Award winners will be announced at ISC’14 in June 2014.

About IDC

International Data Corporation (IDC) is the premier global provider of market intelligence, advisory services, and events for the information technology, telecommunications, and consumer technology markets. IDC helps IT professionals, business executives, and the investment community to make fact-based decisions on technology purchases and business strategy. More than 1,000 IDC analysts provide global, regional, and local expertise on technology and industry opportunities and trends in over 110 countries. For more than 49 years, IDC has provided strategic insights to help our clients achieve their key business objectives. IDC is a subsidiary of IDG, the world’s leading technology media, research, and events company. You can learn more about IDC by visiting www.idc.com.

Source: IDC
