HPC ROI: Invest a Dollar to Make $500-plus, Reports IDC

By John Russell

November 18, 2015

Perhaps the most eye-popping numbers in IDC’s HPC market report, presented yesterday at its annual SC15 breakfast, were the ROI figures IDC has been developing as part of a DOE grant. The latest results indicate even higher ROI than the pilot study showed: on average, $514.70 in revenue is returned for every dollar invested in HPC. But first, let’s dig into the overall market numbers.

After slumping in 2014, the worldwide HPC server market is poised to return to growth, IDC reported. A stronger-than-expected first half of 2015, said Earl Joseph, IDC’s Program Vice President for High-Performance Computing (HPC) and Executive Director of the HPC User Forum, forced an upward revision of the earlier forecast. IDC now projects the market to finish at $11.4B, up nearly 12 percent from 2014’s $10.2B.

Except for supercomputers, which are in a cyclically ‘bumpy’ market, there is broad strength throughout the HPC market, according to IDC. Storage is again the fastest-growing segment. The total 2015 HPC market (see figure below) is forecast to be $22.1B. Among prominent trends cited by IDC: top-10 system purchases have slowed for roughly two years; the IBM/Lenovo deal also delayed many purchases; and the first half of 2015 is up over 12 percent.

Key points from the IDC report:

  • Growing recognition of HPC’s strategic value is helping to drive high-end sales. Low-end buyers are back in growth mode.
  • HPC vendor market share positions will shift greatly in 2015.
  • Recognition of HPC’s strategic/economic value will drive the exascale race, with 100PF systems in 2H 2015 and more in 2016. Exascale systems in the 20-30MW range will wait until 2022-2024.
  • The HPDA market will continue to expand opportunities for vendors.
  • Non-x86 processors and non-CPU accelerators could alter the landscape – Power, ARM, and others; coprocessors, GPUs, FPGAs.
  • China looms large(r): Lenovo, a growing domestic market, and export intentions. Other Chinese vendors are planning to extend into Europe.
  • Growing influence of the data center in the IT food chain will impact HPC technology options, perhaps providing new approaches.
  • HPC in the cloud is gaining traction; the big questions are how much and how soon.

Pain points, of course, remain. Software is the number one roadblock: better management software is needed, parallel software is lacking for most users, and many applications will need a major redesign to run in HPC environments. Clusters also remain hard to use and manage. Power, cooling, and floor space are major issues. There is still a lack of support for heterogeneous environments and accelerators. Storage and data management are becoming new bottlenecks.

[Figure: IDC SC15 HPC market figures]

The ongoing collision of big data with HPC continues to force changes in the way IDC defines and monitors this emerging market. A while back IDC coined the term High Performance Data Analysis (HPDA). Joseph noted the convergence is “creating new solutions and adding many new users/buyers to the HPC space.” The finance sector, for example, grew faster than what IDC had been reporting over the last two years, by roughly 50 percent.

Identifying the most appropriate buckets within the HPDA category has been an ongoing exercise. Currently, IDC singles out four verticals in HPDA:

  • Fraud and anomaly detection – This “horizontal” workload segment centers on identifying harmful or potentially harmful patterns and causes using graph analysis, semantic analysis, or other high-performance analytics techniques (a toy graph-analysis sketch follows this list).
  • Marketing – This segment covers the use of HPDA to promote products or services, typically using complex algorithms to discern potential customers’ demographics, buying preferences and habits.
  • Business intelligence – This workload segment uses HPDA to identify opportunities to advance the market position and competitiveness of businesses by better understanding themselves, their competitors, and the evolving dynamics of the markets in which they participate.
  • Other commercial HPDA – This catchall segment includes all commercial HPDA workloads other than the three just described. Over time, IDC expects some of these workloads to become significant enough to split out, e.g., the use of HPDA to manage large IT infrastructures and Internet-of-Things (IoT) infrastructures.
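As a toy illustration of the graph-analysis approach mentioned under fraud and anomaly detection above, the minimal sketch below flags accounts whose transaction degree deviates sharply from the norm. It is purely illustrative, not IDC’s methodology or any vendor’s product, and it assumes the networkx library is available.

```python
# Toy anomaly detection on a transaction graph; purely illustrative,
# not representative of any production fraud-detection system.
import networkx as nx

def flag_anomalous_accounts(edges, z_threshold=3.0):
    """Return accounts whose transaction degree is far above the mean."""
    g = nx.Graph()
    g.add_edges_from(edges)
    degrees = dict(g.degree())
    mean = sum(degrees.values()) / len(degrees)
    var = sum((d - mean) ** 2 for d in degrees.values()) / len(degrees)
    std = var ** 0.5 or 1.0  # avoid division by zero in the degenerate case
    return [n for n, d in degrees.items() if (d - mean) / std > z_threshold]

# Example: account "a9" touches far more counterparties than its peers.
edges = [("a1", "a2"), ("a2", "a3"), ("a3", "a4")] + [("a9", f"b{i}") for i in range(30)]
print(flag_anomalous_accounts(edges))  # ['a9']
```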

The next new HPDA segment, said Joseph, will be precision medicine. One example is what’s called outcomes-based medical diagnosis and treatment planning. In this paradigm, a patient’s history and symptomology are in a database. While a patient is still in the office, physicians could sift through millions of archived patient records for relevant outcomes. The care provider considers the efficacy of various treatments for “similar” patients but is not bound by the findings. In effect, this is potentially a powerful decision-support tool.
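A minimal sketch of the kind of decision-support lookup described above, assuming a toy in-memory record store and a made-up field-matching similarity score (no real clinical system or dataset is implied):

```python
# Toy outcomes-based lookup: find archived patients "similar" to the one
# in the office and list which treatments worked for them. Records,
# fields, and the similarity measure are all hypothetical placeholders.

def similarity(a, b):
    """Count of matching fields (age band, diagnosis, comorbidity)."""
    return sum(a[k] == b[k] for k in ("age_band", "diagnosis", "comorbidity"))

def relevant_outcomes(patient, archive, min_score=2):
    """Return (treatment, outcome) pairs from sufficiently similar records."""
    return [(r["treatment"], r["outcome"])
            for r in archive if similarity(patient, r) >= min_score]

archive = [
    {"age_band": "60-69", "diagnosis": "T2D", "comorbidity": "hypertension",
     "treatment": "drug_A", "outcome": "improved"},
    {"age_band": "60-69", "diagnosis": "T2D", "comorbidity": "none",
     "treatment": "drug_B", "outcome": "no change"},
    {"age_band": "30-39", "diagnosis": "asthma", "comorbidity": "none",
     "treatment": "drug_C", "outcome": "improved"},
]
patient = {"age_band": "60-69", "diagnosis": "T2D", "comorbidity": "hypertension"}
print(relevant_outcomes(patient, archive))
# [('drug_A', 'improved'), ('drug_B', 'no change')]
```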

Public and private payer organizations have long promoted the development of similar evidence-based medicine approaches in which whole populations of patients’ records could be examined and evaluated to determine which drugs and therapies are worthwhile and should be approved. In theory, the result in both instances would be better outcomes and reduction of costly outlier practices.

There’s too much material to comprehensively review IDC’s full report here. For example, cloud-based HPC computing is on the rise, both in sheer volume – from 13.8 percent of sites in 2011, to 23.5 percent in 2013, to 34.1 percent in 2015 – and in the number of workloads being run. The ROI data, though, is worth a closer look. Three approaches were used: ROI based on revenues generated (similar to GDP) divided by HPC investment; ROI based on profits generated (or costs saved) divided by HPC investment; and ROI based on jobs created.
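As a rough illustration of the arithmetic behind those three measures, the minimal sketch below computes each ratio from made-up placeholder figures; the numbers are not from the IDC study.

```python
# Illustrative sketch of IDC's three ROI measures. All input numbers
# below are hypothetical placeholders, not figures from the IDC study.

def revenue_roi(revenue_generated, hpc_investment):
    """Revenue (GDP-like) returned per dollar of HPC investment."""
    return revenue_generated / hpc_investment

def profit_roi(profit_or_cost_savings, hpc_investment):
    """Profit generated (or cost saved) per dollar of HPC investment."""
    return profit_or_cost_savings / hpc_investment

def jobs_roi(jobs_created, hpc_investment_millions):
    """Jobs created per million dollars of HPC investment."""
    return jobs_created / hpc_investment_millions

if __name__ == "__main__":
    # Hypothetical project: $2M HPC spend, $40M revenue, $6M profit, 25 jobs.
    print(revenue_roi(40_000_000, 2_000_000))  # 20.0 dollars per dollar invested
    print(profit_roi(6_000_000, 2_000_000))    # 3.0 dollars per dollar invested
    print(jobs_roi(25, 2.0))                   # 12.5 jobs per $1M invested
```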

At the breakfast, IDC also announced the ninth round of recipients of the HPC Innovation Excellence Award at the SC15 supercomputing conference in Austin, Texas. This year’s winners come from around the globe: US-based Argonne National Laboratory, the Korea Institute of Science and Technology from the Republic of Korea, and a recent start-up, Sardinia Systems from Estonia.

The HPC Innovation Excellence Award recognizes noteworthy achievements by users of high performance computing (HPC) technologies. The program’s main goals are to showcase return on investment (ROI) and scientific success stories involving HPC; to help other users better understand the benefits of adopting HPC and justify HPC investments, especially for small and medium-size businesses (SMBs); to demonstrate the value of HPC to funding bodies and politicians; and to expand public support for increased HPC investments.

“IDC research has shown that HPC can accelerate innovation cycles greatly and in many cases can generate ROI. The award program aims to collect a large set of success stories across many research disciplines, industries, and application areas,” said Bob Sorensen, Research Vice President, High Performance Computing at IDC. “The winners achieved clear success in applying HPC to greatly improve business ROI, scientific advancement, and/or engineering successes. Many of the achievements also directly benefit society.”

The new award winners are:

  • Argonne National Laboratory (US) developed ACCOLADES, a scalable workflow management tool that enables automotive design engineers to exploit task parallelism using large-scale computing (e.g., GPGPUs, multicore architectures, or the cloud). By effectively harnessing such large-scale computing capabilities, a developer can concurrently simulate the drive cycles of thousands of vehicles in the wall-clock time it normally takes to complete a single empirical dyno test (a minimal task-parallel sketch follows this list). According to experts from a leading automotive manufacturer, ACCOLADES in conjunction with dyno tests can greatly accelerate the test procedure, yielding an overall saving of $500K-$1M during the R&D phase of an engine design/development cycle. Leads: Shashi Aithal and Stefan Wild
  • Korea Institute of Science and Technology (ROK) runs a modeling and simulation program that offers Korean SMEs the opportunity to develop high-quality products using supercomputers. Through open calls, the project selects about 40 engineering projects from SMEs every year and provides technical assistance, access to supercomputers, and appropriate modeling and simulation software such as CFD and FEA tools. From 2004 to date, the project has assisted about 420 SMEs. It recently assisted the development of a slow juicer made by NUC Corp., improving the juice extraction rate from 75 percent to 82.5 percent through numerical shape optimization of a screw using the “Tachyon II” supercomputer and fluid/structural analysis. As a result, sales increased dramatically from about $1.9 million in 2010 to $96 million in 2014, and the company has hired 150 new employees to staff additional manufacturing lines. Lead: Jaesung Kim
  • Sardinia Systems (Tallinn, Estonia) developed technology that automates HPC operations in large-scale cloud data centers, such as collecting utilization metrics, driving scalable aggregation and consolidation of data, and matching resource demand to resource availability. The product, FishDirector, combines high-performance parallel data aggregation and consolidation with high-performance solvers that continuously compute an optimal layout of virtual machines (VMs) across an entire compute facility, taking into account costs such as VM movement/migration and constraints on the placement of certain VMs, to drive higher overall server utilization and lower energy consumption. The firm states that it has demonstrated raising utilization from 20 percent to over 60 percent at one government facility by optimizing the placement of over 150,000 VMs. Lead: Kenneth Tan
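The task-parallel pattern ACCOLADES exploits can be illustrated with a minimal sketch; the simulate_drive_cycle function below is a placeholder stand-in for one vehicle simulation, not the actual ACCOLADES workflow code.

```python
# Minimal task-parallel sketch in the spirit of ACCOLADES: run many
# independent drive-cycle simulations concurrently. simulate_drive_cycle
# is a dummy stand-in, not the real workflow manager or solver.
from concurrent.futures import ProcessPoolExecutor

def simulate_drive_cycle(vehicle_id):
    """Placeholder for one vehicle's drive-cycle simulation."""
    fuel_used = sum(0.01 * step for step in range(1000))  # dummy work
    return vehicle_id, fuel_used

if __name__ == "__main__":
    vehicle_ids = range(1000)  # thousands of independent tasks
    with ProcessPoolExecutor() as pool:
        # Each task is independent, so they map cleanly onto a worker pool.
        results = dict(pool.map(simulate_drive_cycle, vehicle_ids))
    print(f"simulated {len(results)} vehicles")
```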

The next round of HPC Innovation Excellence Award winners will be announced at ISC16 in June 2016.
