The HPC to Enterprise Infrastructure Leap

By Nicole Hemsoth

February 24, 2014

As more companies feel the burdens of growing data demands in terms of volume and complexity, not to mention the need to derive results on such data quickly and efficiently, the chasm between what was once considered mainstream enterprise computing and “traditional” high performance computing is narrowing.

As we’ve addressed in other parts of this special series on lessons HPC can carry into a growing array of enterprise application areas, including those with a range of defined “big data” problems, this merging of HPC and commercial computing has been underway with increasing velocity over the last few years in particular, directly in line with momentum around the many data movement, ingestion and processing, memory, efficiency and other challenges enterprise users face.

While HPC has always had a foothold in key commercial segments (financial services, oil and gas, government, etc.), the technologies that were once reserved for these large-scale commercial areas are filtering down to a wider base of existing enterprise entities. It’s not uncommon lately (in the wake of the hubbub around big data) to hear about insurance companies, web retailers, content and media companies and others taking notice of HPC technologies in new ways. Bill Mannel, General Manager of Compute Servers at SGI, echoed this in a conversation about the HPC to enterprise leap, noting, “One key lesson that commercial and enterprise datacenters can take away from HPC is that infrastructure matters based upon your application, your data, and the quality of service expectations of customers.”

While many won’t disagree with that point, for those with complex applications, infrastructure has to matter in different ways than it used to. As Cray’s VP of Storage and Data Management, Barry Bolding, told us, one of the most important lessons for the commercial segments is productive scalability. “The commercial/enterprise space understands productive virtualization, which is a type of scaling that improves utilization of resources. The area of productive scaling that HPC brings to the table is efficient, productive scalability for complex systems. Scaling to fit an HPC solution in the coming years will require efficient parallel computing (both HW and SW), efficient parallel storage (to ensure no data access bottlenecks) and scalable analytics.”

Bolding says the enterprise is seeing more and more application needs that fit this model of parallel compute, storage and analytics. The energy sector, for example, is using new, complex algorithms for oil and gas exploration, and productive scalability is key to meeting its needs. In this example, parallel, scalable storage and compute are at the core of solving the problems efficiently.
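Bolding’s notion of productive scalability is easy to quantify with Amdahl’s law, which bounds the speedup of a workload by its serial fraction. The short Python sketch below is purely illustrative (the parallel fractions and node counts are invented, not drawn from Cray), but it shows why efficient parallelism in both hardware and software matters long before node counts get large:

    # Amdahl's law: ideal speedup when only part of a workload parallelizes.
    # The parallel fractions and node counts below are invented for illustration.

    def speedup(parallel_fraction, nodes):
        """Ideal speedup on `nodes` when `parallel_fraction` of the work scales."""
        serial = 1.0 - parallel_fraction
        return 1.0 / (serial + parallel_fraction / nodes)

    for p in (0.90, 0.99):            # 90% vs. 99% of the code runs in parallel
        for n in (16, 256, 4096):     # hypothetical cluster sizes
            s = speedup(p, n)
            print(f"p={p:.2f}  nodes={n:>4}  speedup={s:7.1f}  efficiency={s / n:6.1%}")

A 90-percent-parallel code tops out near a 10x speedup no matter how many nodes are thrown at it, which is exactly why removing serial bottlenecks in compute and storage alike is the point of the exercise.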

Another key lesson that HPC can bring to bear is adaptive technologies, he says, noting that “for maximum efficiency and TCO it is critical to match the application need to the appropriate underlying technology. This is contrary to the cloud model where little effort is made to match the underlying technology to the application.”
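As a purely hypothetical sketch of what such matching might look like in practice (the workload profiles and node descriptions below are our own invention, not Cray’s placement logic), consider routing each application to hardware according to its dominant bottleneck:

    # Hypothetical example: route a workload to the node type that matches its
    # dominant bottleneck instead of treating all resources as interchangeable.

    NODE_TYPES = {
        "compute_bound": "many-core nodes with high FLOPS per socket",
        "memory_bound": "large-memory nodes with high bandwidth per core",
        "io_bound": "nodes attached to a parallel file system tier",
    }

    def place(profile):
        """Pick a node type from a coarse workload profile (fractions sum to 1)."""
        bottleneck = max(profile, key=profile.get)
        return NODE_TYPES[bottleneck]

    # A data-heavy job that spends most of its time streaming from storage:
    print(place({"compute_bound": 0.2, "memory_bound": 0.1, "io_bound": 0.7}))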

When asked about the infrastructure leap from HPC to enterprise, Paul Dlugosch of Micron explained that “It is the HPC industry that first meets the most critical and difficult problems encountered in scientific and technical computing and it is true that innovations in the HPC industry often trickle down into mainstream use in commercial/enterprise datacenters.” In some cases, he says, the innovations can migrate all the way down to the client or consumer space. In short, although the HPC industry operates at the top of this hierarchy of compute capability, there are “lessons learned in the HPC industry that have practical application throughout the entire spectrum of compute capability.”

While performance remains an important metric, Dlugosch says a myopic focus on performance can lead a company toward the top of a pyramid where the performance crown may be acquired but the overall market for the technology developed becomes proportionately smaller. “When performance is the only objective, important opportunities may be missed. A good example would be the disruption imparted on high performance microprocessor vendors by the emerging need for lower power processors, where less compute performance was an acceptable trade-off. The lesson here, of course, is that a focus on high performance may miss very important innovations that are not based on processing performance.”
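The arithmetic behind that disruption is simple enough to sketch; the figures below are invented for illustration rather than taken from any vendor:

    # Invented figures: a slower chip can still win on performance per watt,
    # the metric a power-constrained market actually buys on.
    chips = {
        "high_performance": {"perf": 100.0, "watts": 150.0},
        "low_power": {"perf": 60.0, "watts": 45.0},
    }
    for name, c in chips.items():
        print(f"{name}: {c['perf'] / c['watts']:.2f} perf/watt")
    # low_power: 1.33 perf/watt vs. high_performance: 0.67,
    # despite delivering 40% less raw performance.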

Performance does indeed drive all aspects of the computing industry, but a sole focus on compute performance can leave a business vulnerable, argues Dlugosch. While the HPC industry can better afford a concentrated focus on compute performance, this does not extend to other segments of the computing industry where performance is only one of several metrics that will determine overall success.

One other area where HPC and enterprise users can connect is in the realm of risk aversion, says Dlugosch. As he explained in a detailed interview:

The old adage that ‘nobody ever got fired for buying IBM’ reflects this point quite well. Of course, IBM in this case is a proxy for any well-established, mature and stable technology provider. While it may be true that nobody gets fired for buying tried and true technology, entire businesses can fail because they did not recognize important technology inflection points that were coming their way. There are many popular examples, including Wang Computer (client-based word processing) and Digital Equipment (personal computer), among others.

The HPC industry is quite used to operating in the domain where the opportunity for failure is high. It is the nature of pushing the boundaries of computing capability. So what lesson might the commercial/enterprise data centers learn from the HPC community in this respect? You must be willing to explore technologies outside the comfort zone defined by incremental or evolutionary improvements. Customers have a long history of driving suppliers and service providers along predictable paths of incremental improvements.

While this may be safe and meet the needs of the immediate business, following this safe path may lead to missed opportunities afforded by new and emerging technologies. In particular, low-end disruptions enabled by new technologies can be detrimental to businesses that are caught off guard. While the HPC industry is naturally focused on the high end of the computing spectrum and has a higher tolerance for risk, commercial/enterprise data centers must also take ownership of innovation and not assume it will come from their technology providers or through customer demands.

The problem of choosing the proper system for a given workload is not just an HPC issue. However, according to some, including Bill Dunmire, Senior Director of Product Marketing at SGI, “High performance computing is generally uncharted territory within enterprise data centers. It is here that ‘clusters’ are utilized for HA (server failover) or server virtualization (e.g. vMotion) as opposed to parallel computing. Shared-memory systems are completely unknown.” He notes that in such cases, “IT will be required to develop expertise in HPC and will need to avoid inefficiencies in performance, scalability, and cost as LOB demands grow.”

Add to that general view the more complex matters of system design and architecture, which, as Jack Dongarra of Oak Ridge National Laboratory and the University of Tennessee told us, lead traditional HPC and enterprise users of advanced computing to two key questions. First, how can/should the internal architecture of HPC systems be changed to make them more suitable for data-driven commercial applications? Second, how can/should external storage systems and their interfaces be adapted in order to efficiently orchestrate, as part of the overall workflow, the movement of data into and out of these systems? At this point in time, however, these questions seem only to generate more questions rather than any widely accepted (or even plausible) answers.

“Issues of interoperability are closely related with fundamental questions about the architecture and codesign of hardware and software infrastructure,” Dongarra explained. “Unfortunately, these same factors tend to make them relatively intractable. For interoperability has to mean more than just ‘everyone adopts the same standard or the same interface.’ Aside from cases where de facto or de jure monopoly power is exercised, a viable approach to interoperability for infrastructure means designing protocols and interfaces that people voluntarily adopt because they can use them to achieve their functional goals while also achieving deployment scalability and sustainability over time.”

Echoing Jack Dongarra’s questions and potential roadblocks to widespread changes in enterprise computing, HPC researcher Dr. Kirk Cameron of Virginia Tech explained that “the problems of scalability, speed, and complexity manifest acutely at the extreme scales that challenge the HPC community daily. Thus, the incessant need in HPC to maintain competitiveness by pushing simulation fidelity and scale to solve problems of grand importance to a myriad of sciences ensures the rapid adoption of cutting edge technologies.” He points out that certain technologies, such as the Cell Broadband Engine, are vetted and then only briefly embraced by commercial enterprises, while others, such as general purpose graphics processing units (GPGPUs), are vetted and ultimately adapted and integrated into the mainstream, as evidenced by Intel and AMD embracing systems-on-chip technologies with GPGPUs built in. “Much like high-performance car racing drives advances in automobile efficiency, HPC pushes the limits of computing so that commercial/enterprise datacenters can adopt best-in-class techniques and technologies to reduce the burden on their in-house R&D efforts.”

The central question is which of the technologies filtering down from HPC enterprises will actually seek out and adopt, especially given the potential barriers Dongarra and others have mentioned. To arrive at a more thorough answer, we’ll explore a few aspects of these topics later this week in coming special sections of the HPC to enterprise series, covering accelerators, HPC clouds and overall workflow/software issues.
