The HPC to Enterprise Infrastructure Leap

By Nicole Hemsoth

February 24, 2014

As more companies feel the burden of growing data demands in terms of volume and complexity—not to mention the need to derive results from such data quickly and efficiently—the chasm between what was once considered mainstream enterprise computing and “traditional” high performance computing is narrowing.

As we’ve addressed in other parts of this special series on lessons that HPC can carry into a growing array of enterprise application areas, including those with a range of defined “big data” problems, this merging of HPC and commercial computing has gathered pace over the last few years in particular—directly in line with the data movement, ingestion and processing, memory, efficiency and other challenges enterprise users face.

While HPC has always had a foothold in key commercial segments (financial services, oil and gas, government, etc.), the technologies once reserved for these large-scale commercial areas are filtering down to a wider base of enterprise users. It’s not uncommon lately (in the wake of the hubbub around big data) to hear about insurance companies, web retailers, content and media companies and others taking notice of HPC technologies in new ways. Bill Mannel, General Manager of Compute Servers at SGI, echoed this in a conversation about the HPC to enterprise leap, noting, “Key lessons that commercial and enterprise datacenters can take away from HPC is that infrastructure matters based upon your application, your data, and the quality of service expectations of customers.”

While many won’t disagree with that point, for those with complex applications, infrastructure has to matter in different ways than it used to. As Cray’s VP of Storage and Data Management, Barry Bolding, told us, one of the most important lessons for the commercial segments is productive scalability. “The commercial/enterprise space understands productive virtualization, which is a type of scaling that improves utilization of resources. The area of productive scaling that HPC brings to the table is efficient, productive scalability for complex systems. Scaling to fit an HPC solution in the coming years will require efficient parallel computing (both HW and SW), efficient parallel storage (to ensure no data access bottlenecks) and scalable analytics.”

Bolding says the enterprise is seeing more and more application needs that fit this model of parallel compute, storage and analytics. The energy sector, for example, is using new, complex algorithms for oil and gas exploration, and productive scalability is key to meeting its needs. In this example, parallel, scalable storage and compute are at the core of solving the problems efficiently.
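To make that coupling of parallel compute and parallel storage concrete, here is a deliberately simple sketch (not from the article, and not any vendor’s actual pipeline) that fans per-chunk analysis out across processes with Python’s standard library. The directory name, the “.seg” chunk files, and the analyze() statistic are hypothetical stand-ins; the point is only that spreading compute across workers pays off when the storage layer can feed every worker concurrently.

# Illustrative sketch: parallel, chunked analysis of survey data.
# All file names and the per-chunk statistic are hypothetical placeholders.
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def analyze(chunk_path: Path) -> float:
    """Placeholder per-chunk analysis; a real survey would run a far more
    expensive kernel here (e.g., a migration or inversion step)."""
    data = chunk_path.read_bytes()
    return sum(data) / max(len(data), 1)  # stand-in statistic

def run_survey(chunk_dir: str, workers: int = 8) -> list[float]:
    """Fan the chunks out across processes so compute scales with cores,
    provided the storage behind it can serve all workers at once."""
    chunks = sorted(Path(chunk_dir).glob("*.seg"))
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(analyze, chunks))

if __name__ == "__main__":
    print(run_survey("./survey_chunks"))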

Another key lesson that HPC can bring to bear is adaptive technologies, he says, noting that “for maximum efficiency and TCO it is critical to match the application need to the appropriate underlying technology. This is contrary to the cloud model where little effort is made to match the underlying technology to the application.”

When asked about the infrastructure leap from HPC to enterprise, Paul Dlugosch of Micron explained that “it is the HPC industry that first meets the most critical and difficult problems encountered in scientific and technical computing, and it is true that innovations in the HPC industry often trickle down into mainstream use in commercial/enterprise datacenters.” In some cases, he says, the innovations can migrate all the way down to the client or consumer space. In short, although the HPC industry operates at the top of this hierarchy of compute capability, there are “lessons learned in the HPC industry that have practical application throughout the entire spectrum of compute capability.”

While performance remains an important metric, Dlugosch says a myopic focus on performance can lead toward the top of a pyramid where the performance crown may be acquired but the overall market for the technology developed becomes proportionately smaller. “When performance is the only objective, important opportunities may be missed. A good example would be the disruption imparted on high performance microprocessor vendors by the emerging need for lower power processors where less compute performance was an acceptable trade off. The lesson here, of course, is that focus on high performance may miss very important innovations that are not based on processing performance.”

Performance does indeed drive all aspects of the computing industry, but a sole focus on compute performance can leave a business vulnerable, argues Dlugosch. While the HPC industry can better afford a concentrated focus on compute performance, this does not extend to other segments of the computing industry where performance is only one of several metrics that will determine overall success.

One other area where HPC and enterprise users can connect is in the realm of risk aversion, says Dlugosch. As he explained in a detailed interview:

The old adage that ‘nobody ever got fired for buying IBM’ reflects this point quite well.  Of course, IBM in this case is a proxy for any well established, mature and stable technology provider.  While it may be true that nobody gets fired for buying tried and true technology, entire businesses can fail because they did not recognize important technology inflection points that were coming their way.  There are many popular examples that include Wang Computer (client based word processing), Digital Equipment (personal computer) among others.

The HPC industry is quite used to operating in the domain where the opportunity for failure is high. It is the nature of pushing the boundaries of computing capability. So what lesson might the commercial/enterprise data centers learn from the HPC community in this respect? You must be willing to explore technologies outside the comfort zone defined by incremental or evolutionary improvements. Customers have a long history of driving suppliers and service providers along predictable paths of incremental improvements.

While this may be safe and meet the needs of the immediate business, following this safe path may lead to missed opportunities afforded by new and emerging technologies. In particular, low-end disruptions enabled by new technologies can be detrimental to businesses that are caught off guard. While the HPC industry is naturally focused on the high end of the computing spectrum and has a higher tolerance for risk, commercial/enterprise data centers must also take ownership of innovation and not assume it will come from their technology providers or through customer demands.

The problem of choosing the proper system for a given workload is not just an HPC issue. However, according to some, including Bill Dunmire, Senior Director of Product Marketing at SGI, “High performance computing is generally uncharted territory within enterprise data centers. It is here that ‘clusters’ are utilized for HA (server failover) or server virtualization (e.g., vMotion) as opposed to parallel computing. Shared-memory systems are completely unknown.” He notes that in such cases, “IT will be required to develop expertise in HPC and will need to avoid inefficiencies in performance, scalability, and cost as LOB demands grow.”

Add to that general view the more complex matters of system design and architecture, which, as Jack Dongarra of Oak Ridge National Laboratory and the University of Tennessee told us, lead traditional HPC and enterprise users of advanced computing to two key questions. First, how can/should the internal architecture of HPC systems be changed to make them more suitable for data-driven commercial applications? Second, how can/should external storage systems and their interfaces be adapted in order to efficiently orchestrate, as part of the overall workflow, the movement of data into and out of these systems? At this point, however, these questions seem only to generate more questions rather than any widely accepted (or even plausible) answers.

“Issues of interoperability are closely related with fundamental questions about the architecture and codesign of hardware and software infrastructure,” Dongarra explained. “Unfortunately, these same factors tend to make them relatively intractable. For interoperability has to mean more than just ‘everyone adopts the same standard or the same interface.’ Aside from cases where de facto or de jure monopoly power is exercised, a viable approach to interoperability for infrastructure means designing protocols and interfaces that people voluntarily adopt because they can use them to achieve their functional goals while also achieving deployment scalability and sustainability over time.”

Echoing Jack Dongarra’s questions and potential roadblocks to widespread changes in enterprise computing, HPC researcher Dr. Kirk Cameron of Virginia Tech explained that “the problems of scalability, speed, and complexity manifest acutely at the extreme scales that challenge the HPC community daily. Thus, the incessant need in HPC to maintain competitiveness by pushing simulation fidelity and scale to solve problems of grand importance to a myriad of sciences ensures the rapid adoption of cutting edge technologies.” He points out that certain technologies, such as the Cell Broadband Engine, are vetted and then only briefly embraced by commercial enterprises, while others, such as general purpose graphics processing units (GPGPUs), are vetted and ultimately adapted and integrated into the mainstream, as evidenced by Intel and AMD embracing systems-on-chip technologies with GPGPUs built in. “Much like high-performance car racing drives advances in automobile efficiency, HPC pushes the limits of computing so that commercial/enterprise datacenters can adopt best-in-class techniques and technologies to reduce the burden on their in-house R&D efforts.”

The central question is which of the technologies filtering down from HPC enterprises will seek out and adopt, especially given some of the potential barriers Dongarra and others have mentioned. To arrive at a more thorough answer, we’ll explore several aspects of these topics later this week in coming special sections in the HPC to enterprise series on accelerators, HPC clouds, and overall workflow and software issues.

 
