HPC Roots Feed Big Data Branches

By Nicole Hemsoth

February 9, 2014

In this segment of our continuing “HPC Lessons for the Wider Enterprise World” series, we’ll take a look at one of the key movements that has pushed HPC into the mainstream view: big data. Whether or not it’s an overplayed buzzword, the phenomenon is driving new awareness of HPC in a growing set of commercial IT circles, pushing traditional HPC vendors into new enterprise territory, and helping the highest ends of both commercial and research computing find a new golden era of tools, frameworks and methodologies for tackling demanding data.

According to the most recent IDC figures, 67% of HPC shops say that they perform what can be categorized as big data analysis. These workloads, which the analyst firm dubs “high performance data analysis” (HPDA), are expected to grow extensively, increasing from $743.8 million in 2012 to almost $1.4 billion in 2017. Additionally, IDC says storage revenue for high performance data analysis on HPC systems will approach one billion dollars by 2017.

IDC defines HPDA as data-intensive simulation and analysis, involving tasks with “sufficient data volumes and algorithmic complexity to require HPC resources.” This can include established simulation as well as newer analytical methods, a variety of data types (structured, unstructured, or both), and approaches such as graph analytics or Hadoop frameworks.
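To make the “Hadoop frameworks” part of that definition concrete, here is a minimal sketch of the MapReduce pattern those frameworks popularized: a word count expressed as separate map, shuffle, and reduce phases. It is written in plain Python rather than against any particular Hadoop API, and the sample documents are invented for illustration.

```python
# Minimal sketch of the MapReduce pattern popularized by Hadoop-style
# frameworks: a word count split into map, shuffle, and reduce phases.
# Plain Python stands in for the distributed runtime; the sample
# documents are illustrative only.
from collections import defaultdict

def map_phase(document):
    """Emit a (word, 1) pair for every word in one document."""
    for word in document.lower().split():
        yield word, 1

def shuffle(pairs):
    """Group intermediate pairs by key, as the framework would between phases."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    """Combine all counts observed for one word."""
    return key, sum(values)

documents = [
    "simulation data meets analytics",
    "analytics on simulation output at scale",
]

intermediate = [pair for doc in documents for pair in map_phase(doc)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(intermediate).items())
print(counts)  # e.g. {'simulation': 2, 'analytics': 2, ...}
```

In a real deployment the map and reduce functions run in parallel across many nodes and the shuffle moves data over the network; the appeal of the model is that the analyst only writes the two small functions.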

Those IDC figures are striking in their own right, but let’s consider the reverse of these numbers for a moment. While HPC might be adopting tools and techniques drawn from the big data-laden enterprise (the dividing lines are nebulous when HPC and big data are separated into distinct classifications), this series is focused on the lessons about scalability, reliability, efficiency and extensibility that HPC can teach to the big data masses.

In our own informal opinion survey of experts across the HPC spectrum, a resounding majority saw clear parallels between HPC and commercial big data but noted key differences in how each camp thinks about hardware and software tools and resources, as well as overall workflow. In sum, the HPC leaders we spoke with for the series saw ample opportunities for HPC technologies to filter outward, not just in terms of raw technology, but also in processes, methodologies and approaches to addressing large, complex data volumes that require reasonably good performance.

As Bill Kramer, Deputy Project Director for the Blue Waters project at the National Center for Supercomputing Applications (NCSA), put it, “Today, we see data analysis and data use surpassing much of the performance capability of commodity interconnects and protocols. HPC has dealt with large scale data for many years, and many of the HPC-like technologies, properly adapted, have the potential to enable new and expanded investigations.”

“Some aspects of what we now call big data are certainly novel and innovative, but in many other corners, big data solutions currently are simply re-inventing the wheel—wheels that have been turning in classical HPC for years, if not decades,” says Fritz Ferstl, CTO of Univa. He points to workload management and distributed file systems as prime examples, noting that “even some of the parallel programming paradigms that are being employed in the big data space seem unnecessarily differentiated from what has been evolving and has matured in classical HPC over two decades.”

When we asked Jack Dongarra, Distinguished Professor at the University of Tennessee and lead at Oak Ridge National Lab, about what lessons HPC has to offer the world of mainstream big data, he offered an answer as nuanced as both technology areas. He explained that while it is widely recognized that “big data” has many meanings, this multiplicity of meanings isn’t necessarily a good thing. Part of the problem is that, like familiar alternatives such as “data intensive,” what counts as big data is relative to other factors, and therefore changes depending on the perspective (processor, memory, bandwidth, storage) from which it is being viewed.

“Straightforward examples of big data applications in this sense are applications that take all of a supercomputer’s memory or more, or that are too complex to process because the relation between computation and data size is non-linear, or that have real-time processing requirements the velocity of which exceeds the I/O bandwidth,” said Dr. Dongarra.

“Generally speaking,” he said, “there are very few large-scale applications of practical importance that are not data intensive when looked at from some relevant point of view. Applications in the HPC space, whether the data comes from new instruments, from massive simulations, or from distributed sensors, deliver eye-glazing quantities of data at unprecedented rates. From an applications perspective, however, discussions of big data have greatly increased the prominence of ‘data-driven’ applications (such as data analytics, top-down queries and predictive modeling), where the operations are defined and propelled not only by large data volumes and data streams, but also by the complexity or heterogeneity of the data involved.”

Dongarra says that although researchers have been successful for some time in processing computer-generated, semi-structured data (big simulations) and structured observational data (big instruments), “they are now more eager to take on the challenges of high volumes of unstructured and heterogeneous observational data (text, images, medical records, etc.), which often come in massive piles of small units and are asynchronously generated. So in that way, big data is redefining the HPC application landscape.”
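Dongarra’s earlier framing suggests a rough back-of-envelope test: a workload starts to look like big data relative to a particular machine when its working set exceeds aggregate memory, or when data arrives faster than the I/O subsystem can absorb it. The short sketch below checks those two conditions; every figure in it is hypothetical and chosen only for illustration.

```python
# Back-of-envelope check of two of the conditions Dongarra describes:
# does the working set exceed aggregate memory, and does the data
# arrival rate exceed available I/O bandwidth? All numbers below are
# hypothetical and purely illustrative.
TB = 1e12  # bytes in a (decimal) terabyte

system = {
    "aggregate_memory_bytes": 1_000 * TB,   # hypothetical 1 PB of RAM
    "io_bandwidth_bytes_per_s": 1 * TB,     # hypothetical 1 TB/s to storage
}

workload = {
    "working_set_bytes": 1_500 * TB,        # hypothetical 1.5 PB working set
    "arrival_rate_bytes_per_s": 1.2 * TB,   # hypothetical 1.2 TB/s stream
}

memory_bound = workload["working_set_bytes"] > system["aggregate_memory_bytes"]
velocity_bound = (workload["arrival_rate_bytes_per_s"]
                  > system["io_bandwidth_bytes_per_s"])

print(f"working set exceeds memory:     {memory_bound}")
print(f"arrival rate exceeds bandwidth: {velocity_bound}")
```

The point is not the particular numbers but the relativity Dongarra stresses: swap in a larger machine and the same workload may stop counting as “big data” at all.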

Rob Clyde, CEO of Adaptive Computing, reminds us that “all enterprises, not just Fortune 500 companies, are collecting and storing massive amounts of data, from social media for retailers to multi-dimensional seismic imaging in oil and gas and everything in between. However, the enterprise is struggling to extract better insights and leverage the data to make data-driven decisions. The process is very manual and time consuming, with complex dependencies that require managing multiple applications. The end result is that some siloed environments are overutilized while others sit idle.”

To get up to speed, says Clyde, the enterprise can take a play out of the playbook of traditional HPC, which has been dealing with big data for a long time. “The requirements are similar to traditional HPC users; however, the players are different and more prolific as HPC hardware becomes more affordable, even for the mid-market.”

His opinions were validated by a recent survey his company produced. The findings, drawn from more than 400 data center managers, administrators and users across a number of verticals, show that data is primarily being analyzed with home-grown and highly customized applications. The survey also found that 83 percent of respondents believe big data analytics are important to their organization or department, that 90 percent would be more satisfied with a better analysis process, and that 84 percent rely on a manual process to analyze big data.

Based on that internal survey, which took in the big picture across a number of verticals, the company concluded that “the enterprise severely limits its ability to achieve big data insights rapidly and cost-effectively because they do not recognize the differences between traditional IT workloads and big data workloads. Simply put, siloed environments with no workflow automation to process simulations and data analysis fall short in their ability to extract game-changing information from data. In line with our survey findings, we predict that more of the enterprise will adopt HPC to aid their big data efforts.”

Although Clyde and his team at Adaptive are focused on workload automation and large-scale management of workflows, their findings are worth noting because the “siloed environment” problem is encountered in both HPC and enterprise settings. We’ll say more about this in the software segment of this special series, but the complexity challenges extend far beyond the diversity and structure of the data: users still need to put their overall workflows into the context of goals, current tools and applications, efficiency and more. HPC has learned the finer points of doing this at scale, which means its views on adapting workflows to complex environments should not be overlooked by enterprise users seeking to streamline their big data analytics operations.
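The workflow-automation point lends itself to a small illustration. The toy scheduler below is not Adaptive Computing’s product or any vendor’s API; it simply expresses a few hypothetical simulation and analysis tasks as a dependency graph and runs them in dependency order, which is the basic service a workflow engine provides in place of hand-run steps in siloed environments.

```python
# Toy workflow sketch: tasks with explicit dependencies executed in
# dependency order. The task names are hypothetical; a real workflow
# engine would also handle scheduling, retries, and resource placement.
from graphlib import TopologicalSorter  # standard library, Python 3.9+

def run(task_name):
    print(f"running {task_name}")

# Each task maps to the set of tasks that must finish before it starts.
workflow = {
    "ingest_sensor_data": set(),
    "run_simulation": set(),
    "merge_results": {"ingest_sensor_data", "run_simulation"},
    "analytics_report": {"merge_results"},
}

for task in TopologicalSorter(workflow).static_order():
    run(task)
# The two independent tasks run first (in either order), then the merge,
# then the report -- the ordering that workflow automation enforces.
```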

In essence, much of what Dongarra, Clyde, and others shared for this and other segments of this HPC-to-enterprise series revolves around the topic of workflow.  As Jack Dongarra noted, “In today’s society, the processing of digital information has become such a routine part of life that the general idea of creating digital workflows, in this generic sense, increasingly pervades even discussions of personal productivity in popular media.”

He argues that the concept of workflows will also dominate much of the thinking about cyberinfrastructure for all kinds of research in the era of data-driven science. “The problems inherent in working with data that are streaming out of instruments and simulations at peta- or exabyte rates, or of integrating and analyzing massive, multi-dimensional data sets, are simply too difficult for things to be otherwise. In terms of challenges to workflow, many domain sciences that produce and manage big data share common constraints.”

HPC and large-scale enterprise analyst Dan Olds of Gabriel Consulting reiterated some of these ideas, noting that enterprises are “experiencing an unprecedented expansion in the amount of data that’s available to them and potential uses for that data.” Olds says that while sifting through this data will give them insights into their business, along with potential competitive advantages that simply weren’t possible a few years ago, there’s no free lunch: finding the gold nuggets in the data avalanche requires planning, expertise, and investment in the right technologies.

According to Olds, “Business-side analysts are going to demand the ability to sort through massive amounts of raw data in order to find, and test, relationships between disparate factors. For example: How early in autumn will people start thinking about buying winter clothes? Does this vary by location, age, or family size? What’s the best way to get our winter coat-aplooza sale offer in front of the right buyers at the right time? Framing these questions is their job; gathering, storing, and providing the ability to process the data is the job of the data center. Satisfying the analytic demands of the business is causing a lot of sleepless nights for many a data center manager these days.”

“The problems arise from the scale of data and associated compute power needed to process it. Compounding the challenge is the need for speed – enterprise managers need answers to their questions so that they can make quick decisions on pricing, stock levels, and other important issues,” he continued. An answer that comes too late to take advantage of an opportunity is worthless.
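A toy version of the kind of question Olds sketches might look like the snippet below, which groups hypothetical winter-coat purchases by region to see which market starts shopping first. The records and field meanings are invented for illustration; the real challenge, as he notes, is running this kind of analysis over massive raw data quickly enough for the answer to still matter.

```python
# Toy illustration of the retail question above: when do shoppers in each
# region start buying winter coats? The records are fabricated; a real
# analysis would scan the retailer's full transaction history.
from collections import defaultdict
from statistics import mean

# (region, ISO week of the year in which a winter coat was purchased)
purchases = [
    ("north", 38), ("north", 39), ("north", 40),
    ("south", 44), ("south", 46), ("south", 45),
]

weeks_by_region = defaultdict(list)
for region, week in purchases:
    weeks_by_region[region].append(week)

for region, weeks in sorted(weeks_by_region.items()):
    print(f"{region}: average purchase week {mean(weeks):.1f}")
# A marketer would time the coat sale offer earlier for whichever region
# shows the earlier average purchase week.
```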

The overriding theme in both enterprise and research HPC data analytics environments is the search for the “big fish” in seas of data. As we look in upcoming segments of this series at enabling tools and approaches, including cloud computing, hardware acceleration, and software methods and tools, the HPC community’s wealth of knowledge about managing large, complex data will come into sharper focus.

The introductory article in this multi-part series, which appeared earlier in February, can be found here.
