HPC Roots Feed Big Data Branches

By Nicole Hemsoth

February 9, 2014

In this segment of our continuing “HPC Lessons for the Wider Enterprise World” series, we’ll take a look at one of the key movements that has pushed HPC into the mainstream view: big data. Overplayed buzzword or not, the phenomenon is driving new awareness of HPC in a growing set of commercial IT circles; pushing traditional HPC vendors into new enterprise territory; and helping the highest ends of both commercial and research computing find a golden era of new tools, frameworks and methodologies for tackling demanding data.

According to the most recent IDC figures, 67% of HPC shops say that they perform what can be categorized as big data analysis. These workloads, which the analyst firm dubs “high performance data analysis” (HPDA), are expected to grow extensively, increasing from $743.8 million in 2012 to almost $1.4 billion in 2017. Additionally, IDC says storage revenue for high performance data analysis on HPC systems will approach $1 billion by 2017.

IDC defines HPDA as data-intensive simulation and analysis, involving tasks with “sufficient data volumes and algorithmic complexity to require HPC resources.” This can include existing simulation or new analytical methods, and a variety of data types (structured, unstructured, both) or potentially the use of graph analytics or Hadoop frameworks, for example.

These are striking figures in their own right, but let’s consider the reverse of these numbers for a moment. While HPC may be adopting tools and techniques that originated in the big data-laden enterprise (the dividing line is nebulous whenever HPC and big data are treated as distinct categories), this series is focused on the lessons about scalability, reliability, efficiency and extensibility that HPC can teach the big data masses.

In our own informal opinion survey of experts across the HPC spectrum, a resounding majority saw clear parallels between HPC and commercial big data but noted key differences in terms of how each camp thinks about hardware and software tools and resources, as well as overall workflow. In sum, the HPC leaders we spoke with for the series saw ample opportunities for HPC technologies to filter out, not just in terms of raw technology but also in the way of processes, methodologies and approaches to addressing large, complex data volumes that require reasonably good performance.

As Bill Kramer, Deputy Project Director for the Blue Waters project at the National Center for Supercomputing Applications (NCSA), echoed, “Today, we see data analysis and data use surpassing much of the performance capability of commodity interconnects and protocols. HPC has dealt with large scale data for many years, and many of the HPC-like technologies, properly adapted, have the potential to enable new and expanded investigations.”

“Some aspects of what we now call big data are certainly novel and innovative, but in many other corners, big data solutions currently are simply re-inventing the wheel—wheels that have been turning in classical HPC for years, if not decades,” says Fritz Ferstl, CTO of Univa. He points to workload management and distributed file systems as prime examples, noting that “even some of the parallel programming paradigms that are being employed in the big data space seem unnecessarily differentiated from what has been evolving and has matured in classical HPC over two decades.”
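To illustrate Ferstl’s point about overlapping paradigms, here is a minimal, purely illustrative Python sketch of the map/reduce pattern that Hadoop popularized. The word-count task and data are invented for the example; the reduction step is conceptually the same collective operation (think MPI_Reduce with a sum operator) that classical HPC codes have relied on for decades.

```python
# A minimal sketch (not from the article): the map/reduce idiom popularized
# by Hadoop mirrors the reduction collectives long used in classical HPC.
from functools import reduce
from collections import Counter

# Invented stand-in for data shards spread across workers.
shards = [
    "hpc roots feed big data branches",
    "big data feeds on hpc lessons",
]

# "Map" phase: each worker emits partial word counts for its shard.
partials = [Counter(shard.split()) for shard in shards]

# "Reduce" phase: combine the partials. Conceptually this is the same
# operation an MPI code performs with MPI_Reduce(..., op=MPI_SUM).
totals = reduce(lambda a, b: a + b, partials)

print(totals.most_common(3))
```

The point is not the few lines of Python but the shape of the computation: partition, compute locally, combine globally, a shape HPC formalized long before the big data era.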

When we asked Jack Dongarra, Distinguished Professor at the University of Tennessee and a lead at Oak Ridge National Laboratory, what lessons HPC has to offer the world of mainstream big data, he offered an answer as nuanced as both technology areas. He explained that while it is widely recognized that “big data” has many meanings, this multiplicity of meanings isn’t necessarily a good thing. Part of the problem is that, like familiar alternatives such as “data intensive,” what counts as big data is relative to other factors, and therefore changes depending on the perspective—processor, memory, bandwidth, storage—from which it is being viewed.

“Straightforward examples of big data applications in this sense are applications that take all of a supercomputer’s memory or more, or that are too complex to process because the relation between computation and data size is non-linear, or that have real-time processing requirements the velocity of which exceeds the I/O bandwidth,” said Dr. Dongarra.
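To make that last condition concrete, a back-of-envelope sketch, using invented numbers rather than figures from the article, shows what it means for data velocity to exceed I/O bandwidth:

```python
# Hypothetical numbers for illustration only: an instrument streaming data
# faster than the filesystem can absorb it forces real-time, in-memory
# processing rather than a store-then-analyze workflow.
ingest_rate_gbs = 12.0     # assumed instrument output, GB/s
io_bandwidth_gbs = 8.0     # assumed sustained filesystem bandwidth, GB/s

if ingest_rate_gbs > io_bandwidth_gbs:
    # Unwritten data accumulates at the difference of the two rates
    # (3600 seconds per hour, 1000 GB per decimal TB).
    backlog_tb_per_hour = (ingest_rate_gbs - io_bandwidth_gbs) * 3600 / 1000
    print(f"I/O cannot keep up; backlog grows by {backlog_tb_per_hour:.1f} TB/hour")
else:
    print("Storage can absorb the stream; offline analysis remains viable")
```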

“Generally speaking,” he said, “there are very few large-scale applications of practical importance that are not data intensive when looked at from some relevant point of view. Applications in the HPC space, whether the data comes from new instruments, from massive simulations, or from distributed sensors, deliver eye-glazing quantities of data at unprecedented rates. From an applications perspective, however, discussions of big data have greatly increased the prominence of ‘data-driven’ applications (such as data analytics, top-down queries and predictive modeling), where the operations are defined and propelled not only by large data volumes and data streams, but also by the complexity or heterogeneity of the data involved.”

Dongarra says that although researchers have been successful for some time in processing computer-generated, semi-structured data (big simulations) and structured observational data (big instruments), “they are now more eager to take on the challenges of high volumes of unstructured and heterogeneous observational data (text, images, medical records, etc.), which often come in massive piles of small units and are asynchronously generated. So in that way, big data is redefining the HPC application landscape.”

Rob Clyde, CEO of Adaptive Computing, reminds us that “all enterprises, not just Fortune 500 companies, are collecting and storing massive amounts of data, from social media for retailers to multi-dimensional seismic imaging in oil and gas and everything in between. However, the enterprise is struggling to extract better insights and leverage the data to make data-driven decisions. The process is very manual and time consuming, with complex dependencies to manage across multiple applications. The end result is that some siloed environments are overutilized while others lie idle.”

To get up to speed, says Clyde, the enterprise can take a play out of the traditional HPC playbook, which has been dealing with big data for a long time. “The requirements are similar to traditional HPC users; however, the players are different and more prolific as HPC hardware becomes more affordable, even for the mid-market.”

His opinions were validated by a recent survey his company conducted. According to the findings, drawn from more than 400 data center managers, administrators and users across a number of verticals, data is primarily being analyzed by home-grown and highly customized applications. The survey also found that 83 percent of respondents believe big data analytics are important to their organization or department, 90 percent would be more satisfied with a better analysis process, and 84 percent rely on a manual process to analyze big data.

Based on that survey, which looked at the big picture across a number of verticals, the company concluded that “the enterprise severely limits its ability to achieve big data insights rapidly and cost-effectively because they do not recognize the differences between traditional IT workloads and big data workloads. Simply put, siloed environments with no workflow automation to process simulations and data analysis fall short in their ability to extract game-changing information from data. In line with our survey findings, we predict that more of the enterprise will adopt HPC to aid their big data efforts.”

Although Clyde and his team at Adaptive are focused on workload automation and large-scale management of workflows, their findings are worth noting because the “siloed environment” problem is encountered in HPC and enterprise settings alike. While we’ll say more about this in the software segment of this special series, it’s worth noting that the complexity challenges extend far beyond the diversity and structure of the data: there is still a profound need for users to put overall workflows into the context of goals, current tools and applications, efficiency and beyond. HPC has come to understand the finer points of doing this at scale, which means its views on adapting workflows to complex environments should not be overlooked by enterprise users seeking to streamline their big data analytics operations.

In essence, much of what Dongarra, Clyde, and others shared for this and other segments of this HPC-to-enterprise series revolves around the topic of workflow. As Jack Dongarra noted, “In today’s society, the processing of digital information has become such a routine part of life that the general idea of creating digital workflows, in this generic sense, increasingly pervades even discussions of personal productivity in popular media.”

He argues that the concept of workflows will also dominate much of the thinking about cyberinfrastructure for all kinds of research in the era of data-driven science. “The problems inherent in working with data that are streaming out of instruments and simulations at peta- or exabyte rates, or of integrating and analyzing massive, multi-dimensional data sets, are simply too difficult for things to be otherwise. In terms of challenges to workflow, many domain sciences that produce and manage big data share common constraints.”

HPC and large-scale enterprise analyst Dan Olds of Gabriel Consulting reiterated some of these ideas, noting that enterprises are “experiencing an unprecedented expansion in the amount of data that’s available to them and potential uses for that data.” Olds says that while sifting through this data will give them insights into their business, along with potential competitive advantages that simply weren’t possible a few years ago, there’s no free lunch: finding the gold nuggets in the data avalanche requires planning, expertise, and investment in the right technologies.

According to Olds, “Business-side analysts are going to demand the ability to sort through massive amounts of raw data in order to find, and test, relationships between disparate factors. For example: How early in autumn will people start thinking about buying winter clothes? Does this vary by location, age, or family size? What’s the best way to get our winter coat-aplooza sale offer in front of the right buyers at the right time? Framing these questions is their job; gathering, storing, and providing the ability to process the data is the job of the data center. Satisfying the analytic demands of the business is causing a lot of sleepless nights for many a data center manager these days.”

“The problems arise from the scale of data and associated compute power needed to process it. Compounding the challenge is the need for speed – enterprise managers need answers to their questions so that they can make quick decisions on pricing, stock levels, and other important issues,” he continued. An answer that comes too late to take advantage of an opportunity is worthless.
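As a toy illustration of the kind of question Olds frames above, the following sketch, with invented column names and data, shows how an analyst might probe when winter-apparel buying begins by region. The challenge he points to is that at enterprise scale the same query runs over data volumes that demand HPC-class storage and compute.

```python
# A toy version of Olds' example question, on invented data: when do
# shoppers in each region first start buying winter apparel?
import pandas as pd

sales = pd.DataFrame({
    "date": pd.to_datetime(
        ["2013-08-30", "2013-09-14", "2013-10-02", "2013-10-02"]),
    "region": ["north", "north", "south", "north"],
    "category": ["coat", "boots", "coat", "gloves"],
})

# Filter to winter apparel, then find the earliest purchase per region.
winter = sales[sales["category"].isin(["coat", "boots", "gloves"])]
first_buy = winter.groupby("region")["date"].min()
print(first_buy)  # earliest winter-apparel purchase observed per region
```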

The overriding theme in both enterprise and research HPC data analytics environments is the hunt for the “big fish” in seas of data. As we turn in the next segments of this series to enabling tools and approaches, including cloud computing, hardware acceleration, software methods and tools, and other aspects, the wealth of knowledge the HPC community holds about managing large, complex data will come into greater focus.

The introductory article in this multi-part series, which appeared earlier in February, can be found here.
