HPC Roots Feed Big Data Branches

By Nicole Hemsoth

February 9, 2014

In this segment of our continuing “HPC Lessons for the Wider Enterprise World” series, we’ll take a look at one of the key movements that has pushed HPC into the mainstream view: big data. Whether or not it’s an overplayed buzzword, the phenomenon is driving new awareness of HPC in a growing set of commercial IT circles, pushing traditional HPC vendors into new enterprise territory, and helping the highest ends of both commercial and research computing find a golden era of new tools, frameworks and methodologies for tackling demanding data.

According to the most recent IDC figures, 67% of HPC shops say that they perform what can be categorized as big data analysis. These workloads, which the analyst firm dubs “high performance data analysis” (HPDA), are expected to grow extensively, with revenue increasing from $743.8 million in 2012 to almost $1.4 billion in 2017. Additionally, IDC says that storage revenue for high performance data analysis on HPC systems will approach $1 billion by 2017.

IDC defines HPDA as data-intensive simulation and analysis, involving tasks with “sufficient data volumes and algorithmic complexity to require HPC resources.” This can include existing simulation or new analytical methods; a variety of data types (structured, unstructured, or both); and, potentially, the use of graph analytics or Hadoop frameworks, for example.

These are striking figures in their own right, but let’s consider the reverse of these numbers for a moment. While HPC may be adopting tools and techniques from the big data-laden enterprise (the dividing lines turn nebulous, terminology-wise, when HPC and big data are separated into distinct classifications), this series is focused on the lessons about scalability, reliability, efficiency and extensibility that HPC can teach the big data masses.

In our own informal opinion survey of experts across the HPC spectrum, a resounding majority saw clear parallels between HPC and commercial big data but noted key differences in terms of how each camp thinks about hardware and software tools and resources, as well as overall workflow. In sum, the HPC leaders we spoke with for the series saw ample opportunities for HPC technologies to filter outward, not just in terms of raw technology but also in processes, methodologies and approaches to addressing large, complex data volumes that require reasonably good performance.

As Bill Kramer, Deputy Project Director for the Blue Waters project at the National Center for Supercomputing Applications (NCSA), noted, “Today, we see data analysis and data use surpassing much of the performance capability of commodity interconnects and protocols. HPC has dealt with large scale data for many years, and many of the HPC-like technologies, properly adapted, have the potential to enable new and expanded investigations.”

“Some aspects of what we now call big data are certainly novel and innovative, but in many other corners, big data solutions currently are simply re-inventing the wheel—wheels that have been turning in classical HPC for years, if not decades,” says Fritz Ferstl, CTO of Univa. He points to workload management and distributed file systems as prime examples, noting that “even some of the parallel programming paradigms that are being employed in the big data space seem unnecessarily differentiated from what has been evolving and has matured in classical HPC over two decades.”
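Ferstl’s overlap argument is easy to see in miniature. Below is a minimal sketch in plain Python (standard library only, with hypothetical input documents) that expresses the same word-count aggregation twice: once in the map/reduce idiom popularized by Hadoop, and once in the scatter/compute/reduce shape a classical message-passing HPC code would take.

```python
# A minimal sketch (plain Python, standard library only, hypothetical input)
# of the overlap Ferstl describes: the Hadoop-style map/reduce idiom and the
# scatter/compute/reduce shape of classical message-passing HPC express the
# same aggregation.
from functools import reduce
from collections import Counter

documents = [
    "hpc big data hpc",
    "data analysis at scale",
    "big data meets hpc",
]

# --- MapReduce style ---
mapped = map(lambda doc: Counter(doc.split()), documents)  # map phase
word_counts_mr = reduce(lambda a, b: a + b, mapped)        # reduce phase

# --- Message-passing style ---
# In a real MPI code each rank would hold one shard and an MPI_Reduce call
# would combine partial results; here the ranks are simulated by a loop.
partials = [Counter(shard.split()) for shard in documents]  # local work per "rank"
word_counts_mp = Counter()
for partial in partials:                                    # the global reduction
    word_counts_mp += partial

assert word_counts_mr == word_counts_mp
print(word_counts_mr.most_common(3))
```

The two halves differ only in who orchestrates the data movement, which is precisely the point: the underlying parallel pattern is the one HPC has refined for decades.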

When we asked Jack Dongarra, Distinguished Professor at the University of Tennessee and researcher at Oak Ridge National Laboratory, about what lessons HPC has to offer the world of mainstream big data, he offered an answer as nuanced as both technology areas. He explained that while it is widely recognized that “big data” has many meanings, this multiplicity of meanings isn’t necessarily a good thing. Part of the problem is that, like familiar alternatives such as “data intensive,” what counts as big data is relative to other factors, and therefore changes depending on the perspective—processor, memory, bandwidth, storage—from which it is being viewed.

“Straightforward examples of big data applications in this sense are applications that take all of a supercomputer’s memory or more, or that are too complex to process because the relation between computation and data size is non-linear, or that have real-time processing requirements the velocity of which exceeds the I/O bandwidth,” said Dr. Dongarra.
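Dongarra’s three criteria lend themselves to back-of-envelope checks. The sketch below works through each one with purely illustrative numbers; the memory, node-count, ingest and bandwidth figures are assumptions for the sake of the example, not measurements from any real system.

```python
# Back-of-envelope checks for each of Dongarra's three criteria.
# Every figure below is an illustrative assumption, not a real system spec.

# 1. Does the working set exceed the machine's memory?
memory_per_node_gb = 64        # assumed node memory
node_count = 1000              # assumed system size
dataset_gb = 80_000            # dataset that must be resident in memory
exceeds_memory = dataset_gb > memory_per_node_gb * node_count

# 2. Is computation non-linear in data size? An O(n^2) all-pairs
#    comparison roughly quadruples the work when the data merely doubles.
def pairwise_ops(n_records):
    return n_records * (n_records - 1) // 2

growth = pairwise_ops(2_000_000) / pairwise_ops(1_000_000)  # ~4x, not 2x

# 3. Does ingest velocity exceed I/O bandwidth? If so, the stream must be
#    processed (or discarded) in flight; it can never all be staged to disk.
ingest_rate_gb_s = 12          # assumed instrument output rate
io_bandwidth_gb_s = 8          # assumed filesystem bandwidth
exceeds_io = ingest_rate_gb_s > io_bandwidth_gb_s

print(exceeds_memory, round(growth, 1), exceeds_io)  # True 4.0 True
```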

“Generally speaking,” he said, “there are very few large-scale applications of practical importance that are not data intensive when looked at from some relevant point of view. Applications in the HPC space, whether their data comes from new instruments, from massive simulations, or from distributed sensors, deliver eye-glazing quantities of data at unprecedented rates. From an applications perspective, however, discussions of big data have greatly increased the prominence of ‘data-driven’ applications (such as data analytics, top-down queries and predictive modeling), where the operations are defined and propelled not only by large data volumes and data streams, but also by the complexity or heterogeneity of the data involved.”

Dongarra says that although researchers have been successful for some time in processing computer-generated, semi-structured data (big simulations) and structured observational data (big instruments), “they are now more eager to take on the challenges of high volumes of unstructured and heterogeneous observational data (text, images, medical records, etc.), which often come in massive piles of small units and are asynchronously generated. So in that way, big data is redefining the HPC application landscape.”

Rob Clyde, CEO of Adaptive Computing, reminds us that “all enterprises, not just Fortune 500 companies, are collecting and storing massive amounts of data, from social media for retailers to multi-dimensional seismic imaging in oil and gas and everything in between. However, the enterprise is struggling to extract better insights and leverage the data to make data-driven decisions. The process is very manual and time consuming, with complex dependencies to manage across multiple applications. The end result is overutilized siloed environments while other resources lay idle.”

To get up to speed, says Clyde, the enterprise can take a play out of the traditional HPC playbook, which has been dealing with big data for a long time. “The requirements are similar to those of traditional HPC users; however, the players are different and more prolific as HPC hardware becomes more affordable, even for the mid-market.”

His opinions were reinforced by a recent survey his company produced. According to its findings, gathered from more than 400 data center managers, administrators and users across a number of verticals, data is primarily being analyzed with home-grown and highly customized applications. The survey also found that 83 percent believe big data analytics is important to their organization or department, yet 90 percent would gain greater satisfaction from a better analysis process and 84 percent rely on a manual process to analyze big data.

Based on that internal survey, which took a big-picture look across a number of verticals, the company concluded that “the enterprise severely limits its ability to achieve big data insights rapidly and cost-effectively because they do not recognize the differences between traditional IT workloads and big data workloads. Simply put, siloed environments with no workflow automation to process simulations and data analysis fall short in their ability to extract game-changing information from data. In line with our survey findings, we predict that more of the enterprise will adopt HPC to aid their big data efforts.”

Although Clyde and his team at Adaptive are focused on workload automation and large-scale management of workflows, their findings are worth noting because the “siloed environment” problem is encountered in HPC and enterprise settings alike. While we’ll talk more about this when we move into the software segment of this special series, the complexity challenges extend far beyond the diversity and structure of the data: there is still a profound need for users to put overall workflows into the context of goals, current tools and applications, efficiency and beyond. HPC has learned the finer points of doing this at scale, which means its views on adapting workflows to complex environments should not be overlooked by enterprise users seeking to streamline their big data analytics operations.
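For a concrete sense of what workflow automation replaces, here is a minimal sketch of dependency-driven execution: a hypothetical five-step simulation-and-analysis pipeline expressed as a graph and run in dependency order via a topological sort. The task names and graph are invented for illustration; a production workload manager of the kind Adaptive builds does far more (scheduling, retries, resource matching), but the dependency resolution at its core looks like this.

```python
# A minimal sketch of dependency-driven workflow execution: a hypothetical
# five-step simulation-and-analysis pipeline run in dependency order using a
# topological sort (graphlib is in the standard library as of Python 3.9).
from graphlib import TopologicalSorter

# Each task maps to the set of tasks that must complete before it can start.
workflow = {
    "ingest":   set(),
    "clean":    {"ingest"},
    "simulate": {"clean"},
    "analyze":  {"clean", "simulate"},
    "report":   {"analyze"},
}

def run(task):
    # A real workflow manager would submit the task to a scheduler here.
    print(f"running {task}")

for task in TopologicalSorter(workflow).static_order():
    run(task)
# -> ingest, clean, simulate, analyze, report
```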

In essence, much of what Dongarra, Clyde, and others shared for this and other segments of this HPC-to-enterprise series revolves around the topic of workflow. As Dongarra noted, “In today’s society, the processing of digital information has become such a routine part of life that the general idea of creating digital workflows, in this generic sense, increasingly pervades even discussions of personal productivity in popular media.”

He argues that the concept of workflows will also dominate much of the thinking about cyberinfrastructure for all kinds of research in the era of data-driven science. “The problems inherent in working with data that are streaming out of instruments and simulations at peta- or exabyte rates, or of integrating and analyzing massive, multi-dimensional data sets, are simply too difficult for things to be otherwise. In terms of challenges to workflow, many domain sciences that produce and manage big data share common constraints.”

HPC and large-scale enterprise analyst Dan Olds, of Gabriel Consulting, reiterated some of these ideas, noting that enterprises are “experiencing an unprecedented expansion in the amount of data that’s available to them and potential uses for that data.” Olds says that while sifting through this data will give them insights into their business, along with potential competitive advantages that simply weren’t possible a few years ago, there’s no free lunch: finding the gold nuggets in the data avalanche requires planning, expertise, and investment in the right technologies.

According to Olds, “Business-side analysts are going to demand the ability to sort through massive amounts of raw data in order to find, and test, relationships between disparate factors. For example: How early in autumn will people start thinking about buying winter clothes? Does this vary by location, age, or family size? What’s the best way to get our winter coat-aplooza sale offer in front of the right buyers at the right time? Framing these questions is their job; gathering, storing, and providing the ability to process the data is the job of the data center. Satisfying the analytic demands of the business is causing a lot of sleepless nights for many a data center manager these days.”
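The kind of exploratory question Olds describes maps naturally onto a grouped aggregation. As a toy illustration only, here is a pandas sketch; the dataframe, its columns and its values are all hypothetical.

```python
# A toy sketch of the exploratory query Olds describes, using pandas.
# The dataframe, its columns and its values are all hypothetical.
import pandas as pd

sales = pd.DataFrame({
    "region":     ["north", "north", "south", "south", "north", "south"],
    "age_band":   ["18-34", "35-54", "18-34", "35-54", "35-54", "18-34"],
    "first_week": [38, 40, 42, 44, 39, 45],  # ISO week of first coat purchase
    "coat_spend": [120.0, 180.0, 90.0, 150.0, 200.0, 75.0],
})

# "How early in autumn do people start buying winter clothes, and does it
# vary by location or age?" -- one grouped aggregation answers both at once.
print(sales.groupby(["region", "age_band"])["first_week"].mean())
print(sales.groupby("region")["coat_spend"].sum())
```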

“The problems arise from the scale of data and associated compute power needed to process it. Compounding the challenge is the need for speed – enterprise managers need answers to their questions so that they can make quick decisions on pricing, stock levels, and other important issues,” he continued. An answer that comes too late to take advantage of an opportunity is worthless.

The overriding theme in both enterprise and research HPC data analytics environments is the search for the “big fish” in seas of data. As we look at enabling tools and approaches in the next segments of this series, including cloud computing, hardware acceleration, software methods and tools, and more, the wealth of knowledge the HPC community has accumulated about managing large, complex data will come into sharper focus.

The introductory article in this multi-part series, which appeared earlier in February, can be found here.
