Affordable Big Data Computing

November 1, 2013

Big Data applications – once limited to a few exotic disciplines – are steadily becoming the dominant feature of modern computing. In industry after industry, advanced instruments and sensor technology are generating massive datasets. Consider just one example: next generation DNA sequencing (NGS). Annual NGS capacity now exceeds 13 quadrillion base pairs (the As, Ts, Gs, and Cs that make up a DNA sequence). Each base pair represents roughly 100 bytes of data (raw, analyzed, and interpreted). Turning the swelling sea of genomic data into useful biomedical information is a classic Big Data challenge, one of many that didn’t exist a decade ago.
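A quick back-of-the-envelope calculation shows the scale: 13 quadrillion base pairs per year at roughly 100 bytes each works out to 13 × 10^15 × 100 ≈ 1.3 × 10^18 bytes, i.e., on the order of an exabyte of new genomic data annually.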

This mainstreaming of Big Data marks an important transformational moment in computation. Datasets in the 10-to-20 terabyte (TB) range are increasingly common. New and advanced algorithms for memory-intensive applications in oil and gas (e.g. seismic data processing), finance (real-time trading), social media (databases), and science (simulation and data analysis), to name but a few, are hard or impossible to run efficiently on commodity clusters.

The challenge is that traditional cluster computing based on distributed memory – which was so successful in bringing down the cost of high performance computing (HPC) – struggles when forced to run applications whose memory requirements exceed the capacity of a single node. Increased interconnect latencies, longer and more complicated software development, inefficient system utilization, and additional administrative overhead are all adverse factors. Conversely, traditional mainframes running a shared memory architecture and a single instance of the OS have always coped well with big data crunching jobs.
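To see why, consider what even a trivial data movement looks like under message passing. The following is a minimal sketch, in MPI, of shipping one array from one node to another; the array length and message tag are arbitrary illustration values, not anything specific to the systems discussed here.

```c
/* Minimal sketch: moving an array between nodes in a distributed-memory
 * cluster with MPI. Every transfer must be written, and checked, by hand. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1000000   /* arbitrary array length for illustration */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *data = malloc(N * sizeof(double));
    if (!data) MPI_Abort(MPI_COMM_WORLD, 1);

    if (rank == 0) {
        for (int i = 0; i < N; i++) data[i] = (double)i;
        /* Pack up the data, ship it, and account for failure. */
        if (MPI_Send(data, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD) != MPI_SUCCESS) {
            fprintf(stderr, "send failed\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }
    } else if (rank == 1) {
        MPI_Status status;
        if (MPI_Recv(data, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status) != MPI_SUCCESS) {
            fprintf(stderr, "receive failed\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }
        /* Only now can this node begin working on the data. */
    }

    free(data);
    MPI_Finalize();
    return 0;
}
```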

“Any application requiring a large memory footprint can benefit from a shared memory computing environment,” says William W. Thigpen, Chief, Engineering Branch, NASA Advanced Supercomputing (NAS) Division. “We first became interested in shared memory to simplify the programming paradigm. So much of what you must do to run on a traditional system is pack up the messages and the data and account for what happens if those messages don’t get there successfully and things like that – there is a lot of error processing that occurs.”

“If you truly take advantage of the shared memory architecture you can throw away a lot of the code you have to develop to run on a more traditional system. I think we are going to see a lot more people looking at this type of environment,” Thigpen says. Not only is development eased, but throughput and accuracy are also improved, the latter by allowing execution of more computationally demanding algorithms.
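For contrast, here is a sketch of the same idea in a shared memory environment, using OpenMP as a stand-in for the high-level shared memory models such systems support. There is nothing to pack, send, or error-check: every thread reads and writes one coherent address space.

```c
/* Sketch of the shared-memory equivalent: no message packing, no sends,
 * no delivery-failure handling. Every thread sees the same array. */
#include <stdio.h>
#include <stdlib.h>

#define N 1000000   /* arbitrary array length for illustration */

int main(void)
{
    double *data = malloc(N * sizeof(double));
    if (!data) return 1;

    double sum = 0.0;

    /* All threads operate directly on one coherent address space. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        data[i] = (double)i;
        sum += data[i];
    }

    printf("sum = %f\n", sum);
    free(data);
    return 0;
}
```

The transfer machinery of the MPI version simply disappears, which is exactly the code Thigpen says can be thrown away.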

Numascale’s Solution

Until now, the biggest obstacle to wider use of shared memory computing has been the high cost of mainframes and high-end ‘super-servers’. Given the ongoing proliferation of Big Data applications, a more efficient and cost-effective approach to shared memory computing is needed. Numascale has developed a technology, NumaConnect, which turns a collection of standard servers with separate memories and I/O into a unified system that delivers the functionality of high-end enterprise servers and mainframes at a fraction of the cost.

  • NumaConnect links commodity servers together to form a single unified system where all processors can coherently access and share all memory and I/O. The combined system runs a single instance of a standard operating system like Linux.
  • Systems based on NumaConnect support all classes of applications using shared memory or message passing through all popular high-level programming models. System size can be scaled to 4K (4,096) nodes, where each node can contain multiple processors. Memory size is limited only by the 48-bit physical address range provided by the Opteron processors, resulting in a record-breaking total system main memory of 256 TB (see the calculation below).
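The 256 TB figure follows directly from the address width: a 48-bit physical address spans 2^48 bytes, and since 2^40 bytes is one terabyte, 2^48 = 2^8 × 2^40 bytes, or 256 TB.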

The result is an affordable shared memory computing option for tackling data-intensive applications. NumaConnect-based systems running with entire datasets in memory are “orders of magnitude faster than clusters or systems based on any form of existing mass-storage devices and will enable data analysis and decision support applications to be applied in new and innovative ways,” says Einar Rustad, Numascale CTO.

The big differentiator for NumaConnect compared to other high-speed interconnect technologies is its shared memory and cache coherency mechanisms. These features allow programs to access any memory location and any memory-mapped I/O device in a multiprocessor system with a high degree of efficiency. The result is scalable systems with a unified programming model that stays the same from the small multi-core machines used in laptops and desktops to the largest imaginable single-system-image machines, which may contain thousands of processors and tens to hundreds of terabytes of main memory.

Early adopters are already demonstrating performance gains and cost savings. A good example is Statoil, the global energy company based in Norway. Processing seismic data requires massive numbers of floating point operations and is normally performed on clusters. Broadly speaking, this kind of processing is done by programs developed for a message-passing paradigm such as MPI. Not all algorithms are suited to message passing, the amount of code required is huge, and the development and debugging process is complex.

Shorten Time To Solution

“We have used development funds to create a foundation for a simpler programming model. The goal is to reduce the time it takes to implement new mathematical models for the computer,” says Knut Sebastian Tungland, Chief Engineer IT at Statoil. To address this issue, Statoil has set up a joint research project with Numascale, which has developed technology to interconnect multiple computers to form a single system with cache coherent shared memory. Statoil was able to run a preferred application for analyzing large seismic datasets on a NumaConnect-enabled system – something that wasn’t practical on a traditional cluster because of the application’s memory access pattern. Not only did the more rigorous application produce more accurate results, but the NumaConnect-based system also completed the task more quickly.

A second example is deployment of a large NumaConnect-based system at the University of Oslo. In this instance, the effort is being funded by the EU project PRACE (Partnership for Advanced Computing in Europe) and includes a 72-node cluster of IBM x3755s. Some of the main applications planned in Oslo include bioscience and computational chemistry. The overall goal is to broadly enable Big Data computing at the university.

“We focus on providing our users with flexible computing resources, including capabilities for handling very large datasets like those found in applications for next generation sequencing for life sciences,” says Dr. Ole W. Saastad, Senior Analyst and HPC expert at USIT, the University of Oslo’s central IT resource department. “Our new system with NumaConnect contains 1,728 processor cores and 4.6 TB of memory. The system can be used as one single system or partitioned into smaller systems where each partition runs one instance of the OS. With proper NUMA awareness, applications with high bandwidth requirements will be able to utilize the combined bandwidth of all the memory controllers and still be able to share data with low latency access through the coherent shared memory.”
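The “NUMA awareness” Saastad mentions is commonly achieved on Linux through first-touch page placement: a memory page is physically allocated on the NUMA node of the thread that first writes it. The sketch below illustrates that general technique under OpenMP; it is an illustration of the principle, not Numascale-specific software, and the array size is an arbitrary assumption.

```c
/* Sketch of NUMA awareness via first-touch placement. On Linux, a page
 * is allocated on the node of the thread that first writes it, so a
 * parallel initialization pass spreads the array across all memory
 * controllers, letting later passes draw on their combined bandwidth. */
#include <stdlib.h>

#define N (1L << 28)   /* illustrative size: 2 GB of doubles */

int main(void)
{
    double *a = malloc(N * sizeof(double));
    if (!a) return 1;

    /* First touch in parallel: each thread faults in "its" pages locally. */
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < N; i++)
        a[i] = 0.0;

    /* Reusing the same static schedule keeps later accesses node-local. */
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < N; i++)
        a[i] = a[i] + 1.0;

    free(a);
    return 0;
}
```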
