Big Data Rains Down on Seattle

By Nicole Hemsoth

October 20, 2011

The running theme for SC11 will be data-intensive science, with a large number of presentations and sessions focused on the problems and new developments that “big data” has spawned in technical and scientific computing.

According to John Johnson, conference thrust chair and associate division director at Pacific Northwest National Laboratory, “Data is a huge challenge in science today…the rapid advancements in data collection and generation are challenging traditional methods of storing, managing and analyzing the information.” He says that this year the supercomputing community is “being called upon to rise to the data challenge and develop methods for dealing with the exponential growth of data and strategies for analyzing and storing large data sets.”

With this theme in mind, we wanted to call your attention to some select sessions and special events at SC11 for those who are exploring data-intensive computing. As Johnson noted, the main issues are the analysis, management and storage of large data sets, so we’ll organize our “must see” picks for the show along those lines, adding visualization as a fourth important topic for data-intensive scientific computing.

This year’s emphasis is in line with the announcement of the Graph 500 list, which will be presented on Tuesday afternoon and will showcase the top-of-the-line systems (according to the benchmark, anyway) for data-intensive computing applications. The organizers hope the session that follows will provide an opportunity for discussion focused on the evolution of the benchmark and the future of data-intensive science. More information about this notable list can be found at the Graph 500 site.
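
For those unfamiliar with the list, the Graph 500 benchmark ranks machines not on floating-point throughput but on breadth-first search (BFS) over an enormous synthetic graph, reported in traversed edges per second (TEPS). As a rough illustration only (real entries run a distributed BFS across billions of edges), here is a toy single-node sketch of the kernel’s logic in Python:

```python
from collections import deque

def bfs(adj, source):
    """Level-by-level breadth-first search over an adjacency list.

    Graph 500 scores systems on how fast they run a kernel like this
    (measured in traversed edges per second) on huge synthetic graphs;
    `adj` maps each vertex to a list of its neighbors.
    """
    parent = {source: source}
    frontier = deque([source])
    while frontier:
        v = frontier.popleft()
        for w in adj[v]:
            if w not in parent:   # first visit assigns the BFS parent
                parent[w] = v
                frontier.append(w)
    return parent

# Toy five-vertex graph, purely for illustration.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(bfs(adj, 0))
```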

In advance of the topical breakdown, however, it is worth mentioning that there is a thorough introduction to data-intensive computing that runs for the first half of the morning on Monday. This session, presented by Robert Grossman from the University of Chicago and Collin Bennett from the Open Data Group, will offer the “big picture” of data-intensive computing by touching on utility clouds (such as Amazon’s) and data clouds, such as those provided by Hadoop. The pair will also introduce managing scientific datasets using distributed file systems like the Hadoop Distributed File System and NoSQL databases like HBase, in addition to parallel programming frameworks, including MapReduce, Hadoop Streaming and related techniques. It’s a lot to cover in one short morning, but the presenters hope to illustrate the role of these and other tools in managing large datasets. If your Monday morning is free and you want an initial big data deep dive, this is probably the best session early in the conference.
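
For readers new to the programming model the session covers, a MapReduce job reduces to two small functions: a map step that emits key/value pairs and a reduce step that folds together all values sharing a key. Here is a rough sketch of the canonical word-count example written as a pair of Hadoop Streaming scripts (the file names mapper.py and reducer.py are our own placeholders, not part of the session):

```python
#!/usr/bin/env python
# mapper.py -- a Hadoop Streaming mapper: reads raw text on stdin
# and emits one "word<TAB>1" pair per word.
import sys

for line in sys.stdin:
    for word in line.split():
        print("%s\t1" % word)
```

```python
#!/usr/bin/env python
# reducer.py -- a Hadoop Streaming reducer: Hadoop sorts mapper output
# by key before the reduce phase, so all counts for a word arrive
# together and a running sum suffices.
import sys

current_word, count = None, 0
for line in sys.stdin:
    word, n = line.rstrip("\n").split("\t", 1)
    if word == current_word:
        count += int(n)
    else:
        if current_word is not None:
            print("%s\t%d" % (current_word, count))
        current_word, count = word, int(n)

if current_word is not None:
    print("%s\t%d" % (current_word, count))
```

An illustrative invocation (paths are placeholders) would run both scripts through the streaming jar: hadoop jar hadoop-streaming.jar -input books/ -output counts/ -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py.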

Analysis

The Second SC Workshop on Petascale Data Analytics: Challenges and Opportunities, which runs all day on Monday, will provide a dense overview of the growth of data-intensive applications (and dataset sizes) and show how trends like cloud computing are becoming a way to handle the peak loads and large data demands of emerging applications. The workshop will be hosted by researchers from Oak Ridge National Laboratory and the University of Minnesota.

Another day-long Monday workshop focused on data-intensive computing will be presented by Ian Taylor from Cardiff University and Johan Montagnat from CNRS. This event, the sixth Workshop on Workflows in Support of Large-Scale Science, will focus on the “many facets of data-intensive workflow management systems, ranging from job execution to service management and the coordination of data, service and job dependencies.” The presenters hope to cover a range of related issues throughout the day, including the representation and enactment of data-intensive workflows; the design of workflow composition interfaces; workflow mapping techniques that can optimize workflow execution; workflow enactment engines that must cope with failures in the application and execution environment; and a number of computer science problems related to scientific workflows, such as semantic technologies, compiler methods, and fault detection and tolerance.

More analysis-related sessions of note include:

Semantic Graph Database Processing

Evaluating NoSQL for Enterprise Applications

Using Semantic Web Technologies on HPC Clouds

Management

There are a number of deeper, specialized sessions on the management of big data, but a few hold promise for the non-specialist interested in the tools each session covers. For instance, Monday’s “Big Data Means Your Metadata Must Work” uses real-world examples to survey the range of tools for capturing and using metadata. The presenters hope to give attendees a better sense of the many metadata tools that are available and how they can be used to help share big data.

Other management-related sessions to note include:

Parallel Index and Query for Large Scale Data Analysis

Hadoop Acceleration Through Network Levitated Merge

Open source file systems – Transitioning from Petascale to Exascale

I/O Streaming Evaluation of Batch Queries for Data-Intensive Computational Turbulence

Storage

With estimates predicting that data growth will outpace Moore’s Law, reaching 1.8 zettabytes by the end of this year, and that file-based data will grow to 75 times its current volume over the next decade, the storage piece of the data-intensive computing puzzle is among the most important.

There are a number of storage presentations during the show, including one from Nick Kirsh of EMC/Isilon, called “Big Data, Big Opportunity: Maximizing the Value of Data in HPC Environments.” Kirsh plans to present “real-life implementations in which scale-out storage dramatically accelerated data and server performance, speeding time-to-results in critical HPC projects to extract maximum value from HPC data.” He also plans to address how implementing scale-out storage can help eliminate the bottlenecks that HPC users with large datasets encounter.

Other storage-related sessions to note include:

The Sixth Parallel Data Storage Workshop (all-day event)

Terascala – Enabling Fast, Easy to Manage Storage Appliances

Visualization

One of the highlights this year will be the Scientific Visualization Showcase, which will demonstrate how visualization is being used to model everything from the beginning of the universe to jet engines. While the showcase is guaranteed to be great eye candy, it is far from the only draw: there is a relatively large number of visualization presentations this year.

Visualization events outside of the showcase include a workshop on ultrascale visualization presented by Kwan-Liu Ma from the University of California, Davis and Michael Papka from Argonne National Laboratory. The workshop, which runs from 9:00 to 5:30 on the Sunday before the conference kickoff, will address new ways to exploit petascale data to its fullest with exascale datasets on the horizon. The two will spend the day covering “the latest and greatest research innovations in large data visualization and how these innovations impact scientific supercomputing and the discovery process.”

A number of the instructional sessions will touch on various elements of large-scale data analysis and visualization, including a tutorial on the open source visualization and analysis application ParaView, which allows users to visualize large data sets in parallel. Beyond the application-specific material, the presenters plan to offer more general guidance on visualizing the massive simulations that run on supercomputers, and to walk through the installation and setup of ParaView.
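
To give a flavor of what driving ParaView from a script looks like, here is a minimal batch-mode sketch using its Python interface, paraview.simple; the input file data.vtu and the isosurface value are placeholders of our own, and the tutorial itself may of course cover different material:

```python
# Minimal batch ParaView sketch: load a dataset, extract an isosurface,
# and save a rendered frame. Run with pvpython (serial) or pvbatch (MPI).
from paraview.simple import *

reader = OpenDataFile("data.vtu")  # placeholder file; any supported format works

contour = Contour(Input=reader)    # extract an isosurface from the data
contour.Isosurfaces = [0.5]        # dataset-specific value; adjust for your data

Show(contour)                      # add the result to the render view
Render()                           # draw the scene (off-screen in batch mode)
SaveScreenshot("contour.png")      # write the rendered frame to disk
```

Run serially with pvpython, or under MPI with pvbatch to exercise the parallel rendering path the tutorial highlights.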

Other visualization-related sessions to note include:

Large-Scale Data Visualization for Data-Intensive and High-Dimensional Scientific Data Analysis

World-highest Resolution Global Atmospheric Model and Its Performance on the Earth Simulator

These lists certainly don’t do justice to the wide range of sessions to choose from across the data-intensive computing spectrum, and they don’t even begin to touch on the many sessions with clear HPC/big data crossover appeal. Still, we look forward to seeing you all in Seattle this year. Stop by our booth to share insights you’ve gleaned from these and other presentations, won’t you?
