Japan’s Extreme Scale Computing Expert Talks Big Data

By Nages Sieslack

May 5, 2014

The International Supercomputing Conference (ISC’14) has invited one of Japan’s leading HPC experts, Professor Satoshi Matsuoka, to deliver a keynote titled “If You Can’t Beat Them, Lead Them – Convergence of Supercomputing and Next Generation ‘Extreme’ Big Data.”

In this thought-provoking talk on Tuesday, June 24, Matsuoka will share why he believes that supercomputer architectures will converge with those of big data and serve a crucial technological role for the industry. He will support his assertion with a number of recent Japanese research projects in this area, including the JST-CREST “Extreme Big Data” project. To understand more about these projects and where they fit into the larger scope of extreme scale computing, we spoke with Matsuoka.

Is there a distinction between “data” and “big data?”

Satoshi Matsuoka: Of course. In fact, I categorize “simple data”, “big data” and “extreme big data” as three different domains.

“Big data” implies two principal characteristics. One is from a semantic perspective: large data sets are collected in a rather unbiased fashion, and one then tries to extract some meaningful correlative information out of them, using various methods such as data mining, deep learning, and graph analytics. The other is from a system perspective: the data volume, bandwidth, etc., are too large to be processed with conventional machines, even those geared for traditional databases. The system components, both hardware and software, need enhancements in order to support the increased level of processing. In this sense, big data’s “super data processing” is to normal data processing as supercomputing is to normal computing.
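
To make the semantic side concrete: collect observations without a prior hypothesis, then mine them for correlations. Below is a minimal, purely illustrative Python sketch of that pattern; the data and the planted correlation are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical, unbiased collection: rows are observations, columns are signals.
    data = rng.normal(size=(10_000, 4))
    data[:, 1] += 0.8 * data[:, 0]          # plant one real correlation

    corr = np.corrcoef(data, rowvar=False)  # pairwise Pearson correlations
    upper = np.abs(np.triu(corr, k=1))      # ignore the diagonal and lower half
    i, j = np.unravel_index(upper.argmax(), upper.shape)
    print(f"strongest correlation: columns {i} and {j}, r = {corr[i, j]:.2f}")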

By extreme big data we mean that the data volumes, as well as the computational needs, become so big that a simple extension of conventional big data processing architectures would no longer be feasible and would require convergence with supercomputing technologies and platforms.

How is big data relevant to the HPC space, and how has the term evolved over time? Is it something different from what used to be called “data-intensive computing”?

Matsuoka: In some sense HPC has been the pioneer of big data from the days of data-intensive computing. Even as far back as 20 years ago, researchers running climate codes were starting to struggle with terabytes of data when the general public was still in the gigabyte days.

That said, the general area now covered by big data is much wider. And with the emergence of new application areas such as genomics, data-intensive computing in HPC has itself broadened in coverage.

How do you envision the convergence between big data and HPC happening?

Matsuoka: What is unique in the current big data trend is the stress on various data analytics algorithms, such as deep learning and graph analytics. This, coupled with various other factors, is driving changes to the HPC hardware and software stack, such as the need for a massive increase in data capacity and bandwidth. By contrast, traditional HPC is trending toward high bandwidth but low memory capacity.
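
A rough illustration of why such analytics stress memory rather than floating-point units: in a breadth-first search over a graph, nearly every step is an irregular memory lookup with almost no arithmetic. A minimal Python sketch, on a hypothetical toy graph:

    from collections import deque

    def bfs_levels(adj, source):
        # The inner loop is dominated by irregular reads of adj[v]:
        # memory capacity and bandwidth, not flops, bound this analytic.
        level = {source: 0}
        frontier = deque([source])
        while frontier:
            v = frontier.popleft()
            for w in adj[v]:
                if w not in level:
                    level[w] = level[v] + 1
                    frontier.append(w)
        return level

    # Hypothetical toy graph: edges 0-1, 0-2, 1-3.
    print(bfs_levels({0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}, 0))
    # {0: 0, 1: 1, 2: 1, 3: 2}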

But since HPC also suffers from a lack of memory capacity, the convergence at the hardware level will mostly be in the area of designing capacity-friendly deep memory hierarchies. This applies both to memory depth within a node, using new memory technologies and associated processor architectures, and to memory width across nodes, requiring extensive use of optics to support high bandwidth and low latency.
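
One software-side consequence of such hierarchies, sketched very loosely below: keep the working set in the fast tier and stream the capacity tier through it in chunks. The memory-mapped file stands in for the capacity tier; the file name and sizes are hypothetical.

    import numpy as np

    # Stand-in for the capacity tier: a large array backed by disk.
    big = np.lib.format.open_memmap("capacity_tier.npy", mode="w+",
                                    dtype=np.float64, shape=(1_000_000,))
    big[:] = 1.0

    CHUNK = 65_536              # sized to fit the (hypothetical) fast tier
    total = 0.0
    for start in range(0, big.shape[0], CHUNK):
        chunk = big[start:start + CHUNK].copy()  # pull chunk into fast memory
        total += chunk.sum()                     # compute while it is resident
    print(total)                                 # 1000000.0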

From the data side, the needs will be driven by the so-called “broken silos.” Data stored across multiple institutions and disciplines, as well as the proliferation of the Internet of Things, will cause data capacities, and the computation needed for the cross-correlations, to simply explode. We now have big data applications in genomics that run on almost the entire K computer, using the abundance of its one petabyte of memory and 660,000 cores. That is already about 1/5 to 1/7 the entire capacity of Amazon, according to a major IT consulting company’s estimate. Think of the exascale era, when we will have big data apps that require 100 million cores, a number that makes even Google minuscule by comparison.
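
A back-of-envelope check on those figures, using only the numbers quoted above (an illustration, not a measurement):

    # Back-of-envelope scaling from the K computer figures quoted above.
    k_memory_bytes = 1e15                # ~1 PB of memory
    k_cores = 660_000
    per_core = k_memory_bytes / k_cores
    print(f"memory per core: {per_core / 1e9:.1f} GB")   # ~1.5 GB/core

    exa_cores = 100e6                    # the exascale-era figure quoted above
    print(f"same ratio at 100M cores: "
          f"{exa_cores * per_core / 1e15:.0f} PB")       # ~150 PB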

Right now enterprises have their own application use cases for big data, and perhaps even their own understanding of what the term means. With that in mind, how will a convergence of HPC and big data affect those users?

Matsuoka: Industry also adopts HPC but considers those applications distinct from mainstream computing. Through their convergence, enterprise and HPC users will learn to better exploit the combined technologies, which will also allow for the emergence of new applications that tie massive compute to data analytics. We already see examples now in areas such as genomics and design engineering.

Can you please elaborate on Japan’s role in advancing big data technologies and driving their convergence with HPC?

Matsuoka: For Japan, both HPC and big data are high on the agenda for research as well as for industry. It is prudent that we work with other regions of the world that share a similar vision to push both envelopes. Japan’s proposed HPC projects toward exascale will likely place increased emphasis on extreme big data as well.

Now in its 29th year, ISC is the world’s oldest and Europe’s most important conference and networking event for the HPC community, offering a strong five-day technical program focused on HPC technological development, its application in scientific fields, and its adoption in industrial environments.

Over 300 hand-picked expert speakers and 170 exhibitors, consisting of leading research centers and vendors, will greet this year’s ISC attendees. A number of events complement the technical program, including Tutorials, the TOP500 Announcement, Research Paper Sessions, Birds-of-a-Feather (BoF) Sessions, the Research Poster Session, Exhibitor Forums, and Workshops. For more, visit www.isc14.org.
