NCSA Industry Conference Recap – Part Two

By Elizabeth Leake

November 26, 2019

Read part one of this conference recap here.

Industry Program Director Brendan McGinty welcomed guests to the second day of the annual National Center for Supercomputing Applications (NCSA) Industry Conference, held October 8-10 on the University of Illinois at Urbana-Champaign (UIUC) campus. One hundred seventy people from 40 organizations attended the invitation-only event.

The program opened with the NCSA director’s address; William Gropp’s talk was titled, “Challenges and Opportunities in the Next Generation of HPC Systems.”

Gropp became NCSA’s fifth director in June 2017 and has served as NCSA’s chief scientist since 2015. He is co-principal investigator (PI) on the Blue Waters supercomputer and PI of the National Science Foundation’s Midwest Big Data Hub. Gropp holds the Thomas M. Siebel Chair in Computer Science (CS) and leads an active research program in the UIUC CS department. Prior to joining NCSA, Gropp held appointments at Yale University, Argonne National Laboratory, the University of Chicago and the Institute for Advanced Computing Applications and Technologies. He holds a Ph.D. in CS from Stanford University.

Gropp opened his talk with a chart from a 2004 issue of Scientific American that traced microprocessor trends from 1993 to where the conventional wisdom of the time expected them to be in 2019. The chart plotted clock speed (gigahertz), transistor count (millions) and gate length (nanometers), illustrating Moore’s Law trends that, at the time, everyone believed would continue.

Moore’s Law predicted that the number of transistors on a microchip would double every two years while the cost was halved. But the expected trajectory began to flatten in 2005, challenging Moore’s Law and marking the end of architectural stability.
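For a back-of-the-envelope sense of what that predicted doubling implied, the growth compounds quickly; the sketch below is illustrative only, assuming a hypothetical 2005-era baseline of roughly 300 million transistors.

    # Illustrative only: ideal Moore's Law projection from a hypothetical baseline.
    # Assumes a 2005-era chip with ~300 million transistors, doubling every 2 years.
    base_year, base_transistors = 2005, 300e6
    for year in range(2005, 2021, 5):
        projected = base_transistors * 2 ** ((year - base_year) / 2)
        print(f"{year}: ~{projected / 1e9:.1f} billion transistors")

Under that idealized curve, a 2005-era part grows to tens of billions of transistors by 2019; the flattening after 2005 is what made the chart Gropp showed so striking.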

“Moore’s Law will fade; features will keep getting smaller, but it will take longer and longer to achieve each reduction in size,” said Gropp. “There won’t be a point (at least not for the next 10 years) when Moore’s Law will be dead (at the end of any improvement), but as it gets harder and harder to achieve the same factors of reduction in size, engineers and scientists will need to innovate other ways to continue to improve performance,” he added.

“This is when new architectures were needed to increase performance; this is when GPUs, highly parallel simpler cores and specialized elements entered the scene,” said Gropp. “Everyone has been pushing toward extreme-scale architectures that are becoming more heterogeneous,” he added. For example, Gropp noted that China’s next-generation systems present a diversity of accelerator choices. The U.S. Department of Energy’s Sierra system features 4,320 nodes of IBM POWER9 CPUs with NVIDIA Volta graphics processing units (GPUs). NCSA’s Deep Learning System has 16 IBM POWER9 nodes, each with four NVIDIA Volta GPUs, plus FPGAs (field-programmable gate arrays).

Gropp noted that it isn’t only processors and platforms that have changed. Since 2005, storage prices have fallen. A proliferation of sensors has created new ways to look at the world, but sensors have also presented challenges with the amount of data they create, and our networks must facilitate larger data transfers.

But the biggest problem, according to Gropp, was the end of software stability. Fortran has been around for more than 60 years; C for nearly 50 and C++ for more than 30. “New architectures and big-data demands have seen the birth of many new programming languages that don’t seem to last as long,” he said, adding, “Note the rise of Python; Perl was the Python of its day.”

Among the consequences is a lack of forward or backward compatibility. “Welcome to version hell!”

While there are advantages to rapid innovation, the disadvantages include difficulty in finding where versions and components intersect. “I consider this a failure of software engineering,” said Gropp.
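One everyday mitigation is to pin the exact versions a project was validated against and fail fast when an environment drifts. Below is a minimal sketch of that kind of guard; the package names and version pins are hypothetical examples, not a recommendation.

    # Minimal sketch: warn when installed packages drift from known-good versions.
    # The names and pins below are hypothetical examples.
    from importlib.metadata import version, PackageNotFoundError

    KNOWN_GOOD = {"numpy": "1.17.2", "scipy": "1.3.1"}
    for pkg, pinned in KNOWN_GOOD.items():
        try:
            installed = version(pkg)
        except PackageNotFoundError:
            print(f"warning: {pkg} is not installed (validated with {pinned})")
            continue
        if installed != pinned:
            print(f"warning: {pkg} {installed} != validated {pinned}")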

The end of Dennard scaling left our algorithms imperfect; they no longer exploit the performance that’s possible. Dennard scaling once ensured the future was predictable, “but that free ride is over,” he added.
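Dennard scaling, roughly stated, held that shrinking a transistor’s dimensions, voltage and current by a factor k left power density unchanged, so each generation could clock faster within the same power envelope. The sketch below works through that idealized textbook arithmetic with normalized values; it is not a model of any real process, and around 2005 voltage stopped scaling, breaking the constant-power-density guarantee.

    # Idealized Dennard scaling arithmetic (normalized units, illustrative only).
    # Dynamic power P = C * V^2 * f; under ideal scaling by factor k,
    # capacitance C and voltage V shrink by 1/k while frequency f grows by k.
    k = 1.4                            # one hypothetical process generation
    C, V, f = 1.0, 1.0, 1.0            # normalized starting values
    C2, V2, f2 = C / k, V / k, f * k   # ideal scaled values
    power_ratio = (C2 * V2**2 * f2) / (C * V**2 * f)  # works out to 1/k^2
    area_ratio = 1 / k**2              # each transistor occupies 1/k^2 the area
    print(f"power per transistor: {power_ratio:.2f}x of previous generation")
    print(f"power density: {power_ratio / area_ratio:.2f}x (constant, ideally)")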

What is NCSA doing to stay ahead of this problem?

“We’re building teams of complementary expertise,” said Gropp. Anchor projects, such as the Center for Artificial Intelligence (AI) Innovation, the Center for Digital Agriculture and the Large Synoptic Survey Telescope (LSST), have drawn specialists with a wealth of skills and strengths. NCSA has a new software directorate that employs more than 30 developers with expertise in many programming languages. This workforce is especially attractive to NCSA Industry partners, who can subcontract skills needed for special projects (contracts range from 0.25 to 6 FTEs and run from six months to multiple years).

NCSA Center for AI Innovation

During the Center for Digital Agriculture kick-off and the first day of the NCSA Industry Conference, it was evident that the 40 organizations represented, including NCSA Industry partners and the tech companies that sponsored the event, are either active or soon to be active with AI, and just about every sector was represented. Therefore, few were surprised to learn that NCSA is launching a new Center for AI Innovation. Co-founders include NCSA Industry Director Brendan McGinty, Research Scientist Eliu Huerta, and Senior Research Scientist Volodymyr Kindratenko. Focused on the needs of industry, research and scholarship, “the new center will serve as a single umbrella for all things AI,” said McGinty. He also mentioned that a special announcement would be made during SC19.

Rising to the demands of the communities it serves, the NCSA Center for AI has been two years in the making. It initially formed with National Science Foundation and Department of Energy (DOE) physics grants, each totaling $1.8 million; today many more disciplines, domains and sectors are represented, including multi-messenger astrophysics, high-energy physics, medicine, agriculture and financial services, to name a few. UIUC faculty affiliates from Agriculture, Electrical and Computer Engineering, Physics, the Beckman Institute, and Carle Clinic will foster an interdisciplinary scholastic AI experience, and they’re partnering with other universities. The advisory board has representation from national laboratories and industry.

SPIN participants. Photo courtesy of NCSA.

The center’s education and outreach initiative is led by Kindratenko, who has found through experience with NCSA’s Students Pushing Innovation program, or “SPIN,” that many undergraduate scholars who enter the program are already acquainted with machine and deep learning.

To test and strengthen their skills, an AI hackathon, sponsored by NVIDIA and co-organized by the Center for AI, the Gravity Group, the Innovative Systems Lab and NCSA Industry, was held immediately prior to the Industry Conference. To prepare, students completed IBM tutorials that acquainted them with the POWER9 platform; workshops continue every Wednesday from 3:00 to 5:00 p.m. at NCSA. “Two days, three problems, five teams, twenty participants, and one dream,” said Huerta.

AI hackathon winners are encouraged to apply for internships with companies at UIUC’s Research Park in the spring, where they can help tackle real-world problems for research and industry partners. “This hands-on experience at the undergraduate level is critically important if we hope to answer the call for an AI-savvy digital workforce,” said Kindratenko.

The balance of day two included a session on emerging technologies chaired by NCSA Technical Program Manager Dan LaPine. Representatives from four tech giants presented case studies where AI methodologies support research:

  • Thomas Henson (Dell Office of the CTO, Data Engineering Advocate) presented “Scaling AI Initiatives from POC to Large Scale Production Deployments.”
  • Mark Fernandez (HPE Americas HPC Technology Officer & Spaceborne Computer Payload Developer) talked about the first HPC system deployed into outer space: “The Dawn of HPC and AI Above the Clouds.”
  • Roger Goff (DDN Business Development Manager & Senior Solutions Engineer) presented “Making the Best Use of Flash for HPC with Lustre.”
  • Tom Gibbs (NVIDIA Director of Developer Relations) presented “The Convergence of HPC and AI for Grand Challenge Science Problems.”

Part one of this conference recap is featured in HPCwire. A summary of the co-located NCSA Center for Digital Agriculture Conference was recently featured in Datanami.

Photos by Leake and NCSA. 

About the Author

HPCwire Contributing Editor Elizabeth Leake is a consultant, correspondent and advocate who serves the global high performance computing (HPC) and data science industries. In 2012, she founded STEM-Trek, a global, grassroots nonprofit organization that supports workforce development opportunities for science, technology, engineering and mathematics (STEM) scholars from underserved regions and underrepresented groups.

As a program director, Leake has mentored hundreds of early-career professionals who are breaking cultural barriers in an effort to accelerate scientific and engineering discoveries. Her multinational programs have specific themes that resonate with global stakeholders, such as food security data science, blockchain for social good, cybersecurity/risk mitigation, and more. As a conference blogger and communicator, her work drew recognition when STEM-Trek received the 2016 and 2017 HPCwire Editors’ Choice Awards for Workforce Diversity Leadership.
