SC13 Wrapup: Supercomputing’s Top Themes

By Nicole Hemsoth

November 24, 2013

For those of us who traveled to Denver for SC13, it’s now back to “normal” as the year in high performance computing begins its slow descent into relative silence before a fresh start in 2014.

Sitting down to plow through the pile of news items and pluck out the top announcements seemed impossible without first discussing some of the broader trends and themes; they beg to be heard. The hard news breakdown can be found here, but context is everything during a time of flux, and each one of our newsy picks embodies at least one of these themes.

Outside of some of the vendors and organizations who had a great showing last week, there are a few topics and specific machines worth mentioning as topical “best in show” picks.  Forgive the rare “personal pronouning” I’m about to do for once, but with so many great conversations with you all last week, it’s hard to leave those experiences out.

For now, we shall begin this thematic breakdown with the topic that you expected…but with some (possibly) unexpected details about its relative weight during the show…

Exascale

Let me guess…you probably saw this topic at the top of the list and said, “well, of course”… While this might not be a surprise because of its meaning for the HPC community (in terms of research and commercially-driven technology development, funding drive, and competitive appeal), in some ways this topic wasn’t the star of the show.

Let’s just be honest here. Ever since China topped the Top500 charts with what some in the U.S. are calling its “insurmountably” high performance system, the momentum and excitement around the “race” seem to have cooled. It’s hard to get excited about a dash to a finish line when there are thousands of yards between the runners.

But it’s just a matter of timing and technology refreshes, say many. The introduction of innovative processor, memory and interconnect technologies, especially around 2015, is set to breathe new life into the race, spawning a new set of runners and adding some major ripples to what appears, for now anyway, to be very still waters. In the meantime, it’s slow and steady toward the goal.

This topic of exascale on the U.S. front was not without its own news announcement, however. Early in the week we broke word of a new investment in exascale technologies, this time from the Department of Energy’s Office of Science and the National Nuclear Security Administration (NNSA). The agencies awarded $25.4 million in R&D contracts to “accelerate the development of next-generation supercomputers.”

This new funding effort rests under the DOE’s “DesignForward” initiative, a follow-on to the wider exascale ambitions put forth by the FastForward project. As one might imagine, it involves a number of the “usual suspects” for this sort of project. AMD, Cray, IBM, Intel’s federal division, and NVIDIA are all going to “work to advance extreme-scale, on the path to exascale, computing technology that is vital to national security, scientific research, energy security and the nation’s economic competitiveness.”

The emphasis of the DesignForward contracts is on the development of interconnect technologies architected for energy efficiency, high bandwidth and I/O performance. According to project leaders, “Under the new contract, Intel will focus on interconnect architectures and implementation approaches, Cray on open network protocol standards, AMD on interconnect architectures and associated execution models, IBM on energy-efficient interconnect architectures and messaging models and NVIDIA on interconnect architectures for massively threaded processors.” They note that, “The vendors will collaborate with DOE’s Exascale Co-design Centers to determine how changes in the system architectures will affect how well the scientific applications perform.”

Notice the lack of urgency in the language there… “working to advance”… “on the path to exascale”… but after all, it’s the thought (and money) that counts, right? And there are many who are counting. Counting down to the reality, counting up the number of government dollars that have been pushed toward the efforts, and counting on the fact that the investments will be returned to the public following the sustained focus on supercomputing—some are even counting by twos to keep up with the continued push-back on the projected year.

Interestingly, the technical program’s emphasis on exascale shared the stage with a few other topics of more contemporary appeal, most notably Hadoop (more on that in a moment). Still, the challenges on the energy, programming, reliability and other fronts were explored in great detail by a number of key presenters and served as the topical backdrop for many of the larger conversations and innovations.

Hadoop and Big Data

Let’s all agree that these are not the same thing, even if they are generally lumped into the same conversations.

In fact, last week the resounding sentiment I picked up from numerous non-vendor conversations was that HPC has always been about data and yes, that data has always been big.

While many seem to feel that the attention around big data is driven by the vendor and commercial user communities, there’s no doubt that the tooling, on both the systems and software fronts, is worth the attention this community is starting to pay it. And shouldn’t the big data folks be looking here too? Because, after all…

If your definition of big data revolves around complex datasets (structured versus unstructured), or around data use that needs to think beyond (or even before) MPI, or if there’s just plain too much of it and managing or storing it (off to tape, in memory, in a cloud somewhere) is a challenge, there was likely a lot at SC13 for you. Again, it’s not just about the Hadoopery that so often serves as the focal point. We will hit on a few of the specific announcements around “big data” in the news edition of our SC13 wrapup, but it’s fair to say that every vendor had a story, and often a solid one, about how to manage massive, complex datasets.

With that said, aside from the larger trend of categorizing “big data” as a natural part of HPC (or the reverse, depending on who you ask), Hadoop and MapReduce were at the core of almost as many sessions as exascale, judging by session titles and descriptions. Further, many vendors saved their key announcements for the supercomputing show, even if the audience was tuned for a wider world of technology users. Intel expanded on its Hadoop distro in detail; Cray and others emphasized the role their boxes play for Hadoop workloads with customized hooks; all the storage vendors danced a strange little dance with the topic (when they weren’t busy spinning Lustre around); and Adaptive Computing and others made announcements around how their tech can play nice with the tech world’s biggest buzzword since “big data” itself hit the show floors. It is dizzying, isn’t it?
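Since Hadoop and MapReduce came up in so many of those sessions, it may help to sketch the programming model itself. The toy word count below is plain single-process Python, not Hadoop; the function names are mine and purely illustrative. The point is only to show the map, shuffle and reduce phases that the framework distributes across a cluster:

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (key, value) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all emitted values by key, as Hadoop does
    # between the map and reduce stages.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: collapse each key's list of values into a single result.
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big iron", "big science"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)
```

In Hadoop proper, the shuffle step is what moves data across the network between mapper and reducer nodes, which is part of why interconnects and storage kept coming up in the same conversations on the show floor.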

Actually, some of the most compelling of those “big data” stories came from those you might not expect (or hear as much from). This is especially true on the “orchestration” and management front. Traditional workload management software, for example, is doing double duty (and managing to double its reach for the first time, moving outside of “pure” HPC and into the enterprise) by proving robust enough to scale to some dramatic data demands. We talked at length with Univa, Adaptive Computing, and even a smaller company from France, SysFera, about what they’re doing at the orchestration level to make management of complex data more practical for both scientific and commercial environments. Again, more on that during our news recap.

HPC: It’s Not Just Academia Anymore

To this you could probably argue that it never was, depending on your perspective and current place of employment on the academia/commercial spectrum. But this year, perhaps (far) more than ever, most of what we were hearing from those who are “traditional” HPC vendors is that there is an ever-increasing demand for their goods and services outside of the expected quarters.

The concept of “productizing HPC” is really taking off, and a few vendors seem to manage this split very well while others struggle to wrap their unique technologies in a message with much broader appeal. But let’s face it: now, more than ever, companies with large-scale infrastructure concerns (and that’s almost anyone whose business success hinges on adept data wrangling) are looking to tried and true technologies that are proven at massive scale. And who are they going to learn this from? HPC.

The largest systems on earth, the most robust software to manage all that iron, and the breed of applications, tools and support ecosystems purpose-built to run at mind-boggling core counts (throw in a dash of acceleration) are finally sounding a wake-up call to the rest of the world. The technologies all of you folks are developing have a broader home; look around. And let me assure you, this isn’t a shameless plug when I tell you that HPCwire spun out a new publication this year, EnterpriseTech, for exactly these reasons. HPC is growing up and out. We don’t see a need to divide the community into two pieces (scientific vs. commercial), but the expansion of supercomputing technologies into mainstream large-scale environments is happening fast and deserved a more focused outlet, one that directs its attention to the wider world of the technologies you folks are developing, refining and leading as they trickle down the enterprise ladder. It’s cool. Plain and simple.

We handed out a couple of Editor’s Choice Awards this year simply due to companies’ unique ability to expand some traditional supercomputing technologies into far wider markets. Notable winners there include Cray (which has captured some compelling enterprise customers and managed to take its messaging as a “supercomputing company” to a much bigger stage by listening to the market), SGI (which has managed to fine-tune a message and product line that balances supercomputing/HPC with a much wider commercial appeal), and Univa (which boasts massive commercial growth of a technology born of HPC efforts via Grid Engine). We watched as other companies, including Penguin Computing, tweaked their offerings by listening to what’s going on at the hyperscale/large-scale shops that are asking for Open Compute designs backed by the perceived reliability of a company that’s built large-scale systems. IBM and NVIDIA hooked up in an effort to expand GPU computing to a wider group of potential users. Even tape storage vendors, especially Spectra Logic, have found new life in catering to an expanding array of commercial needs with new tooling. It’s fun to watch, isn’t it?

This is certainly not to say that at SC13 and the shows ahead scientific computing won’t take the topical cake. But it is to say that these tools are going to see an explosion of interest, adoption and, hell, for that matter, press from the wider world of technology. HPC has arrived.

So with so much momentum, potential and exploration possible, this raises another question entirely, one that is its own “top topical pick” from the show…

Where Are All The Startups in HPC?

Seriously.

Each day, the news feed here at HPCwire HQ is flooded with “big data” vendor announcements of x million dollars in series A funding for tooling whose competitive angle is often rather vague and difficult to determine. More database vendors than one can shake a stick at. And why? Because “big data” is sexy. Don’t ask me why, but in a very all-encompassing, hopelessly generalized, technologically fleeting sort of way, it just is.

The real question for the many innovators out there is: how do we bring the sexy back? To HPC, that is, because there was a day when this was all very fancy and special and, yes, sexy.

Dazzling scientific simulations? Yep, we have those. Dramatic feats of massive scale? Check. Theoretical technologies being developed in stealth mode? Ab-so-lutely! So where is the missing link? We’re going to be exploring that throughout 2014. Every hyper-hyped technology lately got its start because it scaled, because it was big, and because it powered the unfathomable. You, holy halls of scientific computing at the national lab scale, have something to learn from them, they’ll say. But they are well aware that you have a great deal to offer. MPI, Lustre, GPU computing: these are filtering in, trickling down from supercomputing mountain. Look out, world!

As the wider vendor and user world wakes up to the fact that the HPC community was doing truly awe-inspiring work before the Hadoop elephant was ever stuffed, and that it’s always been about “big data” on this side of the fence, we’re going to be here to catch that news and push it out. HPC needs investment. These technologies are the only ones proven at large scale. This is our year: send me your stories, your stealth-mode progress, your ideas, your vision, and let’s share HPC with the rest of the world. I have a feeling that none of us have ever been the “cool kids” (sorry if that’s inaccurate, but I know a lot of you… ha!), but this is our chance to take over the technology lunchroom. Know what I mean?

Forward-Looking Processors/Accelerators

If you stay tuned tomorrow for the announcement-based SC13 wrapup, we’ll shed more light on the processor and accelerator news picks, but suffice it to say there were some great “looking ahead” announcements from some surprise vendors, including Convey Computer and Micron.

We sat down for a close-knit briefing with Intel to discuss some of the specifics of the Knights Landing chip, which has the potential to shake up the HPC processing ecosystem; watched NVIDIA roll out more power with its K40; and, as noted above, drew in our breaths at some of the neat ideas coming from processor outliers, including Micron (please do read this), which has done something really interesting in exploiting the inherent parallelism of memory, and Convey, which took a noteworthy dip in the specialty processor pool.

Although it doesn’t necessarily fit neatly into the mix, there was a lot of talk about quantum computing at the show. And of course, wild speculation about whether or not this “thing” from D-Wave can technically be called such, given the entanglement questions. Again, this is an issue we’ll explore more in 2014, but suffice it to say the mainstream media has picked up on this idea in a big way, so expect a plethora of (creatively inaccurate and under-researched) material about the topic. We’ll do what we can to stretch our brains in the coming year to deliver some perspective from its primary research leaders at D-Wave, Google, Lockheed Martin and others.

It’s Lustre’s Year to Shine

Lustre marks a great example of an HPC-born technology that is bound for great things in the larger enterprise world. A few of the forward-looking vendors are taking notice of this momentum and adding it to their offerings for reasons that scale past the orders they’re taking from X National Lab or university.

It seemed to make sense to mention it here because it was such an important part of many vendor offerings and, more importantly, of conversations with the very few potential end users who were cruising the floor shopping for solutions (that’s another topic: where are all the end users at this show, and how do we reel them in?). In the news edition of the SC13 wrapup, coming out tomorrow evening, the vendor spotlight will be on these announcements.

Denver Has Awesome Beer

That is all.

And Now, Talk Amongst Yourselves…

Please send along your thoughts (for publication or fun) about a few other topics that we noticed, including:

  • The range, depth and scale of the technical sessions is something to behold. From sys admins to center directors, it was hard to find someone who wouldn’t find something that appealed. Impossible, really. Kudos to the SC committee that puts these programs together.
  • How many storage vendors are there exactly? And how to differentiate?
  • Did you notice a difference in the show’s size or “bling” due to government shutdown?
  • Those student cluster kids are outstanding. Will you hire them?
  • Who had the best booth in terms of demonstrations?
  • Did it seem like there were more young people milling about than usual (or am I just so old now that everyone under 35 looks 25?).
  • Denver has awesome beer. New Orleans (SC14) is a better place to drink it.