Making ‘Parallel Programming’ Synonymous with ‘Programming’

By John E. West

March 21, 2008

This week Intel and Microsoft announced their intention to fund two new university-based research centers focused on transforming the way programmers make use of multicore chips, and in the process enabling a whole new class of applications. The companies are optimistic that this effort will form the core of a radical transformation in the ways we use technology. All of this goodness will come from new ways to do something we’ve been focused on for the past 40 years: coercing more than one processing unit to work together to accomplish a single task.

The goal of the effort is to focus leading academic teams on the problem of effectively programming multicore processors. The research will focus on applications, architecture, and operating systems software as well as the software support infrastructure (compilers, languages, and so on) needed to express parallel work. An interesting aspect of this particular effort is that it brings together the leading hardware and software platforms in the market to look at the total solution.

In an interview with HPCwire, Katherine Yelick, one of the principal investigators on the UC Berkeley team, said of the relationship, “This is one of the first times in my career when it actually feels like the major processor manufacturers might actually listen to people in terms of what they would like to make it easier to write parallel programs, or easier to get performance out of them.”

As HPCwire readers, you are probably focused on high performance technical computing, and may use, provision, or build computers with at least hundreds of sockets. The principals in this project were careful to emphasize that HPTC is not the focus of this effort, and you should not expect MPI 3.0 to rise out of one of the centers. The focus is on mainstream computing and applications. In fact, that word, “mainstream,” is repeated again and again in the official releases on the project.

The mainstream focus puts the emphasis on single-socket parallel programming. As Andrew Chien, vice president of the Corporate Technology Group and the director of Intel Research, said during the teleconference, “a lot of the focus around how you deliver the promise of parallelism to a broad array of platforms in everything from servers down to laptops and small mobile devices is a lot about single socket parallelism, and that really is the primary focus of the UPCRC program.”

I would expect that the research developed by these centers will spur advancements in HPTC — after all, we’re all using the same chips, and some of the issues one faces in coordinating work among 100 cores on a single chip come up again when you connect 100 such chips together. In response to a question from the Seattle Post-Intelligencer on Tuesday about who would own the intellectual property rights to the research produced at the two universities, both Microsoft and Intel emphasized their commitment to open-sourcing the results, so the HPTC community should have access to much of this research as it develops.
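To make that overlap concrete, here is a minimal sketch (my own illustration, not code from the announcement) of the decompose/compute/combine pattern using C++ threads on a single multicore chip. A message-passing code spanning many such chips follows the same shape; only the final reduction moves from a loop over threads to a reduction across nodes.

    // Minimal sketch: parallel sum on one multicore chip.
    // The same decompose/compute/combine structure underlies codes
    // that span many chips via message passing.
    #include <algorithm>
    #include <cstddef>
    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        const std::size_t n = 1000000;
        std::vector<double> data(n, 1.0);

        const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
        std::vector<double> partial(workers, 0.0);
        std::vector<std::thread> pool;

        // Decompose: give each core a contiguous slice of the data.
        for (unsigned w = 0; w < workers; ++w) {
            pool.emplace_back([&, w] {
                const std::size_t lo = w * n / workers;
                const std::size_t hi = (w + 1) * n / workers;
                partial[w] = std::accumulate(data.begin() + lo, data.begin() + hi, 0.0);
            });
        }
        for (auto& t : pool) t.join();

        // Combine: reduce the per-core partial sums. A code running across
        // 100 chips would do the same step with a cross-node reduction.
        const double total = std::accumulate(partial.begin(), partial.end(), 0.0);
        std::cout << "sum = " << total << "\n";
        return 0;
    }

The partitioning, load-balancing, and reduction concerns in those few lines are exactly the ones that reappear, at larger scale, when the cores sit on different chips.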

The plan announced on Tuesday will devote $10 million from Intel and Microsoft to each of two Universal Parallel Computing Research Centers (UPCRCs), a total investment of $20 million over five years. The centers were selected from a pool of 25 universities in a competitive process, and both awardees have a long history of IT innovation.

The first center, to be housed at the University of California at Berkeley, will be headed by David Patterson, one of the authors of The Landscape of Parallel Computing: A View from Berkeley and one of the pioneers of RISC and RAID. The second center will be led by Marc Snir and Wen-mei W. Hwu at the University of Illinois at Urbana-Champaign. Snir is a former head of the computer science department at UIUC and, before that, initiated and led the IBM Blue Gene project while at TJ Watson. Hwu is the current chair of ECE at UIUC and director of the OpenIMPACT project. Both universities are adding their own funds to the effort, with UIUC chipping in $8 million; UC Berkeley has applied for $7 million from the state of California.

Yelick outlined the UC center’s focus across software, architecture, operating systems, and correctness. The software work is divided into two layers, “…what we call the productivity layer, which we think is for most programmers to use, and an efficiency layer, which is for the parallelism and performance experts.” The productivity layer will use abstractions to hide much of the complexity of parallel programming, while the efficiency layer will let experts get at the details for maximum performance. During the teleconference, Patterson broke these two audiences more colorfully into the “programming masses” and “ninja programmers.”
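As a rough illustration of the two-layer idea (my own hypothetical sketch, not UPCRC code), the productivity layer might look like a single call that states what to do per element, while the efficiency layer is where an expert decides how that work is actually scheduled:

    // Hypothetical sketch of the two-layer split, in C++.
    #include <algorithm>
    #include <cstddef>
    #include <functional>
    #include <iostream>
    #include <thread>
    #include <vector>

    // Productivity-layer analogue: the caller states *what* to do per element;
    // the library decides how many threads to use and how to split the range.
    void parallel_for(std::size_t n, const std::function<void(std::size_t)>& body) {
        const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
        std::vector<std::thread> pool;
        for (unsigned w = 0; w < workers; ++w) {
            pool.emplace_back([&, w] {
                // Efficiency-layer territory: a performance expert would tune
                // this simple strided split, or swap in work stealing, chunking,
                // or NUMA-aware placement.
                for (std::size_t i = w; i < n; i += workers) body(i);
            });
        }
        for (auto& t : pool) t.join();
    }

    int main() {
        std::vector<double> x(1000000);
        // The "programming masses" view: one statement of intent, no threads in sight.
        parallel_for(x.size(), [&](std::size_t i) { x[i] = static_cast<double>(i) * 0.5; });
        std::cout << x.back() << "\n";
        return 0;
    }

The point of the split is that most programmers only ever see the one-line call in main(), while the “ninja programmers” own everything inside parallel_for.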

Snir indicated that the software portion of the UIUC center’s focus will be much more on the programming masses. As he put it during the teleconference, the goal for this effort is to “make ‘parallel programming’ synonymous with ‘programming’.”

At $20 million, this project is billed by the participants as the “first joint industry and university research alliance of this magnitude in the United States focused on mainstream parallel computing.” Fair enough. But relative to the scale of the problem they’re trying to solve, and to the scale of the potential markets they hope to tap, the investment seems small to me. On the other hand, the technology industry, and especially the information technology industry, has a history of making big advancements from small projects. The several million dollars invested in ARPANET in the late 1960s is roughly equivalent to $20 million today, and by most accounts that investment paid off pretty well.

John Markoff, writing in the New York Times on Wednesday, said that executives from Intel and Microsoft told him this research was a step toward filling the funding void created when DARPA began shifting its money away from universities and toward military and classified projects in 2001. Dan Reed, director of scalable and multicore computing at Microsoft, who along with Intel’s Andrew Chien will help manage the two centers, is quoted in that article as saying, “The academic community has never really recovered from DARPA’s [sic] withdrawal.”

While it’s hard to argue that any step toward closing the science funding gap is a bad step, this is really just a drop in a very large, empty bucket. Peter Harsha of the Computing Research Association, writing at the CRA’s Policy Blog, also on Wednesday, puts the decline in DARPA funding to universities at $91 million a year in unadjusted dollars between 2001 and 2004, with anecdotal indications that the gap has widened in the years since.

The goal of the UPCRC project is ambitious, and as one would expect, the language around the announcement of the initiative was full of hope and hype. Intel’s Andrew Chien said that this effort is expected to “help catalyze the long-term breakthroughs that are needed to enable dramatic new applications.” A similarly enthused Tony Hey, corporate vice president of External Research at Microsoft Research, said that they “plan to explore the next generation of hardware and software to unlock the promise and the power of parallel computing and enable a change in the way people use technology.” Heady stuff.

We can forgive some of the hype as necessary to get attention in an increasingly target-rich news feed. And we shouldn’t forget that computers have indeed dramatically transformed how we work and play, at least in the Western world. But, really. Let’s all take a deep breath and get to work rather than tossing love balloons into the air about how technology will finally lift us from the drudgery of the human condition and install us once and for all in a permanent state of joy.
