At SC21, Plenary Wrestles with the Ethics of Mainstreamed HPC

By Oliver Peckham

November 17, 2021

As the panelists gathered onstage for SC21’s first plenary talk, the so-called Peter Parker principle – “with great power comes great responsibility” – cycled across the background slideshow. For the following hour, five panelists confronted this dilemma: with the transformative power of HPC (and, in particular, HPC-enabled AI) increasingly mainstreamed and deployed across every major sector of society, industry and government, what ethical responsibilities fall on whom, and how can those responsibilities be fulfilled?

From left to right: Dan Reed; Cristin Goodwin; Tony Hey; Ellen Ochoa; and Joel Saltz.

The plenary, titled “The Intersection of Ethics and HPC,” featured five speakers: Dan Reed, professor of computer science and senior vice president of Academic Affairs at the University of Utah, who moderated the discussion; Cristin Goodwin, general manager and associate general counsel for Microsoft; Tony Hey, a physicist and chief data scientist for Rutherford Appleton Laboratory; Ellen Ochoa, chair of the National Science Board and former astronaut and director of NASA’s Johnson Space Center; and Joel Saltz, an MD-PhD working as a professor and chair of the Department of Biomedical Informatics at Stony Brook University.

“We know that advanced computing now pervades almost every aspect of science, technology, business, and society. Think about its impacts on financial institutions, e-commerce, communications, logistics, health, national security … And big tech overall has been in the news lately – and not necessarily in a good way,” Reed opened, citing a Pandora’s box of issues ranging from the effects of social media and data breaches to deepfakes and autonomous vehicles.

Unintended consequences and unethical actors

“Technology,” he continued, “is also being exploited at scale,” with governments and criminals leveraging high-power surveillance and intrusion tools to great effect. Beyond the national security applications and implications, HPC has also become tightly tied to competitiveness for businesses and to the state of the art in forward-facing fields like medicine and consumer technology. HPC, Reed pointed out, is just the latest field to go through this tumultuous adolescence: fields like physics and medicine experienced similar ethical dilemmas as their capabilities expanded.

As a physicist, Hey agreed, invoking perhaps the most famous step change in the ethical onus on a scientific field. “I think the outstanding example is the Manhattan Project, which developed the atomic bomb during the war,” he said. The Manhattan Project, he explained, had been initiated due to the fear of an unethical actor – Hitler – who likely would not have hesitated to use such a weapon were it in his possession. “That was the original motivation. But actually before they tested their nuclear weapons that they’d developed in the Manhattan Project, Germany had surrendered. So the original reason had gone,” he said – leaving the scientists to wrestle with their creation. “And I think, really, you can almost replace ‘nuclear weapon technology’ with ‘AI technology.’ You can’t uninvent it, and we can be ethical about our use, but we’ll have enemies who aren’t.”

These enemies, and wantonly unethical actors generally, were the subject of much discussion. Goodwin, who works to address nation-state attacks at Microsoft, said that while cyberattacks by nation-states were once considered unlikely force majeure events, they’re now commonplace: “Microsoft, between the period of August 2018 and this past July, notified over 20,000 customers of nation-state attacks,” she said.

“In my space, what I see all the time is the paradigm of unethical abuse,” she added, contrasting that with the paradigm of ethical use. “How are you thinking about abuse? The September 12th cockpit door? … What are the ways your technology could be abused?” This issue, she said, was particularly spotlit in the wake of Microsoft’s ill-fated chatbot, Tay.

“Many people know that Microsoft back in 2016 had released a chatbot, and you could interact on Twitter and it would respond back to you,” she recapped. “And in about 24 hours it turned into a misogynistic Nazi and we took it down very, very quickly. And that forced Microsoft to go and look very, very closely at how we think about ethics and artificial intelligence. It prompted us to create an office of responsible AI and a principled approach to how we think about that.”

This kind of unanticipated reappropriation or redirection of a technology – somewhat limited in scope, though offensive, when applied to a chatbot – becomes much more ominous as the technologies expand. Saltz advised the audience to “look beyond what [the] specific application is,” citing the relatively straightforward introduction of telehealth – which is now spiraling into the use of AI facial and body recognition that, in combination with medical records, makes predictions about a patient’s health during a telehealth appointment. “Pretty much every new technical advance, even if it seems relatively limited, can be extended – and is being extended – to something more major,” he said.


“Pretty much every new technical advance, even if it seems relatively limited, can be extended – and is being extended – to something more major.”


Uninclusive models and unsuitable solutions

On the topic of unintended consequences, several of the panelists expressed concern over the bias that can be conferred – often accidentally – to AI models and their predictions through improper design and training. Ochoa referenced a famous case where an AI model was used to predict recidivism in sentencing, which, she said, resulted in the AI essentially “predicting” where police were deployed. “These things can creep in at various different areas, but they’re being used so broadly – they’re really affecting people’s lives,” she said.

Indeed, much of this bias can be attributed to sample selection. To that point, Saltz spoke on the use of HPC-powered models to aid in diagnosis, prognosis and treatment. “Medicine is a particular font of ethical dilemmas, there’s no doubt – and increasingly, these involve high-end computing and computational abilities,” he said. Do you recommend an intensive, scorched-earth treatment for a patient to give them the best chance of beating cancer, or do you recommend a less taxing treatment because they’re unlikely to require anything more severe to recover? “So, models can predict this,” Saltz said. “On the other hand: can models predict this? This is a major technical issue as well as an ethical issue.” One of the main issues, again: if a particular population was used for training, how do you generalize that model? Should you?


“Models can predict this. On the other hand: can models predict this?”


Saltz had a couple of ideas for how to ameliorate these problems – starting with the collection of more data from more groups. “As human beings, we should encourage medical research and make our data available,” he said. “There’s a lot of work associated with this, but I think that convincing the citizenry that there really is a potential huge upside to participating in research studies and making their data available will be very important to enhance medical progress.”

Second, he said, was validating the algorithms. “The FDA has a project dedicated to validation of AI in medicine that we’re involved in,” he explained. “The notion is that there’d be a well-defined path so that developers of algorithms can know when their algorithm has been deemed good enough to be reliable.”

“I think that also speaks to the notion that you want a diverse community looking at those issues,” Reed said, “because they will surface things that a less diverse community might not.” Recommendations, too, can be asymmetrical: Hey explained how fine-grained tornado prediction enabled disaster agencies to recommend fewer evacuations along a more specific path, but that for groups that might need longer to evacuate – such as people with disabilities, or older people – that more targeted, quicker-response approach might be unsuitable. “These things require great consideration of the people who are affected,” he said.

Unfathomable explanations and unrepresentative gatherings

One core problem pervades nearly all efforts to reinforce ethics in HPC and AI: comprehension.

“We are a vanishingly small fraction of the population – so how do we think about informed debate and understanding with the broader community about these complex issues?” Reed said. “Because explaining to someone that, ‘this is a multidisciplinary model with some abstractions based on AI and some inner loops, and we’ve used a numerical approximation technique with variable precision arithmetic on a million-core system with some variable error rate, and now talk to me about whether this computation is right’ – that explanation is dead on arrival to the people who would care about how these systems are actually used.”


“That explanation is dead on arrival to the people who would care about how these systems are actually used.”


Goodwin said that getting users, stakeholders and the general public to understand the implications of technological developments or threats was something that Microsoft had been wrestling with for some time. “We have context analysts that help us simplify the way we talk about what we’ve learned so that communities that are not technical or not particularly comfortable with technical terms can consume that,” she said. “What we believe is that you can’t have informed public policy if you can’t take the technical detail of an attack and make it relatable for those who need to understand that.”

When talking about communication between HPC or AI insiders and the general public, of course, it’s important to note the differences between those two groups – differences that span demographics, not just credentials. “The attendees at this conference are not broadly representative of our population,” Reed said, gently, looking out at the audience.

Ochoa followed up on that thread, discussing efforts to fold in the “missing millions” that are often left unrepresented by gatherings of or decisions by technically skilled, demographically similar experts.

“We try to make sure we’re not doing anything discriminatory, right?” she said. “But ‘welcoming’ is actually much broader than that.”
