At SC21, Plenary Wrestles with the Ethics of Mainstreamed HPC

By Oliver Peckham

November 17, 2021

As the panelists gathered onstage for SC21’s first plenary talk, the so-called Peter Parker principle – “with great power comes great responsibility” – cycled across the background slideshow. For the following hour, five panelists confronted this dilemma: with the transformative power of HPC (and, in particular, HPC-enabled AI) increasingly mainstreamed and deployed across all major sectors of society, industry and government, what ethical responsibilities are conferred on whom, and how can those responsibilities be fulfilled?

From left to right: Dan Reed; Cristin Goodwin; Tony Hey; Ellen Ochoa; and Joel Saltz.

The plenary, titled “The Intersection of Ethics and HPC,” featured five speakers: Dan Reed, professor of computer science and senior vice president of Academic Affairs at the University of Utah, who moderated the discussion; Cristin Goodwin, general manager and associate general counsel for Microsoft; Tony Hey, a physicist and chief data scientist for Rutherford Appleton Laboratory; Ellen Ochoa, chair of the National Science Board and former astronaut and director of NASA’s Johnson Space Center; and Joel Saltz, an MD-PhD working as a professor and chair of the Department of Biomedical Informatics at Stony Brook University.

“We know that advanced computing now pervades almost every aspect of science, technology, business, and society. Think about its impacts on financial institutions, e-commerce, communications, logistics, health, national security … And big tech overall has been in the news lately – and not necessarily in a good way,” Reed opened, citing a Pandora’s box of issues ranging from the effects of social media and data breaches to deepfakes and autonomous vehicles.

Unintended consequences and unethical actors

“Technology,” he continued, “is also being exploited at scale,” with governments and criminals leveraging high-power surveillance and intrusion tools to great effect. Beyond the national security applications and implications, HPC has also become tightly tied to competitiveness for businesses and to the state-of-the-art for forward-facing fields like medicine and consumer technology. HPC, Reed pointed out, is just the latest field to go through this tumultuous adolescence: fields like physics and medicine had experienced similar ethical dilemmas as their capabilities expanded.

As a physicist, Hey agreed, invoking perhaps the most famous step change in the ethical onus on a scientific field. “I think the outstanding example is the Manhattan Project, which developed the atomic bomb during the war,” he said. The Manhattan Project, he explained, had been initiated due to the fear of an unethical actor – Hitler – who likely would not have hesitated to use such a weapon were it in his possession. “That was the original motivation. But actually before they tested their nuclear weapons that they’d developed in the Manhattan Project, Germany had surrendered. So the original reason had gone,” he said – leaving the scientists to wrestle with their creation. “And I think, really, you can almost replace ‘nuclear weapon technology’ with ‘AI technology.’ You can’t uninvent it, and we can be ethical about our use, but we’ll have enemies who aren’t.”

These enemies, and wantonly unethical actors generally, were the subject of much discussion. Goodwin, who works to address nation-state attacks at Microsoft, said that while cyberattacks by nation-states were once considered unlikely force majeure events, they’re now commonplace: “Microsoft, between the period of August 2018 and this past July, notified over 20,000 customers of nation-state attacks,” she said.

“In my space, what I see all the time is the paradigm of unethical abuse,” she added, contrasting that with the paradigm of ethical use. “How are you thinking about abuse? The September 12th cockpit door? … What are the ways your technology could be abused?” This issue, she said, was particularly spotlit in the wake of Microsoft’s ill-fated chatbot, Tay.

“Many people know that Microsoft back in 2016 had released a chatbot, and you could interact on Twitter and it would respond back to you,” she recapped. “And in about 24 hours it turned into a misogynistic Nazi and we took it down very, very quickly. And that forced Microsoft to go and look very very closely at how we think about ethics and artificial intelligence. It prompted us to create an office of responsible AI and a principled approach to how we think about that.”

This kind of unanticipated reappropriation or redirection of a technology – somewhat limited in scope, though offensive, when applied to a chatbot – becomes much more ominous as the technologies expand. Saltz advised the audience to “look beyond what [the] specific application is,” citing the relatively straightforward introduction of telehealth – which is now spiraling into the use of AI facial and body recognition to, in combination with medical records, make predictions for a patient’s health during a telehealth appointment. “Pretty much every new technical advance, even if it seems relatively limited, can be extended – and is being extended – to something more major,” he said.


“Pretty much every new technical advance, even if it seems relatively limited, can be extended – and is being extended – to something more major.”


Uninclusive models and unsuitable solutions

On the topic of unintended consequences, several of the panelists expressed concern over the bias that can be introduced – often accidentally – into AI models and their predictions through improper design and training. Ochoa referenced a famous case where an AI model was used to predict recidivism in sentencing, which, she said, resulted in the AI essentially “predicting” where police were deployed. “These things can creep in at various different areas, but they’re being used so broadly – they’re really affecting people’s lives,” she said.

Indeed, much of this bias can be attributed to sample selection. To that point, Saltz spoke on the use of HPC-powered models to aid in diagnosis, prognosis and treatment. “Medicine is a particular font of ethical dilemmas, there’s no doubt – and increasingly, these involve high-end computing and computational abilities,” he said. Do you recommend an intensive, scorched-earth treatment for a patient to give them the best chance of beating cancer, or do you recommend a less taxing treatment because they’re unlikely to require anything more severe to recover? “So, models can predict this,” Saltz said. “On the other hand: can models predict this? This is a major technical issue as well as an ethical issue.” One of the main issues, again: if a particular population was used for training, how do you generalize that model? Should you?


“Models can predict this. On the other hand: can models predict this?”

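The sample-selection problem Saltz raises can be sketched in a few lines. The toy model below is a hypothetical illustration (invented numbers, not any real clinical model): it picks the decision cutoff that works best for the population it was fit on, then applies that cutoff to a second population whose underlying relationship differs – and accuracy drops sharply.

```python
# Toy illustration of sample-selection bias: a model fit on one
# subpopulation can mispredict on another whose feature-outcome
# relationship differs. All numbers here are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, threshold):
    # Outcome depends on a single biomarker, but the true decision
    # threshold differs between the two hypothetical subpopulations.
    x = rng.normal(50, 10, n)
    y = (x > threshold).astype(int)
    return x, y

x_a, y_a = make_group(1000, 55)   # group A: used for training
x_b, y_b = make_group(1000, 45)   # group B: unseen at training time

# "Train": choose the cutoff that best separates outcomes in group A.
cutoffs = np.linspace(30, 70, 401)
best_cut = max((np.mean((x_a > c) == y_a), c) for c in cutoffs)[1]

acc_a = np.mean((x_a > best_cut) == y_a)  # near-perfect by construction
acc_b = np.mean((x_b > best_cut) == y_b)  # markedly worse on group B
print(f"cutoff={best_cut:.1f}  accuracy on A={acc_a:.2f}  on B={acc_b:.2f}")
```

The point is not the specific numbers but the failure mode: a model can validate well on its training population and still be unreliable on a population it never saw – exactly the generalization question Saltz poses.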

Saltz had a couple of ideas for how to ameliorate these problems – starting with the collection of more data from more groups. “As human beings, we should encourage medical research and make our data available,” he said. “There’s a lot of work associated with this, but I think that convincing the citizenry that there really is a potential huge upside to participating in research studies and making their data available will be very important to enhance medical progress.”

Second, he said, was validating the algorithms. “The FDA has a project dedicated to validation of AI in medicine that we’re involved in,” he explained. “The notion is that there’d be a well-defined path so that developers of algorithms can know when their algorithm has been deemed good enough to be reliable.”

“I think that also speaks to the notion that you want a diverse community looking at those issues,” Reed said, “because they will surface things that a less diverse community might not.” Recommendations, too, can be asymmetrical: Hey explained how fine-grained tornado prediction enabled disaster agencies to recommend fewer evacuations along a more specific path, but that for groups that might need longer to evacuate – such as people with disabilities, or older people – that more targeted, quicker-response approach might be unsuitable. “These things require great consideration of the people who are affected,” he said.

Unfathomable explanations and unrepresentative gatherings

One core problem pervades nearly all efforts to reinforce ethics in HPC and AI: comprehension.

“We are a vanishingly small fraction of the population – so how do we think about informed debate and understanding with the broader community about these complex issues?” Reed said. “Because explaining to someone that, ‘this is a multidisciplinary model with some abstractions based on AI and some inner loops, and we’ve used a numerical approximation technique with variable precision arithmetic on a million-core system with some variable error rate, and now talk to me about whether this computation is right’ – that explanation is dead on arrival to the people who would care about how these systems are actually used.”


“That explanation is dead on arrival to the people who would care about how these systems are actually used.”


Goodwin said that getting users, stakeholders and the general public to understand the implications of technological developments or threats was something that Microsoft had been wrestling with for some time. “We have context analysts that help us simplify the way we talk about what we’ve learned so that communities that are not technical or not particularly comfortable with technical terms can consume that,” she said. “What we believe is that you can’t have informed public policy if you can’t take the technical detail of an attack and make it relatable for those who need to understand that.”

When talking about communication between HPC or AI insiders and the general public, of course, it’s important to note the differences between those two groups – differences that span demographics, not just credentials. “The attendees at this conference are not broadly representative of our population,” Reed said, gently, looking out at the audience.

Ochoa followed up on that thread, discussing efforts to fold in the “missing millions” that are often left unrepresented by gatherings of or decisions by technically skilled, demographically similar experts.

“We try to make sure we’re not doing anything discriminatory, right?” she said. “But ‘welcoming’ is actually much broader than that.”
