Jay Lofstead from Sandia National Laboratories and Jakob Luettgau from the University of Tennessee gave a highly interactive session on Ethics in AI and High Performance Computing at the International Supercomputing Conference (ISC) in Hamburg.
By and large, rather than imposing their own views on the session and thereby on the audience, the moderators solicited questions and concerns from the audience with a roaming mic.
I started the audience Q&A by asking Lofstead about the ethical considerations around the adoption of generative AI, wondering how ethics could hope to prevail over the sheer amount of capital being poured into the technology, with so many companies eager to build generative AI into their core workflows.
“How much potential legal liability are some of these companies exposing themselves to?” Lofstead asked in response. “Because they are doing direct willful copyright infringement. At $150,000 per infringement, even if there is 1% or less of their model training infringing, they’re still in the tens of trillions of dollars in legal liability.”
“So to me it’s kind of entertaining they are exposing themselves to the risk of having to give their entire company to the government in penalties, just because they refuse to say what they’re doing,” Lofstead continued. “There are a lot of issues in what they are doing legally, and more issues in trying to deal with the impacts on artists, in that they are willfully taking things they don’t have license to use yet.”
But Lofstead was, of course, concerned about more than the legal issues.
“The other impacts are how this is going to change society ultimately. I don’t think there is now ever going to be a time where we don’t have them,” Lofstead said. “Trying to resolve the legal liability issues, and then the societal issues about what to do with artists and really the creative types in general, is just something that really needs to be resolved.”
From there, the conversation went to the roaming mic, with the audience exploring a number of topics. A loosely organized selection of interesting quotes is included below.
On HPC’s ethical responsibilities (or lack thereof): “We should look at ethics in HPC much more in terms of whether we are enabling evil things to happen.”
“Evil is more of a society wide question, not an HPC question. It is not up to HPC to determine what is evil and what is not.”
“What is the scope of HPC in ethics? When thinking about commercial opportunities – how can one control which commercial opportunities are going to leverage HPC? There is only so much within HPC’s control over where HPC ultimately ends up being used.”
“This isn’t a research field problem. All HPC is doing is supplying the systems. This is an academia problem.”
On weaponization of computing: “Will quantum computing be regulated the way nuclear responsibilities have been regulated throughout HPC’s history? With emerging quantum/HPC clusters forming around Europe and the world, will they be globally regulated as the nuclear community is regulated? What is keeping quantum research from becoming a weapon?”
On considering other regions of the world: “There are going to be different regional, societal, cultural values imposing their own interpretation of ethics. How will those regional values be balanced within a global standard?”
“What obligation do we have to other parts of the globe that do not have access to HPC?”
“What can we do as a community to help those parts of the globe that do not have access?”
On lessons from other fields: “What can we learn from other fields? We’re missing voices in this discussion from other fields outside of science. We would be better informed to bring those from humanities into the discussion.”
On HPC’s carbon footprint: “Academic research should be prioritized in large-scale HPC systems, as these systems are looking for answers with scientific rigor. HPC can be abused by those focusing on advertising, marketing, and private enterprise. We shouldn’t worry about the ecological footprint of HPC as long as it is science- and academically focused, as these foci serve a useful purpose.”
On ever-larger HPC systems: “Do we really need bigger systems year over year over year?”
“Not sure we’ll much longer have the power or capital to develop our own individual solutions.”
“Isn’t the transition to AI- and ML-driven solutions over an expanding HPC footprint already happening?”
By design, the moderators (for the most part) didn’t attempt to resolve these questions or guide the concerns toward resolution. The audience’s questions and concerns remain open-ended, left for the HPC community to resolve.