Researchers Link Supercomputer Simulations and Material Fabrication to Advance Light-related Devices

March 14, 2023 — Scientists constantly identify and qualify new materials, seeking substances with improved properties for use in everything from cars to space capsules.

An artistic rendering of a novel nitride perovskite predicted by a team at Argonne National Laboratory that designs new materials to harvest solar energy. Image credit: Viet-Anh Ha.

Getting those compounds out into the world, however, is arduous, says the University of Texas at Austin’s Feliciano Giustino. “People realized that from a new material being discovered in a lab to using it in devices – everyday consumer kinds of applications – takes several decades,” he says.

The decades-long lag in materials’ acceptance arises mostly because, Giustino explains, “once something has been made in the lab, there are many issues to sort out, and there are problems with scalability in the processes and integration.”

Even devising an idea for a new advanced material stalls the process. “Finding materials is very challenging because we have a periodic table with more than 100 elements that can be used, and combinations of those are on the order of hundreds of millions of materials,” says Giustino, who holds the W. A. “Tex” Moncrief Jr. Chair in Quantum Materials Engineering at UT Austin.

To accelerate the process, the Materials Genome Initiative (MGI) was launched in 2011. The MGI’s goal: to shorten advanced materials’ path to the market at a fraction of the cost by harnessing the power of data and computational tools in concert with experiment.

Giustino focuses on improving materials that can be used in energy-related applications. For example, he and his colleagues received an ASCR Leadership Computing Challenge allocation on the Argonne Leadership Computing Facility’s petascale supercomputer, Theta, to design new semiconductors. The work also is supported by the DOE’s Basic Energy Sciences program.

Giustino’s atom-scale simulations incorporate quantum mechanics, the strange physics that governs matter at the tiniest scales. That’s a change from decades ago, when that approach was merely an intellectual exercise in fundamental science. Now, Giustino says, “we have equations that can be solved to predict properties of real materials with an accuracy that can match experimental data, even before experiments are realized.”

The work starts with a hypothesis for a new material that may or may not exist yet. The researchers then calculate how the proposed material would behave.

First: computing the material’s stability. “If somebody makes it, will it decompose into some byproducts?” Giustino asks. If computations suggest a material would be stable, the researchers move on to other properties. If it’s a solar cell material, for example, researchers would want to know whether it efficiently absorbs light and converts it into electricity.
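
In spirit, that first screen is simple bookkeeping: compare the candidate’s computed energy with the energies of the phases it could fall apart into. The short Python sketch below illustrates the idea with made-up compounds and energies; it is not the team’s actual workflow, which rests on full quantum-mechanical calculations.

    # Illustrative stability screen: would a hypothetical compound "ABX3"
    # decompose into simpler phases AX and BX2? All energies are invented
    # placeholders (eV per formula unit), not real calculated values.
    candidate_energy = -21.40                        # hypothetical ABX3
    competing_phases = {"AX": -7.05, "BX2": -14.20}  # hypothetical byproducts

    decomposition_energy = candidate_energy - sum(competing_phases.values())

    if decomposition_energy < 0:
        print(f"Stable by {-decomposition_energy:.2f} eV; worth pursuing.")
    else:
        print(f"Would decompose; unstable by {decomposition_energy:.2f} eV.")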

Feliciano Giustino

More specifically, Giustino determines if the material absorbs light in the solar-emission spectrum, which is mostly visible and infrared light, and how efficiently it could convert those photons into electrical currents. “All of this can be translated into very clean mathematical requirements.”
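
The band gap is the cleanest of those requirements: it sets the longest wavelength a material can absorb. A minimal sketch, assuming a target window of roughly 1.0 to 1.8 electron volts for a single-junction solar absorber (the exact bounds depend on the application), could look like this; the candidate gaps are invented for illustration.

    # Convert a band gap to its absorption-onset wavelength and check it
    # against an assumed solar-absorber window of ~1.0-1.8 eV.
    HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

    def absorption_onset_nm(gap_ev):
        """Longest wavelength (nm) a material with this gap can absorb."""
        return HC_EV_NM / gap_ev

    for name, gap_ev in {"candidate A": 1.35, "candidate B": 3.10}.items():
        verdict = "good solar match" if 1.0 <= gap_ev <= 1.8 else "poor solar match"
        print(f"{name}: {gap_ev:.2f} eV gap, absorbs below "
              f"{absorption_onset_nm(gap_ev):.0f} nm -> {verdict}")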

With those equations in hand, calculations can begin. “To predict these properties at the atomic scale, you have to solve equations similar to the Schrödinger equation of quantum mechanics, and for real materials that becomes very challenging,” Giustino says. “So facilities like Argonne National Lab are essential to carry out the simulation because these calculations cannot be performed on a laptop or a desktop.”
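
A back-of-the-envelope estimate shows why. In the widely used plane-wave formulation of these equations, merely storing the electronic wavefunctions of a few-hundred-atom supercell can take tens of gigabytes; the inputs below are illustrative assumptions, not parameters from the Argonne runs.

    # Rough memory estimate for the plane-wave wavefunctions of one supercell.
    # Every input is an illustrative assumption.
    import math

    volume_ang3 = 10_000.0   # assumed volume of a ~500-atom supercell (cubic angstroms)
    ecut_hartree = 25.0      # assumed 50 Ry plane-wave cutoff
    n_bands = 2_000          # assumed number of electronic bands
    n_kpoints = 4            # assumed Brillouin-zone sampling points

    volume_bohr3 = volume_ang3 * 1.88973**3
    # Standard count of plane waves below the kinetic-energy cutoff (atomic units)
    n_planewaves = volume_bohr3 * (2 * ecut_hartree) ** 1.5 / (6 * math.pi**2)

    gigabytes = n_planewaves * n_bands * n_kpoints * 16 / 1e9  # complex doubles
    print(f"~{n_planewaves:,.0f} plane-wave coefficients per band")
    print(f"~{gigabytes:,.0f} GB just to hold the wavefunctions")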

Designing and calculating a potentially beneficial material’s properties is just the start. If Giustino’s team discovers something interesting, it connects with experimental groups that take the next step: attempting to create the material in the lab.

If absorbing light can make electricity, the reverse is also possible. A light-emitting diode (LED), for example, performs this trick. Giustino worked on an LED based on lead-iodide perovskites, which efficiently turn electricity into light. That material, though, suffered from a crucial problem: lead’s neurotoxicity.

Giustino’s team accepted the challenge of designing a similar lead-free material. “We basically tried to replace lead by checking every single element,” he says, “and that led to the discovery of three or four compounds that were good candidates.”
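
A toy version of that element-by-element sweep, filtering candidate cation pairs purely by charge balance in the double-perovskite formula Cs2B'B''Cl6, is sketched below. The oxidation states are common textbook values and the element list is illustrative; the real search judged each candidate with full quantum-mechanical calculations rather than rules of thumb.

    # Toy screen for lead-free double perovskites Cs2(B')(B'')Cl6: keep cation
    # pairs whose charges balance two Cs+ and six Cl-. Oxidation states are
    # common textbook values; the list of elements is purely illustrative.
    from itertools import combinations

    oxidation_state = {
        "Na": 1, "K": 1, "Ag": 1,   # typically +1
        "Zn": 2, "Mg": 2,           # typically +2
        "In": 3, "Sb": 3, "Bi": 3,  # typically +3
    }
    required = 6 - 2  # six Cl- anions minus two Cs+ cations leaves +4 to supply

    for b1, b2 in combinations(oxidation_state, 2):
        if oxidation_state[b1] + oxidation_state[b2] == required:
            print(f"Cs2 {b1} {b2} Cl6 is charge balanced")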

One of those, a perovskite composed of cesium, indium, silver and chlorine (Cs2InAgCl6), met the desired criteria, and Giustino and his colleagues synthesized it. As the team noted, adding sodium produced a material with “extraordinary photoluminescence in the visible range.” The new material makes stable, single-emitter-based LEDs that convert nearly all the energy into white light.

The enormous volume of calculations required to simulate new materials’ properties will push even further on new HPC systems. Giustino is preparing to extend his work to Aurora, an exascale system the Argonne Leadership Computing Facility is installing. That computer, more than a hundred times faster than Theta, will combine standard processors with graphics processing units to accelerate calculations.

“We are making an effort to port our software to enable it for Aurora,” Giustino says. “For that, the experts at Argonne are extremely valuable because they will have deployed the machine, tested it and know the bottlenecks that we should look into to improve our codes.”

But a more advanced computer is not enough. “All these machines are awesome, but everybody needs to understand that it’s not just about buying a computer,” Giustino explains. “It’s about training the personnel and having enough funding to make it possible for software to evolve at the same pace.”

Not far ahead, Giustino envisions a more automated workflow. “I think we might be able to deliver integrated solutions that really don’t compromise on accuracy,” he says. “Then we can make a real impact on material science, and that’s the ultimate goal.”


Source: U.S. Dept. of Energy Office of Science
