Across the world, solar energy continues to boom, making up an ever-larger share of national energy mixes as costs per megawatt-hour decline. But photovoltaic solar power plants and home installations still require large upfront investments and take a long time to pay for themselves, slowing the growth of renewable energy. Now, researchers from Berkeley Lab, the National Energy Research Scientific Computing Center (NERSC), Carnegie Mellon and other universities have joined forces to leverage the power of exascale computing to improve photovoltaic efficiency.
The researchers are predominantly interested in finding new materials for photovoltaic solar cells that can enable “singlet fission,” a process in which a single absorbed photon generates two excitons rather than one, which would increase the energy efficiency of a panel. Testing new materials for the necessary properties experimentally is a gargantuan task, so the researchers have been using a materials science simulation package called BerkeleyGW to predict those properties across a wide range of materials.
“We can simulate these material properties, use computation to perform screening of the possibilities and pick what we think are the best candidates, then send them to the lab for testing,” said Mauro Del Ben, a research scientist with Berkeley Lab’s Computational Research Division (CRD), in an interview with NERSC. “Since we are looking for excited states in these materials, we need a level of accuracy that goes beyond what’s currently available, and that’s where BerkeleyGW comes in.”
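The screening workflow Del Ben describes can be sketched in a few lines. The following is a minimal, hypothetical illustration, not BerkeleyGW’s actual interface: the material names and energies are invented placeholders, and the filter applies the standard energetic condition for singlet fission, E(S1) ≥ 2·E(T1), i.e. the singlet exciton must carry enough energy to split into two triplet excitons.

```python
# Hypothetical screening sketch: predicted excited-state energies (in eV)
# for invented candidate materials. Real values would come from
# first-principles calculations such as those BerkeleyGW performs.
candidates = {
    # material: (singlet energy E_S1, triplet energy E_T1)
    "material_A": (2.2, 1.0),
    "material_B": (1.8, 1.1),
    "material_C": (2.5, 1.2),
}

def passes_singlet_fission_screen(e_s1, e_t1):
    """Energetic condition for singlet fission: E(S1) >= 2 * E(T1)."""
    return e_s1 >= 2.0 * e_t1

# Keep only the candidates worth sending to the lab for testing.
shortlist = [name for name, (e_s1, e_t1) in candidates.items()
             if passes_singlet_fission_screen(e_s1, e_t1)]
print(shortlist)  # prints ['material_A', 'material_C']
```

In practice the expensive step is computing the excited-state energies themselves, which is exactly where the accuracy of the GW approach, and the cost that motivates the optimization work below, comes in.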
There’s just one problem: BerkeleyGW is computationally intensive, requiring large time allocations on powerful machines. So the research coalition has been working to optimize the code’s performance, improving its parallelization and leveraging accelerators (such as GPUs) to ensure that BerkeleyGW runs as efficiently as possible on ever-larger machines.
“While the GW computational approach implemented in BerkeleyGW is highly accurate, it was often considered expensive in terms of computer time required to run the code,” said Jack Deslippe, principal developer of the BerkeleyGW code. “For this collaboration, our team has optimized BerkeleyGW so that it is not only an accurate predictive tool but also scales to peak performance on modern architectures, which allows researchers to analyze up to several thousands of atoms—something that was previously impossible.”
The researchers are optimizing BerkeleyGW with an eye toward Aurora, which is slated to become the first exascale system in the U.S. when it is delivered to Argonne National Laboratory in 2021. In the interim, they are testing and optimizing the code using a trio of supercomputers: Argonne’s Theta system, NERSC’s Cori system and Oak Ridge National Laboratory’s Summit system, which is currently the most powerful publicly ranked supercomputer in the world.
The NERSC article discussing this research is available on the NERSC website.