In an interesting twist on quantum-inspired work making its way into traditional HPC – and in this case a step further into cloud-based HPC – AWS today introduced Palace, short for PArallel LArge-scale Computational Electromagnetics, a parallel finite element code for full-wave electromagnetics simulations. Palace was first developed at the AWS Center for Quantum Computing to perform large-scale 3D simulations of complex electromagnetics models in the design of quantum computing hardware. While Palace grew out of quantum hardware design, AWS expects it to find use in a much wider range of electromagnetic simulations.
In a blog post published today, AWS researchers[i] wrote, “We are making Palace[ii] freely available on GitHub as an open-source project for electromagnetic modeling workloads, not limited to those in quantum computing, which users can run on systems ranging from their own laptops to supercomputers in the cloud. It was developed with support for the scalability and elasticity of the cloud in mind and to leverage the suite of cloud-based high-performance computing (HPC) products and services available on AWS.”
“[We] built Palace because while there exist many highly performant, open-source tools for a wide range of applications in computational physics, there are few open-source solutions for massively parallel, finite element-based computational electromagnetics. Palace supports a wide range of simulation types: eigenmode analysis, driven simulations in the frequency and time domains, and electrostatic and magnetostatic simulations for lumped parameter extraction. As an open-source project, it is also fully extensible by developers looking to add new features for problems of industrial relevance. Much of Palace is made possible by the MFEM finite element discretization library, which enables high-performance, scalable finite element research and application development,” according to the blog.
AWS says that Palace adds to the ecosystem of open-source software supporting cloud-based numerical simulation and HPC. Palace, which is still an active project, took about two years to develop, according to Sebastian Grimberg, an AWS senior research scientist and one of the blog authors who briefed HPCwire. It’s not uncommon for classical and quantum computing to pass algorithm learnings back and forth for solving specific applications – think optimization, for example. This was not that. This was more a byproduct of quantum researchers thinking about the tools needed for their research.

“We developed this because we saw a need for it that was specific to our work, where we want to design quantum hardware. But we realized there’s a lot of people doing simulation, who are considering very similar physics for different design applications that are outside of quantum computing. We recognized very early on that this (Palace) is very relevant to the community of people running large-scale simulations, large-scale physics-based simulations on HPC systems, whether they’re on premises or in the cloud,” Grimberg said.
Palace’s key features include[iii] (a minimal configuration sketch follows the list):
- Eigenmode calculations with optional material or radiative loss including lumped impedance boundaries. Automatic postprocessing of energy-participation ratios (EPRs) for circuit quantization and interface or bulk participation ratios for predicting dielectric loss.
- Frequency domain driven simulations with surface current excitation and lumped or numeric wave port boundaries. Wideband frequency response calculation using uniform frequency space sampling or an adaptive fast frequency sweep algorithm.
- Explicit or fully-implicit time domain solver for transient electromagnetic analysis.
- Lumped capacitance and inductance matrix extraction via electrostatic and magnetostatic problem formulations.
- Support for a wide range of mesh file formats for structured and unstructured meshes, with built-in uniform or region-based parallel mesh refinement.
- Arbitrary high-order finite element spaces and curvilinear mesh support thanks to the MFEM library.
- Scalable algorithms for the solution of linear systems of equations, including geometric multigrid (GMG), parallel sparse direct solvers, and algebraic multigrid (AMG) preconditioners, for fast performance on platforms ranging from laptops to HPC systems.
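To give a sense of how these capabilities are driven in practice, the sketch below shows roughly what a minimal Palace configuration for an eigenmode run looks like: a single JSON file that selects the problem type, points at a mesh, assigns materials and boundary conditions, and sets solver options. The layout mirrors the Problem/Model/Domains/Boundaries/Solver sections described in the project documentation[iii], but the specific keys, attribute numbers, and values shown here are illustrative only and should be checked against the docs before use.

```json
{
  "Problem": {
    "Type": "Eigenmode",
    "Verbose": 2,
    "Output": "postpro/eigenmode"
  },
  "Model": {
    "Mesh": "mesh/transmon.msh",
    "L0": 1.0e-6
  },
  "Domains": {
    "Materials": [
      { "Attributes": [1], "Permittivity": 1.0 },
      { "Attributes": [2], "Permittivity": 11.5 }
    ]
  },
  "Boundaries": {
    "PEC": { "Attributes": [3] }
  },
  "Solver": {
    "Order": 2,
    "Eigenmode": { "N": 3, "Target": 4.0, "Tol": 1.0e-8 },
    "Linear": { "Type": "Default", "Tol": 1.0e-8, "MaxIts": 200 }
  }
}
```

A driven or transient analysis would, in the same spirit, replace the eigenmode solver settings with the corresponding frequency- or time-domain options and add the appropriate port or excitation boundaries.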
Palace can run on a wide range of devices and on other clouds, such as Google Cloud and Microsoft Azure, Grimberg confirmed. “I do a lot of prototyping on my laptop. We actually have some internal customers who have never run it on the cloud; they run it on their laptops because it’s easier to install that way and they can do things fast when dealing with small models. But we also have people using it who, like me, need the scale of thousands of cores. In that case, they’re running on AWS’s various EC2 instance types,” he said.
The blog notes that computational modeling typically requires scientists and engineers to make compromises between model fidelity, wall-clock time, and computing resources. Lately, of course, HPC-class resources have become nearly ubiquitous among the major cloud providers, which offer a wide range of commercial and internally developed accelerators, CPUs, and high-speed interconnects.
“Palace leverages scalable algorithms and implementations from the scientific computing community, and aims to utilize the most recent advancements in computational infrastructure to deliver state-of-the-art performance. On AWS, this includes the Elastic Fabric Adapter (EFA) for fast networking and HPC-optimized Amazon Elastic Compute Cloud (EC2) instances using customized Intel processors or AWS Graviton processors for superior price-performance. Open-source software like Palace also allows users to exploit elastic cloud-based HPC to perform arbitrary numbers of simulations in parallel when exploring large parametric design spaces, unconstrained by proprietary software licensing models,” wrote Grimberg and his colleagues in the blog.
The AWS blog cited two project examples from work at its quantum center.
- The first example considers a common problem encountered in the design of superconducting quantum devices: the simulation of a single transmon qubit coupled to a readout resonator, with a terminated coplanar waveguide (CPW) transmission line for input/output. The superconducting metal layer is modeled as an infinitely thin, perfectly conducting surface on top of a c-plane sapphire substrate.
- The second example, demonstrating the capabilities and performance of Palace, involves the simulation of a superconducting metamaterial waveguide based on a chain of lumped-element microwave resonators. The model is constructed to predict the transmission properties of the device presented in Zhang et al., Science 379 (2023).


“For all of the presented applications, we configured our cloud-based HPC cluster to compile and run Palace using GCC v11.3.0, OpenMPI v4.1.4, and EFA v1.21.0 on Amazon Linux 2 OS. We used COMSOL Multiphysics for the geometry preparation and mesh generation preprocessing in each case, but Palace is equipped to support a wide range of mesh file formats in order to accommodate a range of workflows including those utilizing entirely open-source software,” according to the blog.
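For readers who want to try this themselves, the build-and-run flow is conventional for an MPI code. The sketch below assumes a generic Linux machine with a C++ compiler, MPI, and CMake already installed; the paths, process counts, and configuration file name are illustrative, and the exact launcher options should be checked against the Palace documentation.

```bash
# Illustrative sketch only: paths, options, and file names are examples,
# not taken from the AWS blog.
git clone https://github.com/awslabs/palace.git
cd palace && mkdir build && cd build

# Palace builds via CMake; the build also compiles its dependencies, including MFEM.
cmake ..
make -j"$(nproc)"

# Run a simulation described by a JSON configuration file (hypothetical name)
# across MPI processes; on a cluster this would typically be wrapped in a scheduler job.
./bin/palace -np 64 ../config/transmon.json
```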
It’s best to dig into the blog directly for more detail. That said, AWS issued performance metrics for each example, showcasing solid performance, the ability to scale problem size, and the ability to control cloud costs by choosing the simulation fidelity needed.
In the first example, an eigenmode analysis is used to compute the linearized transmon and readout resonator mode frequencies, decay rates, and corresponding electric and magnetic field modes. Two finite element models are considered: a fine model with 246.2 million degrees of freedom, and a coarse model with 15.5 million degrees of freedom that differs by 1% in the computed frequencies as compared to the fine model.
“For each of the two models, we scale the number of cores used for the simulation in order to investigate the scalability of Palace on AWS when using a variety of EC2 instance types. Figure 3 plots the simulation wall-clock times and computed speedup factors for the coarse model, while Figure 4 plots them for the higher-fidelity fine model. We observe simulation wall-clock times under 1.5 minutes for the coarse model and close to 12 minutes for the fine model achieved with the scalability of EC2, even when solving a large-scale eigenvalue problem which is often relatively challenging to scale. Also of note is the improved performance of c7g.16xlarge instance type, featuring the latest generation AWS Graviton3 processor, over the previous generation c6gn.16xlarge, often matching the performance of the latest Intel-based instance types,” reported AWS.


Looking ahead, Grimberg – a numerical simulation specialist – thinks more HPC tools are likely to emerge from quantum researchers tinkering to tackle tough problems. He recalled Palace’s modest beginnings: “We needed to be faster and thought, hey, we’re at AWS, we have a huge amount of compute available to us. How can we leverage that to go faster? We didn’t have a lot of people at the time, but had lots of compute and thought how can we build workflows to accelerate all of the work we do in this early phase?”
Wonder what other tools might be taking shape back in the lab.
[i] Sebastian Grimberg, Senior Research Scientist, AWS Center for Quantum Computing; Hugh Carson, Applied Scientist, AWS Center for Quantum Computing; and Andrew Keller, Research Science Manager, AWS Center for Quantum Computing
[ii] Palace is licensed under the Apache 2.0 license and is free to use by anyone from the broader numerical simulation and HPC community. Additional information can be found in the Palace GitHub repository, where you can file issues, learn about contributing to the project, or read the project documentation. The documentation includes a full suite of tutorial problems which guide you through the process of setting up and running simulations with Palace.
[iii] https://awslabs.github.io/palace/dev/