It’s been a big year for Arm. The new top supercomputer in the world, Fugaku, runs on the company’s chips, and just a few months ago, the chipmaker struck a $40 billion deal to sell itself to Nvidia, creating the potential for a new juggernaut able to go toe-to-toe with Intel and AMD. Now, Arm is making another big move: it’s migrating the vast majority of its electronic design automation (EDA) workloads to Amazon Web Services (AWS).
Arm’s EDA workloads span front-end design, simulation, and final verification, along with a wide variety of other tasks centered on measuring the performance and behavior of a given design. For a typical SoC, these processes can take Arm months or even years and incur massive computing costs.
Now, Arm is moving most of those workloads to AWS. Specifically, it’s moving them to AWS instances based on Graviton2 processors, which use Arm’s own Neoverse N1 cores. AWS has reported that those instances, announced a year ago and made generally available six months ago, offer substantial price-performance improvements over comparable x86-based instances. Arm is hoping to combine its processors’ performance benefits with the scalability of the cloud to run simulations in parallel, shorten simulation times, and fit in additional testing cycles.
Overall, Arm is planning to reduce its datacenter footprint by at least 45 percent and its on-premises computing by 80 percent as a result of the AWS migration.
“Through our collaboration with AWS, we’ve focused on improving efficiencies and maximizing throughput to give precious time back to our engineers to focus on innovation,” said Rene Haas, president of Arm’s IP Products Group. “Now that we can run on Amazon EC2 using AWS Graviton2 instances with Arm Neoverse-based processors, we’re optimizing engineering workflows, reducing costs and accelerating project timelines to deliver powerful results to our customers more quickly and cost effectively than ever before.”
Arm is leveraging a number of AWS services, including Amazon Elastic Compute Cloud (EC2) to optimize workflows and AWS Compute Optimizer, which delivers machine-learning-driven instance recommendations for specific workloads. Arm is also using Databricks services to develop and run ML tools on AWS.
“AWS provides truly elastic high performance computing, unmatched network performance, and scalable storage that is required for the next generation of EDA workloads, and this is why we are so excited to collaborate with Arm to power their demanding EDA workloads running on our high-performance Arm-based Graviton2 processors,” said Peter DeSantis, senior vice president of Global Infrastructure and Customer Support at AWS.
Arm is also scaling up its HPC offerings. A couple of months ago, the company teased its roadmap for the forthcoming Neoverse V1 and N2 cores, which Arm expects will deliver 50 percent and 40 percent higher single-threaded performance, respectively, compared to the Neoverse N1 cores. The V1 cores will also support the Scalable Vector Extension (SVE) – a boon for HPC applications.