When we launched the Elastic Fabric Adapter (EFA) at re:Invent 2018, we delivered on a goal of accelerating computational fluid dynamics (CFD) and weather applications in Amazon EC2, without sacrificing the elasticity, regional availability, cost, and instance choice that make EC2 so popular. At launch, EFA was available on C5n and P3dn instance types in 5 AWS Regions.
Performance at scale was a step-function improvement, as you can tell from Figure 1. Today, EFA is available on 33 instance types powered by Intel, AMD, and AWS Graviton processors with multiple memory, disk, and accelerator configurations, and at least one EFA-enabled instance type is available in every AWS Region.
The use cases for EFA have expanded to include large-scale distributed machine learning training and real-time uncompressed high-definition video streaming.

We have continued to iterate on EFA’s performance and capabilities over the last 4 years, and today we want to talk about the second generation of EFA.
Many of the improvements discussed in this post are already available to customers on any instance that supports EFA, but the recently released Trn1 instance type is the first time that we have brought all the pieces together in a single place. This iterative approach, deploying improvements to EFA on existing instances as the improvements are developed, is critical to how we approach EFA development. Our customers are constantly finding new use cases, and we don’t wait for the next instance generation to address those customers’ needs.
Distributed training
An example of this iterative development process is distributed machine learning training on P4d instances. Two years ago, most machine learning training used a data-parallel model across a small number of instances, with communication consisting primarily of Allreduce operations multiple gigabytes in size. Since then, the machine learning community has adopted larger scales and multiple levels of parallelism, which changed the communication pattern in ways that challenged EFA’s capabilities.
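For readers less familiar with that pattern, here is a minimal sketch of a data-parallel Allreduce using PyTorch’s torch.distributed API over the NCCL backend (which carries its traffic over EFA on EFA-enabled instances). The tensor size and setup are illustrative, not a benchmark.

```python
# Minimal data-parallel Allreduce sketch (illustrative).
# Launch with e.g. `torchrun --nproc_per_node=8 allreduce_sketch.py`.
import torch
import torch.distributed as dist

def main():
    # NCCL is the usual backend for GPU collectives; on EFA-enabled
    # instances its traffic runs over EFA.
    dist.init_process_group(backend="nccl")
    rank = dist.get_rank()
    torch.cuda.set_device(rank % torch.cuda.device_count())

    # In data-parallel training each rank computes gradients locally,
    # then all ranks average them. For large models these buffers add
    # up to gigabytes, as described above.
    grads = torch.randn(64 * 1024 * 1024, device="cuda")  # stand-in for gradients
    dist.all_reduce(grads, op=dist.ReduceOp.SUM)
    grads /= dist.get_world_size()  # sum -> average

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```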
Over the last year on the same P4d hardware, we improved performance for small and medium message sizes by up to 50% (Figure 2). This work has resulted in observed performance improvements of over 18% for Fully Sharded Data Parallel (FSDP), a popular PyTorch distributed training library, and over 8% for Megatron-LM, NVIDIA’s open-source distributed training library (Figure 3).
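As a point of reference, this is roughly how FSDP is enabled in PyTorch. The model and sizes below are placeholders; the EFA-level improvements apply underneath this API without any application changes.

```python
# Minimal FSDP sketch (illustrative). Wrapping a model shards its
# parameters, gradients, and optimizer state across ranks, replacing a
# few huge Allreduces with many smaller collectives -- the small/medium
# message pattern discussed above.
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

dist.init_process_group(backend="nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = torch.nn.Sequential(              # placeholder model
    torch.nn.Linear(4096, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 4096),
).cuda()

model = FSDP(model)                       # shard parameters across ranks
optim = torch.optim.AdamW(model.parameters(), lr=1e-4)

x = torch.randn(32, 4096, device="cuda")  # placeholder batch
loss = model(x).sum()
loss.backward()                           # triggers gradient reduce-scatter
optim.step()
```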


Second generation improvements
The second generation of EFA provides another step function in application performance, especially for machine learning applications. For very small collective operations with accelerators like GPUs or AWS Trainium, second generation EFA provides an additional 50% communication-time improvement over the first generation EFA available on P4d. At the same time, we have doubled the throughput of each AWS Nitro System card that hosts the EFA devices, which allows us to improve large-message collective performance and average latency.
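To put the small-collective claim in context, communication time for small Allreduces is typically measured with a microbenchmark along the following lines. This is a sketch, not our internal benchmark; the payload size and iteration counts are arbitrary.

```python
# Sketch of a small-message Allreduce latency microbenchmark
# (illustrative; sizes and iteration counts are arbitrary).
import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

buf = torch.ones(1024, device="cuda")  # a "very small" collective payload
for _ in range(10):                    # warm-up
    dist.all_reduce(buf)
torch.cuda.synchronize()

start = torch.cuda.Event(enable_timing=True)
stop = torch.cuda.Event(enable_timing=True)
iters = 1000
start.record()
for _ in range(iters):
    dist.all_reduce(buf)
stop.record()
torch.cuda.synchronize()  # wait for all collectives to finish

if dist.get_rank() == 0:
    # elapsed_time() reports milliseconds; convert to microseconds.
    print(f"avg allreduce time: {start.elapsed_time(stop) / iters * 1000:.1f} us")
```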
The following sections discuss the improvements we’ve made to the EFA project since the first generation of EFA launched. While subsets of the improvements are available on any EFA-enabled instance, it is only with second generation EFA that all the improvements are available in one place.
AWS Nitro System hardware improvements
The second generation of EFA starts with new hardware: an updated Nitro System card that improves network performance. Endpoint latency (the portion of latency caused by the NIC and host software rather than by network cables and switches) is reduced by 30%. At the same time, available bandwidth per Nitro card has doubled from 100 Gbps to 200 Gbps, with twice the PCIe bandwidth to help keep the network busy.
Second generation EFA also greatly improves support for moving data directly between accelerator memories (like those on AWS Trainium chips or GPUs), improving distributed machine learning training applications. In the first generation of EFA, we added an RDMA read semantic to support NCCL communication. In second generation EFA, we’ve added a more complete RDMA interface, allowing for more complex completion semantics (like the low-latency LL/LL128 protocols in NVIDIA’s NCCL implementation), which further lowers communication time. The new RDMA interface is also available for HPC applications using MPI, improving throughput when there are a small number of communicating MPI processes per instance. This is important for supporting hybrid OpenMP/MPI applications.
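To illustrate the RDMA-read style of communication at the MPI level, here is a minimal mpi4py sketch using MPI one-sided (RMA) operations. Whether an MPI library actually maps MPI_Get onto EFA’s RDMA read depends on that library and its Libfabric configuration, so treat this as a sketch of the programming model rather than a statement about any particular MPI implementation.

```python
# Minimal MPI one-sided (RMA) sketch with mpi4py. MPI_Get is a read from
# a remote memory window -- the same one-sided pattern an RDMA read
# supports. Run with e.g. `mpirun -n 2 python rma_sketch.py`.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank exposes a buffer in an RMA window.
src = np.full(1024, rank, dtype=np.float64)
win = MPI.Win.Create(src, comm=comm)

dst = np.empty(1024, dtype=np.float64)
peer = (rank + 1) % comm.Get_size()

win.Lock(peer, MPI.LOCK_SHARED)
win.Get([dst, MPI.DOUBLE], peer)  # one-sided read from the peer's buffer
win.Unlock(peer)

assert (dst == peer).all()
win.Free()
```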
Software improvements
On any network, quite a bit of software sits between an HPC or ML application and the network device. In the case of EFA, that includes a kernel module, a package called Libfabric that provides a portable programming interface to RDMA-like network cards, and MPI or NCCL packages. As part of our efforts to improve application performance, we have touched every one of these pieces of software…
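For readers who want to poke at the Libfabric layer mentioned above, libfabric ships a small fi_info utility that lists available providers. The wrapper below is our own illustrative convenience, not an AWS tool; it simply shells out to fi_info to check that the EFA provider is visible on an instance.

```python
# Illustrative check that the Libfabric EFA provider is visible, by
# shelling out to libfabric's fi_info utility. This wrapper is an
# example of ours, not an AWS-provided tool.
import shutil
import subprocess

def efa_provider_available() -> bool:
    if shutil.which("fi_info") is None:
        raise RuntimeError("fi_info not found; is libfabric installed?")
    # `fi_info -p efa` asks Libfabric to list only the EFA provider;
    # a nonzero exit status means no matching provider was found.
    result = subprocess.run(
        ["fi_info", "-p", "efa"],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0 and "provider: efa" in result.stdout

if __name__ == "__main__":
    print("EFA provider available:", efa_provider_available())
```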
Read the full blog to learn more. Reminder: you can learn a lot from AWS HPC engineers by subscribing to the HPC Tech Short YouTube channel and following the AWS HPC Blog.