AWS Announces General Availability of Amazon EC2 P4d Instances with EC2 UltraClusters Capability

November 2, 2020

SEATTLE, Nov. 2, 2020 – Today, Amazon Web Services, Inc. (AWS), an Amazon.com company, announced the general availability of Amazon Elastic Compute Cloud (Amazon EC2) P4d instances, the next generation of GPU-powered instances delivering 3x faster performance, up to 60% lower cost, and 2.5x more GPU memory for machine learning training and high-performance computing (HPC) workloads compared to previous-generation P3 instances. P4d instances feature eight NVIDIA A100 Tensor Core GPUs and 400 Gbps of network bandwidth (16x more than P3 instances). Using AWS’s Elastic Fabric Adapter (EFA) and NVIDIA GPUDirect RDMA (remote direct memory access), customers can deploy P4d instances with EC2 UltraClusters capability. With EC2 UltraClusters, customers can scale P4d instances to over 4,000 A100 GPUs (2x as many as any other cloud provider) by making use of AWS-designed, non-blocking, petabit-scale networking infrastructure integrated with Amazon FSx for Lustre high-performance storage, offering on-demand access to supercomputing-class performance to accelerate machine learning training and HPC. To get started with P4d instances, visit: https://aws.amazon.com/ec2/instance-types/p4

Data scientists and engineers are continuing to push the boundaries of machine learning by creating larger and more complex models that provide higher prediction accuracy for a broad range of use cases, including perception model training for autonomous vehicles, natural language processing, image classification, object detection, and predictive analytics. Training these complex models against large volumes of data is a compute-, network-, and storage-intensive task that often takes days or weeks. Customers not only want to cut down the time to train their models, but they also want to lower their overall spend on training. Collectively, long training times and high costs limit how frequently customers can train their models, which translates into a slower pace of development and innovation for machine learning.

The increased performance of P4d instances speeds up the time to train machine learning models by up to 3x (reducing training time from days to hours), and the additional GPU memory helps customers train larger, more complex models. As data becomes more abundant, customers are training models with millions and sometimes billions of parameters, like those used for natural language processing for document summarization and question answering, object detection and classification for autonomous vehicles, image classification for large-scale content moderation, recommendation engines for e-commerce websites, and ranking algorithms for intelligent search engines—all of which require increasing network throughput and GPU memory. P4d instances feature eight NVIDIA A100 Tensor Core GPUs capable of up to 2.5 petaflops of mixed-precision performance and 320 GB of high-bandwidth GPU memory in one EC2 instance. P4d instances are the first in the industry to offer 400 Gbps network bandwidth with Elastic Fabric Adapter (EFA) and NVIDIA GPUDirect RDMA network interfaces to enable direct communication between GPUs across servers for lower latency and higher scaling efficiency, helping to remove scaling bottlenecks in multi-node distributed workloads. Each P4d instance also offers 96 Intel Xeon Scalable (Cascade Lake) vCPUs, 1.1 TB of system memory, and 8 TB of local NVMe storage to reduce single-node training times. By more than doubling the performance of previous-generation P3 instances, P4d instances can lower the cost to train machine learning models by up to 60%, providing customers greater efficiency over expensive and inflexible on-premises systems. HPC customers will also benefit from P4d’s increased processing performance and GPU memory for demanding workloads like seismic analysis, drug discovery, DNA sequencing, materials science, and financial and insurance risk modeling.
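
To make the multi-node distributed workloads mentioned above concrete, the sketch below shows a minimal PyTorch DistributedDataParallel training loop using the NCCL backend, the collective-communication layer that runs over EFA and GPUDirect RDMA when jobs span multiple P4d instances. This is a hypothetical illustration rather than AWS reference code: the model, data, and hyperparameters are placeholders, and it assumes a launcher such as torchrun starts one process per GPU and sets LOCAL_RANK and the rendezvous environment variables.

```python
# Minimal sketch of a data-parallel training loop with PyTorch
# DistributedDataParallel (DDP). One copy of this script runs per GPU; the
# NCCL backend all-reduces gradients across the eight A100 GPUs in an
# instance and, over EFA/GPUDirect RDMA, across instances.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # The launcher (e.g., torchrun) is assumed to set LOCAL_RANK, RANK,
    # WORLD_SIZE, MASTER_ADDR, and MASTER_PORT for every process.
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl")

    model = torch.nn.Linear(4096, 4096).cuda(local_rank)   # placeholder model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for _ in range(100):                                    # placeholder loop
        batch = torch.randn(64, 4096, device=local_rank)    # placeholder data
        loss = model(batch).sum()
        optimizer.zero_grad()
        loss.backward()  # gradients are averaged across all GPUs here
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```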

P4d instances are also built on the AWS Nitro System, AWS-designed hardware and software that has enabled AWS to deliver an ever-broadening selection of EC2 instances and configurations to customers, while offering performance that is indistinguishable from bare metal, providing fast storage and networking, and ensuring more secure multi-tenancy. P4d instances offload networking functions to dedicated Nitro Cards that accelerate data transfer between multiple P4d instances. Nitro Cards also enable EFA and GPUDirect, which allow direct cross-server communication between GPUs, facilitating lower latency and better scaling performance across EC2 UltraClusters of P4d instances. These Nitro-powered capabilities make it possible for customers to launch P4d instances in EC2 UltraClusters with on-demand and scalable access to over 4,000 GPUs for supercomputer-class performance.
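
To relate this to day-to-day use: when NCCL runs over EFA, a small set of libfabric environment variables determines whether the EFA provider and the GPUDirect RDMA path described above are actually used. The snippet below is a hedged illustration that could precede the process-group initialization in the earlier sketch; the exact variables and their defaults depend on the installed EFA driver and aws-ofi-nccl plugin versions, so AWS documentation remains the authoritative reference.

```python
import os

# Hypothetical pre-flight configuration for NCCL over EFA on P4d; set these
# before torch.distributed.init_process_group() creates the NCCL communicator.
os.environ.setdefault("FI_PROVIDER", "efa")            # select the EFA libfabric provider
os.environ.setdefault("FI_EFA_USE_DEVICE_RDMA", "1")   # enable the GPUDirect RDMA path on A100
os.environ.setdefault("NCCL_DEBUG", "INFO")            # log which transport NCCL actually chose
```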

“The pace at which our customers have used AWS services to build, train, and deploy machine learning applications has been extraordinary. At the same time, we have heard from those customers that they want an even lower cost way to train their massive machine learning models,” said Dave Brown, Vice President, EC2, AWS. “Now, with EC2 UltraClusters of P4d instances powered by NVIDIA’s latest A100 GPUs and petabit-scale networking, we’re making supercomputing-class performance available to virtually everyone, while reducing the time to train machine learning models by 3x, and lowering the cost to train by up to 60% compared to previous generation instances.”

Customers can run containerized applications on P4d instances using AWS Deep Learning Containers, with libraries for Amazon Elastic Kubernetes Service (Amazon EKS) or Amazon Elastic Container Service (Amazon ECS). For a more fully managed experience, customers can use P4d instances via Amazon SageMaker, providing developers and data scientists with the ability to build, train, and deploy machine learning models quickly. HPC customers can leverage AWS Batch and AWS ParallelCluster with P4d instances to help orchestrate jobs and clusters efficiently. P4d instances support all major machine learning frameworks, including TensorFlow, PyTorch, and Apache MXNet, giving customers the flexibility to choose the framework that works best for their applications. P4d instances are available in US East (N. Virginia) and US West (Oregon), with availability planned for additional regions soon. P4d instances can be purchased as On-Demand Instances, with Savings Plans, with Reserved Instances, or as Spot Instances.
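
As a concrete example of the managed SageMaker path mentioned above, the following is a minimal, hypothetical sketch of submitting a PyTorch training job to P4d capacity with the SageMaker Python SDK (v2). The entry-point script, IAM role ARN, S3 URI, and framework/Python versions are placeholders to be replaced with values valid in your account and Region, and it assumes ml.p4d.24xlarge capacity is available to SageMaker there.

```python
# Hypothetical sketch: training on ml.p4d.24xlarge with the SageMaker Python
# SDK (v2). All identifiers below (train.py, the role ARN, the S3 URI, the
# framework and Python versions) are placeholders.
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train.py",                                # your training script
    role="arn:aws:iam::123456789012:role/SageMakerRole",   # placeholder IAM role
    instance_count=2,                                      # two P4d instances, 16 A100 GPUs total
    instance_type="ml.p4d.24xlarge",
    framework_version="1.6.0",                             # example PyTorch version
    py_version="py36",
    hyperparameters={"epochs": 10, "batch-size": 256},
)

# SageMaker provisions the instances, runs the job against data staged in S3,
# and tears the cluster down when training finishes.
estimator.fit({"training": "s3://my-bucket/path/to/training-data"})
```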

GE Healthcare is the $16.7 billion healthcare business of GE. As a leading global medical technology and digital solutions innovator, GE Healthcare enables clinicians to make faster, more informed decisions through intelligent devices, data analytics, applications and services, supported by its Edison intelligence platform. “At GE Healthcare, we provide clinicians with tools that help them aggregate data, apply AI and analytics to that data and uncover insights that improve patient outcomes, drive efficiency and eliminate errors,” said Karley Yoder, VP & GM, Artificial Intelligence, at GE Healthcare. “Our medical imaging devices generate massive amounts of data that need to be processed by our data scientists. With previous GPU clusters, it would take days to train complex AI models, such as Progressive GANs, for simulations and view the results. Using the new P4d instances reduced processing time from days to hours. We saw two- to three-times greater speed on training models with various image sizes, while achieving better performance with increased batch size and higher productivity with a faster model development cycle.”

Toyota Research Institute (TRI), founded in 2015, is working to develop automated driving, robotics, and other human amplification technology for Toyota. “At TRI, we’re working to build a future where everyone has the freedom to move,” said Mike Garrison, Technical Lead, Infrastructure Engineering at TRI. “The previous generation P3 instances helped us reduce our time to train machine learning models from days to hours and we are looking forward to utilizing P4d instances, as the additional GPU memory and more efficient float formats will allow our machine learning team to train with more complex models at an even faster speed.”

Aon is a leading global professional services firm providing a broad range of risk, retirement and health solutions. Aon PathWise is a GPU-based and scalable HPC risk management solution that insurers and re-insurers, banks, and pension funds can use to address today’s key challenges such as hedge strategy testing, regulatory and economic forecasting, and budgeting. “Aon PathWise allows (re)insurers and pension funds to access next generation technology to rapidly solve today’s key insurance challenges such as hedge strategy testing, regulatory and economic forecasting, and budgeting,” said Peter Phillips, President and CEO, PathWise. “Through the use of AWS P4d instances with 2.5 petaflops of mixed-precision performance, we are able to deliver a two-fold reduction in cost to our customers without loss of performance, and can deliver a 2.5x improvement in speed for the most demanding calculations. Speed matters and we continue to delight our customers thanks to the new instances from AWS.”

Composed of radiology and AI experts, Rad AI builds products that maximize radiologist productivity, ultimately making healthcare more widely accessible and improving patient outcomes. “At Rad AI, our mission is to increase access to and quality of healthcare, for everyone. With a focus on medical imaging workflow, Rad AI saves radiologists time, reduces burnout, and enhances accuracy,” said Doktor Gurson, Co-founder of Rad AI. “We use AI to automate radiology workflows and help streamline radiology reporting. With the new EC2 P4d instances, we’ve seen faster inference and the ability to train models 2.4x faster, with higher accuracy than on previous generation P3 instances. This allows faster, more accurate diagnosis, and greater access to high quality radiology services provided by our customers across the US.”

OmniSci is a pioneer in accelerated analytics. The OmniSci platform is used in business and government to find insights in data beyond the limits of mainstream analytics tools. “At OmniSci, we’re working to build a future where data science and analytics converge to break down and fuse data silos. Customers are leveraging their massive amounts of data that may include location and time to build a full picture of not only what is happening, but when and where through granular visualization of spatial temporal data. Our technology enables seeing both the forest and the trees,” said Ray Falcione, VP of US Public Sector at OmniSci. “Through the use of P4d instances, we were able to reduce the cost to deploy our platform significantly compared to previous-generation GPU instances, thus enabling us to cost-effectively scale massive data sets. The networking improvements on the A100 have increased our efficiencies in how we scale to billions of rows of data and enabled our customers to glean insights even faster.”

Zenotech Ltd is redefining engineering online through the use of HPC clouds that deliver on-demand licensing models together with extreme performance benefits by leveraging GPUs. “At Zenotech, we are developing the tools to enable designers to create more efficient and environmentally friendly products. We work across industries and our tools provide greater product performance insight through the use of large-scale simulation,” said Jamil Appa, Director and Co-Founder, Zenotech. “The use of P4d instances enables us to reduce our simulation runtime by 65% compared to the previous generation of GPUs. This speed-up cuts our time to solve significantly, allowing our customers to get designs to market faster or to do higher-fidelity simulations than were previously possible.”

About Amazon Web Services

For 14 years, Amazon Web Services has been the world’s most comprehensive and broadly adopted cloud platform. AWS offers over 175 fully featured services for compute, storage, databases, networking, analytics, robotics, machine learning and artificial intelligence (AI), Internet of Things (IoT), mobile, security, hybrid, virtual and augmented reality (VR and AR), media, and application development, deployment, and management from 77 Availability Zones (AZs) within 24 geographic regions, with announced plans for 12 more Availability Zones and four more AWS Regions in Indonesia, Japan, Spain, and Switzerland. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—trust AWS to power their infrastructure, become more agile, and lower costs. To learn more about AWS, visit aws.amazon.com.

About Amazon

Amazon is guided by four principles: customer obsession rather than competitor focus, passion for invention, commitment to operational excellence, and long-term thinking. Customer reviews, 1-Click shopping, personalized recommendations, Prime, Fulfillment by Amazon, AWS, Kindle Direct Publishing, Kindle, Fire tablets, Fire TV, Amazon Echo, and Alexa are some of the products and services pioneered by Amazon. For more information, visit amazon.com/about and follow @AmazonNews.


Source: Amazon Web Services
