AWS Announces Three New Amazon EC2 Instances Powered by AWS-Designed Chips

December 1, 2021

LAS VEGAS, Dec. 1, 2021 — Tuesday, at AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company, announced three new Amazon Elastic Compute Cloud (Amazon EC2) instances powered by AWS-designed chips that help customers significantly improve the performance, cost, and energy efficiency of their workloads running on Amazon EC2. New C7g instances powered by next-generation AWS Graviton3 processors deliver up to 25% better performance than current generation C6g instances powered by AWS Graviton2 processors. New Trn1 instances powered by AWS Trainium chips provide the best price performance and the fastest time to train most machine learning models in Amazon EC2. New storage-optimized Im4gn/Is4gen/I4i instances based on AWS-designed AWS Nitro SSDs (solid-state drives) offer the best storage performance for I/O-intensive workloads running on Amazon EC2. Together, these launches mark a new generation of Amazon EC2 instances built on AWS-designed chips to help customers power their most business-critical applications.

“With our investments in AWS-designed chips, customers have realized huge price performance benefits for some of today’s most business-critical workloads. These customers have asked us to continue pushing the envelope with each new EC2 instance generation,” said David Brown, Vice President, Amazon EC2 at AWS. “AWS’s continued innovation means customers are now getting brand new, game changing instances to run their most important workloads with significantly better price performance than anywhere else.”

C7g instances powered by new AWS Graviton3 processors deliver up to 25% better performance compared to current generation C6g instances powered by AWS Graviton2 processors

Customers like DirecTV, Discovery, Epic Games, Formula 1, Honeycomb.io, Intuit, Lyft, MercadoLibre, NextRoll, Nielsen, SmugMug, Snap, Splunk, and Sprinklr have seen significant performance gains and reduced costs from running AWS Graviton2-based instances in production since they launched in 2020. The Graviton2 instance portfolio offers 12 different instances that include general purpose, compute optimized, memory optimized, storage optimized, burstable, and accelerated computing instances, so customers have the deepest and broadest choice of cost-effective and power-efficient compute in the cloud. As customers bring more compute intensive workloads like high performance computing (HPC), gaming, and machine learning inference to the cloud, and as their compute, storage, memory, and networking demands grow, they are looking for even better price performance and energy efficiency to run these demanding workloads.

C7g instances, powered by next generation AWS Graviton3 processors, provide up to 25% better performance for compute-intensive workloads compared to current generation C6g instances powered by Graviton2 processors. AWS Graviton3 processors also deliver up to 2x higher floating point performance for scientific, machine learning, and media encoding workloads, up to 2x faster performance for cryptographic workloads, and up to 3x better performance for machine learning workloads compared to previous generation AWS Graviton2 processors. AWS Graviton3 processors are also more energy efficient, using up to 60% less energy for the same performance as comparable EC2 instances. C7g instances are the first in the cloud to feature the latest DDR5 memory, which provides 50% higher memory bandwidth versus AWS Graviton2-based instances to improve the performance of memory-intensive applications like scientific computing. C7g instances also deliver 20% higher networking bandwidth compared to AWS Graviton2-based instances. C7g instances support Elastic Fabric Adapter (EFA), which allows applications to communicate directly with network interface cards, providing lower and more consistent latency, to enhance the performance of applications that require parallel processing at scale like HPC and video encoding. C7g instances are available today in preview. To learn more about C7g instances, visit aws.amazon.com/ec2/instance-types/c7g.
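As an illustrative sketch only (not an AWS-published formula), the two headline Graviton3 figures above can be combined to estimate a best-case gain in performance per watt. Both inputs are the "up to" maxima quoted in this release, which will rarely coincide on a real workload:

```python
# Back-of-the-envelope combination of the "up to" Graviton3 claims above.
# These are marketing maxima, not guaranteed figures for any workload.

GRAVITON3_PERF_GAIN = 1.25     # up to 25% better performance vs. C6g
GRAVITON3_ENERGY_RATIO = 0.40  # up to 60% less energy for the same performance

def perf_per_watt_gain(perf_gain: float, energy_ratio: float) -> float:
    """Relative performance per watt vs. a Graviton2 baseline of 1.0."""
    return perf_gain / energy_ratio

print(perf_per_watt_gain(GRAVITON3_PERF_GAIN, GRAVITON3_ENERGY_RATIO))  # 3.125
```

In the best case, that is roughly a 3x improvement in performance per watt; a workload seeing smaller gains on either axis would land well below that ceiling.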

Trn1 instances powered by AWS Trainium chips provide the best price performance and the fastest time to train most machine learning models in Amazon EC2

More and more customers are building, training, and deploying machine learning models to power applications that have the potential to reinvent their businesses and customer experiences. However, to ensure improved accuracy, these machine learning models must consume ever-growing amounts of training data, which causes them to become increasingly expensive to train. This dilemma can limit the number of machine learning models customers are able to deploy. AWS provides the broadest and deepest choice of compute offerings for machine learning, including the EC2 P4d instances featuring NVIDIA A100 Tensor Core GPUs and EC2 DL1 instances featuring Gaudi accelerators from Habana Labs. But even with the fastest accelerated instances available today, it can still be prohibitively expensive and time consuming to train ever-larger machine learning models.

Trn1 instances powered by AWS Trainium chips offer the best price performance and the fastest machine learning model training in Amazon EC2, providing up to 40% lower cost to train deep learning models compared to the latest P4d instances. Trn1 instances offer 800 Gbps EFA networking bandwidth (2x higher than the latest EC2 GPU-based instances) and integrate with Amazon FSx for Lustre high performance storage—enabling customers to launch Trn1 instances with EC2 UltraClusters capability. With EC2 UltraClusters, developers can scale machine learning training to 10,000+ Trainium accelerators interconnected with petabit-scale networking, giving customers on-demand access to supercomputing-class performance to cut training time from months to days for even the largest and most complex models. Trn1 instances are available today in preview. To learn more about Trn1 instances, visit aws.amazon.com/ec2/instance-types/trn1.
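The "months to days" claim above rests on dividing a training job across thousands of accelerators. As a rough sketch, assuming near-linear scaling (the node count and the 80% scaling efficiency below are hypothetical; real efficiency depends heavily on the model and the network):

```python
# Sketch of the "months to days" scaling claim for EC2 UltraClusters.
# The 80% scaling efficiency is an assumption for illustration only.

def estimated_days(single_accel_days: float, accelerators: int,
                   scaling_efficiency: float = 0.8) -> float:
    """Wall-clock days if the work divides across accelerators at the given efficiency."""
    return single_accel_days / (accelerators * scaling_efficiency)

# A hypothetical job needing ~20 accelerator-years...
days_on_one = 20 * 365
# ...finishes in under a day on a 10,000-accelerator cluster under these assumptions.
print(round(estimated_days(days_on_one, 10_000), 2))  # 0.91
```

The point of the sketch is the ratio, not the absolute numbers: at this scale, even multi-year single-accelerator workloads compress into days, provided the model and interconnect sustain high parallel efficiency.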

Im4gn/Is4gen/I4i instances featuring new AWS Nitro SSDs deliver the best storage performance for I/O-intensive workloads

Today, customers use I3/I3en storage-optimized instances for applications that require direct access to data sets on local storage like scale-out transactional and relational databases (e.g. MySQL and PostgreSQL), NoSQL databases (e.g. Cassandra, MongoDB, Redis, etc.), big data (e.g. Hadoop), and data analytics workloads (e.g. Spark, Hive, Presto, etc.). I3/I3en instances offer Non-Volatile Memory Express (NVMe) SSD-backed instance storage optimized for low latency, high I/O performance, and throughput at a low cost. Customers appreciate the fast transaction times I3/I3en instances provide, but as they evolve their workloads to process even more complex transactions on larger data sets, they need even higher compute performance and faster access to data, without higher costs.

Im4gn/Is4gen/I4i instances are architected to maximize the storage performance of I/O-intensive workloads. Im4gn/Is4gen/I4i instances offer up to 30 TB of NVMe storage from AWS-designed AWS Nitro SSDs, delivering up to 60% lower I/O latency and 75% lower latency variability compared to previous generation I3 instances to maximize application performance. AWS Nitro SSDs are tightly integrated with the AWS Nitro System via optimizations in the storage stack, hypervisor, and hardware. Because AWS manages both the hardware and firmware of the AWS Nitro SSDs, SSD updates are delivered more quickly than with commercial SSDs, giving customers improved functionality. Im4gn instances (available today) feature AWS Graviton2 processors and provide up to 40% better price performance and up to 44% lower cost per TB of storage compared to I3 instances. Is4gen instances (available today) also use AWS Graviton2 processors and provide up to 15% lower cost per TB of storage and up to 48% better compute performance compared to I3en instances. To get started with Im4gn/Is4gen instances, visit aws.amazon.com/ec2/instance-types/i4g. I4i instances (available soon) feature 3rd generation Intel Xeon Scalable processors (Ice Lake), delivering up to 55% better compute performance than current generation I3 instances. To learn more about I4i instances, visit aws.amazon.com/ec2/instance-types/i4i.
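For readers sizing storage fleets, the cost-per-TB deltas quoted above translate directly into per-TB budgets. A small sketch (the $100/TB baseline is hypothetical; only the percentage reductions come from the announcement):

```python
# Applying the quoted "up to" cost-per-TB reductions to a hypothetical
# baseline. Integer cents keep the arithmetic exact.

def cost_per_tb_after(baseline_cents: int, pct_lower: int) -> int:
    """Cost per TB after an 'up to pct_lower%' reduction, in integer cents."""
    return baseline_cents * (100 - pct_lower) // 100

BASELINE = 10_000  # hypothetical $100.00/TB on the previous generation
print(cost_per_tb_after(BASELINE, 44))  # 5600 -> $56.00/TB for Im4gn vs. I3
print(cost_per_tb_after(BASELINE, 15))  # 8500 -> $85.00/TB for Is4gen vs. I3en
```

As with all figures in this release, these are best-case maxima; actual savings depend on instance size, region, and purchasing model.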

SAP HANA is the world’s leading in-memory database and serves as the foundation of the SAP Business Technology Platform. “Over the past decade, SAP HANA has helped customers manage their most mission critical transactional and analytics workloads,” said Irfan Khan, President of HANA Database & Analytics at SAP. “AWS investments and innovations on ARM-based AWS Graviton processors and SAP HANA Cloud are a great match with potential to deliver step-wise operation and performance improvement benefits to our enterprise customers, and to SAP’s cloud analytics and data management solutions powered by SAP HANA Cloud.”

Twitter is what’s happening and what people are talking about right now. “Twitter is working on a multi-year project to leverage the AWS Graviton-based EC2 instances to deliver Twitter timelines. As part of our ongoing engineering to drive further efficiencies, we tested the new Graviton3-based C7g instances,” said Nick Tornow, Head of Platform at Twitter. “Across a number of benchmarks that we’ve found to be representative of the performance of Twitter workloads, we found Graviton3-based C7g instances deliver 20%-80% higher performance versus Graviton2-based C6g instances, while also reducing tail latencies by as much as 35%. We are excited to utilize Graviton3-based instances in the future to realize significant price performance benefits.”

Formula 1 (F1) racing began in 1950 and is the world’s most prestigious motor racing competition, as well as the world’s most popular annual sporting series. “We had already seen that Graviton2-based C6gn instances provided us the best price performance for some of our CFD workloads. We have now found Graviton3 C7g instances to be 40% faster than the Graviton2 C6gn instances for those same simulations,” said Pat Symonds, CTO at Formula 1 Management. “We’re excited that EFA will be standard on this instance type, and given this much improved price performance, we expect Graviton3-based instances to become the optimal choice to run all of our CFD workloads.”

Founded in 1991, Epic Games is the creator of Fortnite, Unreal, Gears of War, Shadow Complex, and the Infinity Blade series of games. Epic’s Unreal Engine technology brings high-fidelity, interactive experiences to PC, console, mobile, AR, VR, and the Web. “As we look to the future and building increasingly immersive and compelling experiences for players, we are excited to use AWS Graviton3-based EC2 instances,” said Mark Imbriaco, Senior Director of Engineering at Epic Games. “Our testing has shown they are suitable for even the most demanding, latency-sensitive workloads while providing significant price performance benefits and expanding what is possible within Fortnite and any Unreal Engine created experience.”

Honeycomb develops an observability platform that enables engineering teams to visualize, analyze, and improve cloud application quality and performance. “We’re excited to have tested our high-throughput telemetry ingestion workload against early preview instances of AWS Graviton3 and have seen a 35% performance increase for our workload over Graviton2,” said Liz Fong-Jones, Principal Developer Advocate at honeycomb.io. “We were able to run 30% fewer instances of C7g than C6g serving the same workload, and with 30% reduced latency. We are looking forward to adopting AWS Graviton3-powered C7g instances in production once they are generally available.”

Anthropic builds reliable, interpretable, and steerable AI systems that will have many opportunities to create value commercially and for public benefit. “Our research interests span multiple areas including natural language, human feedback, scaling laws, reinforcement learning, code generation, and interpretability. A major key to our success is access to modern infrastructure that allows us to spin up very large fleets of high-performance deep learning accelerators,” said Tom Brown, Co-founder at Anthropic. “We are looking forward to using Trn1 instances powered by AWS Trainium, as their unprecedented ability to scale to tens of thousands of nodes and higher network bandwidth will enable us to iterate faster while keeping our costs under control.”

Splunk is a leading data platform provider and is designed to investigate, monitor, analyze, and act on data at any scale. “We run C/C++ based workloads for indexing and searching event data. Our workload is CPU bound and benefits from high capacity and low latency SSD storage,” said Brad Murphy, Vice President, Cloud Platform & Infrastructure at Splunk. “When evaluating the new Im4gn/Is4gen instances powered by AWS Graviton2, we observed an up to 50% decrease in search runtime compared to I3/I3en instances, which we currently use. This makes Im4gn and Is4gen instances a great choice for running our storage-intensive workloads with significant price performance improvement and lower TCO.”

Sprinklr helps the world’s biggest companies make their customers happier across 30+ digital channels—using the most advanced, sophisticated AI engine built for the enterprise to create insight-driven strategies and better customer experiences. “We benchmarked our Java-based search workloads on Amazon EC2 Im4gn/Is4gen instances powered by AWS Graviton2 processors. Smaller Is4gen instances offer similar performance compared to larger I3en instances, presenting an opportunity to meaningfully reduce the TCO,” said Abhay Bansal, Vice President of Engineering at Sprinklr. “We also saw a significant 50% reduction in latency for queries when moving our workloads from I3 to Im4gn instances, indicating a significant 40% price performance benefit. Moving to AWS Graviton2-based instances was easy, taking two weeks to complete benchmarking. We are very happy with our experience and look forward to running these workloads in production on Im4gn and Is4gen instances.”

Redis Enterprise powers mission critical apps and services for over 8,000 organizations globally by enabling software teams to create a high-performance data layer for the real-time world. “We’re thrilled to see the Amazon EC2 I4i instances using the new low latency AWS Nitro SSDs that deliver better transaction speed than the previous generation instances,” said Yiftach Shoolman, Co-Founder and CTO at Redis. “We expect the faster storage performance and higher networking and processor speeds of the I4i instances will deliver significant improvements at an even more attractive total cost of ownership for our customers who use Redis-on-Flash on I4i instances.”

About Amazon Web Services

For over 15 years, Amazon Web Services has been the world’s most comprehensive and broadly adopted cloud offering. AWS has been continually expanding its services to support virtually any cloud workload, and it now has more than 200 fully featured services for compute, storage, databases, networking, analytics, machine learning and artificial intelligence (AI), Internet of Things (IoT), mobile, security, hybrid, virtual and augmented reality (VR and AR), media, and application development, deployment, and management from 81 Availability Zones within 25 geographic regions, with announced plans for 27 more Availability Zones and nine more AWS Regions in Australia, Canada, India, Indonesia, Israel, New Zealand, Spain, Switzerland, and the United Arab Emirates. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—trust AWS to power their infrastructure, become more agile, and lower costs. To learn more about AWS, visit aws.amazon.com.

About Amazon

Amazon is guided by four principles: customer obsession rather than competitor focus, passion for invention, commitment to operational excellence, and long-term thinking. Amazon strives to be Earth’s Most Customer-Centric Company, Earth’s Best Employer, and Earth’s Safest Place to Work. Customer reviews, 1-Click shopping, personalized recommendations, Prime, Fulfillment by Amazon, AWS, Kindle Direct Publishing, Kindle, Career Choice, Fire tablets, Fire TV, Amazon Echo, Alexa, Just Walk Out technology, Amazon Studios, and The Climate Pledge are some of the things pioneered by Amazon. For more information, visit amazon.com/about.


Source: AWS
