HPCwire Reveals Winners of the 2022 Readers’ and Editors’ Choice Awards During SC22

November 14, 2022

DALLAS, Texas, Nov. 14, 2022 — HPCwire, the leading publication for news and information for the high performance computing industry, announced the winners of the 2022 HPCwire Readers’ and Editors’ Choice Awards at the Supercomputing Conference (SC22) taking place this week in Dallas, Texas. Tom Tabor, CEO of Tabor Communications, unveiled the list of 137 winners across 21 categories just before the opening gala reception.

“The HPCwire Readers’ and Editors’ Choice Awards serve as a pillar of recognition in our community, acknowledging major achievements, outstanding leadership, and innovative breakthroughs,” said Tom Tabor, CEO of Tabor Communications, publisher of HPCwire. “There is undeniable community support signified in receiving this award, not only from the entire HPC ecosystem but also from the multitude of industries it serves. We proudly recognize these brilliant efforts and achievements and gladly allow the voices of our readers to be heard. Our sincere congratulations to all of the leading global institutions who have been recognized with an award.”

“HPCwire is excited and honored to shine a spotlight on the companies, use cases, and technologies that our readers and editorial team have selected as this year’s winners,” said Tiffany Trader, Managing Editor, HPCwire. “These technical, scientific, and community achievements are pushing the boundaries of what’s possible and helping solve some of humanity’s most pressing challenges.”

HPCwire has designated two categories of awards: (1) Readers’ Choice, where winners have been elected by HPCwire readers, and (2) Editors’ Choice, where winners have been selected by a panel of HPCwire editors and thought leaders in HPC. The process began with open nominations, followed by voting throughout the month of September.

These awards, now in their 19th year, are widely recognized as among the most prestigious honors the HPC community gives its own each year and are the only awards that open voting to a worldwide audience of end users.

The 2022 HPCwire Readers’ and Editors’ Choice Award winners are:

Best Use of HPC in Life Sciences

Readers’ Choice: Researchers at The University of Texas at Austin have developed an enzyme that can break down, in a matter of hours to days, environment-throttling plastics that typically take centuries to degrade. The project focuses on polyethylene terephthalate (PET), a significant polymer found in most consumer packaging that makes up 12% of all global waste. The researchers used a machine learning model to generate novel mutations to a natural enzyme called PETase that allows bacteria to degrade PET plastics. The model predicts which mutations would accomplish the goal of quickly depolymerizing post-consumer waste plastic at low temperatures. The Texas Advanced Computing Center’s (TACC) Maverick2 supercomputer powered the deep learning models that helped engineer the plastic-eating enzyme. A patent has been filed for the technology, which could help in future landfill cleanup and greening of high waste-producing industries.
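
The described workflow, in which a model scores candidate mutations so only the most promising are tested, can be illustrated with a toy mutation scan. This is a sketch only: `activity_score` is a hypothetical stand-in for the trained deep learning model, not the actual PETase predictor.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def activity_score(seq):
    """Hypothetical stand-in for a learned activity model.

    A real pipeline would query a trained deep learning model here;
    this toy scorer just rewards an arbitrary residue pattern so the
    search loop has something to optimize.
    """
    return sum(1 for i, aa in enumerate(seq)
               if aa == AMINO_ACIDS[i % len(AMINO_ACIDS)])

def best_point_mutations(seq, top_k=3):
    """Score every single-residue mutant and return the top candidates."""
    candidates = []
    for i in range(len(seq)):
        for aa in AMINO_ACIDS:
            if aa != seq[i]:
                mutant = seq[:i] + aa + seq[i + 1:]
                candidates.append((activity_score(mutant), i, aa, mutant))
    candidates.sort(reverse=True)
    return candidates[:top_k]

random.seed(0)
wild_type = "".join(random.choice(AMINO_ACIDS) for _ in range(12))
for score, pos, aa, _ in best_point_mutations(wild_type):
    print(f"mutate position {pos} to {aa}: score {score}")
```

The real work replaces the toy scorer with a model trained on enzyme data and validates top candidates experimentally; the exhaustive scan above is what the HPC resources make feasible at protein scale.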

Editors’ Choice: Determining protein structures is critical to many branches of life sciences research. AlphaFold, an AI system developed by Google’s DeepMind, predicts a protein’s 3D structure from its amino acid sequence alone, regularly achieving accuracy competitive with experiments. Since being introduced one year ago, AlphaFold has computationally determined more than 200 million protein structures, covering nearly every protein known. DeepMind and EMBL’s European Bioinformatics Institute (EMBL-EBI) have partnered to create AlphaFold DB to make these predictions freely available to the scientific community. About 500,000 researchers have used the tool since its introduction.

Best Use of HPC in Physical Science

Readers’ Choice: The Event Horizon Telescope Collaboration teamed with the Texas Advanced Computing Center (TACC) to process an enormous amount of data to produce the first image of Sagittarius A*, the supermassive black hole at the center of the Milky Way. The image was made possible through a data-driven approach to astronomy that combines observations from eight radio telescopes around the world to form an Earth-scale interferometer, the Event Horizon Telescope (EHT), built specifically to probe black holes. The data analysis work was completed on TACC’s Frontera supercomputer, a Dell-Intel system.

Editors’ Choice (TIE): NCAR has developed a GPU-based model, FastEddy, that can run weather forecasts at a resolution of just 5 meters (16 feet). FastEddy is a new resident-GPU microscale large-eddy simulation (LES) model coupled with the Weather Research and Forecasting (WRF) model. Developed and trained primarily on NCAR’s Casper system, which houses 64 Nvidia V100 GPUs, FastEddy can provide real-time weather hazard avoidance at the microscale. FastEddy allows scientists to predict how weather and buildings in an urban environment affect drones and other small aerial vehicles. Additional development efforts aim to scale execution across up to 12,288 Nvidia V100 GPUs of the DOE’s Oak Ridge Leadership Computing Facility’s Summit architecture.

Purdue University researchers’ simulations with XSEDE systems at the Pittsburgh Supercomputing Center, San Diego Supercomputer Center, and Texas Advanced Computing Center reproduced sound waves to manage heat and stress in fluid flow. In a two-phase simulation, scientists used Bridges-2 at PSC (built by HPE) and then Comet at SDSC and Stampede2 at TACC (both Dell systems) to build and then run, respectively, a massive simulation showing how sound waves can be used to control and tune friction between the fluid and the walls of a container and transfer heat. The work holds promise in improving efficiency and reducing stress in power plants, electronics, and off-shore structures, lengthening service life and thus reducing production costs.

Best HPC Response To Societal Plight (Urgent Computing, Covid-19)

Readers’ Choice: When Paxlovid was still being vetted, a team of researchers from the University of Valencia applied supercomputing power to determine how, exactly, the drug inhibits SARS-CoV-2. Paxlovid works by binding to SARS-CoV-2’s 3CL protease (3CLpro), an enzyme that serves as a crucial piece of the virus’ replication process. The researchers used a computational hybrid methodology that combines classical molecular dynamics with quantum mechanics. They ran those simulations on the MareNostrum 4 supercomputer (built by Lenovo leveraging Intel technologies) at the Barcelona Supercomputing Center. The simulations illuminated exactly why Paxlovid is so effective in debilitating SARS-CoV-2. And the researchers were able to produce a visualization of Paxlovid inhibiting the virus.

Editors’ Choice (TIE): Researchers from the University of California Riverside (UCR), using supercomputing power at the San Diego Supercomputer Center, investigated methods of removing nonbiodegradable “forever chemicals” like perfluoroalkyl and polyfluoroalkyl substances (collectively, PFASs) from drinking water – one of their primary methods of ingress to the human body. The researchers used simulations to explore the ability of light to degrade these chemicals. To run the simulations, they used the San Diego Supercomputer Center’s Comet supercomputer, a Dell-built system with 2.76 petaflops of peak performance. They found that bombarding virtual PFASs with virtual light dissolved the PFASs into virtual water molecules. Understanding the process at a quantum-mechanical level will help design ways to treat PFASs in the future.

Researchers at the San Diego Supercomputer Center and UC San Diego, in partnership with San Diego County’s Health and Human Services Agency (HHSA), developed computational tools that help the county plan for Covid-safe school operations. At the heart of the effort is the Geographically assisted Agent-based model for COVID-19 Transmission (GeoACT), which was designed for use on the center’s Comet and Expanse supercomputers (both built by Dell). The simulations run by the group using the model allow researchers to pinpoint areas in schools that would present higher COVID-19 transmission risks and evaluate the relative importance of non-pharmaceutical interventions, such as wearing masks, reducing class sizes, or moving lunch from the cafeteria into classrooms.
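
To give a flavor of what an agent-based transmission model does (a toy sketch only, loosely inspired by but in no way reproducing GeoACT's actual algorithm), the simulation below seeds one infected student in a class and compares outcomes with and without a mask intervention that scales the per-contact transmission probability:

```python
import random

def simulate_classroom(n_students, p_transmit, mask_factor, days, seed=0):
    """Toy agent-based model: one infected student seeds the class, and
    each day every susceptible student faces one exposure per currently
    infected classmate. mask_factor scales per-contact transmission."""
    rng = random.Random(seed)
    infected = [False] * n_students
    infected[0] = True
    p = p_transmit * mask_factor
    for _ in range(days):
        n_infected = sum(infected)
        newly = [i for i, inf in enumerate(infected)
                 if not inf and any(rng.random() < p for _ in range(n_infected))]
        for i in newly:          # synchronous update at end of each day
            infected[i] = True
    return sum(infected)

no_masks = simulate_classroom(30, 0.03, mask_factor=1.0, days=10)
masks = simulate_classroom(30, 0.03, mask_factor=0.3, days=10)
print(f"infected after 10 days: {no_masks} without masks, {masks} with masks")
```

Models like GeoACT layer on geography, classroom layouts, schedules, and many more intervention types, then run thousands of such scenarios, which is why supercomputers like Comet and Expanse are needed.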

Best Use of HPC in Energy

Readers’ Choice: A first-year graduate student in chemistry at UC San Diego, working with an international team of researchers, used the Comet system at the San Diego Supercomputer Center to explore methane storage applications. They validated the use of computational chemistry as a tool to design new porous carbon materials for methane storage applications – a key bridging technology to reduced-carbon or carbon-free chemical fuels for vehicles.

Editors’ Choice: Lawrence Livermore National Laboratory researchers applied cognitive simulation, an approach that combines HPC and machine learning, to 100 million inertial confinement fusion (ICF) simulations. Making use of large-scale HPC machines, including the lab’s IBM-built Lassen system and a deep learning tool called Merlin, the work allowed scientists to enhance the predictive models of ICF implosions and improve the way experiments are performed at the National Ignition Facility and other laser facilities.

Best Use of HPC in Industry (Automotive, Aerospace, Manufacturing, Chemical, Etc.)

Readers’ Choice: Argonne National Laboratory and the Raytheon Technologies Research Center developed physics-guided machine learning surrogate models to enable accelerated simulation-driven design and optimization of high-efficiency gas turbine thermal management systems. The HPC-based framework coupled machine learning surrogate models with computational fluid dynamics (CFD) solvers. The work used Argonne National Laboratory’s massively parallel high-order spectral element CFD solver Nek5000 and machine learning frameworks and software tools. The integrated simulation-AI framework developed in this collaborative effort can help extend the fuel efficiency and durability limits of next-generation aircraft engines while slashing design times and costs.

Editors’ Choice: Aramco Americas, Argonne National Laboratory, and Convergent Science used HPC-accelerated computational fluid dynamics simulations to help develop zero-carbon, ultra-high-efficiency hydrogen propulsion systems. Pressure to decarbonize the transport sector motivated the research, which developed a high-fidelity, HPC-enabled analysis-led design (ALD) process to accelerate the advancement of such systems. Developed using the Argonne Leadership Computing Facility (ALCF), the ALD process will expedite the adoption of clean, highly efficient hydrogen propulsion systems, enabling an accelerated transition to a low-carbon energy system.

Best Use of High Performance Data Analytics & Artificial Intelligence

Readers’ Choice: Scientists from Argonne National Laboratory, the University of Chicago, the National Center for Supercomputing Applications, and the University of Illinois at Urbana-Champaign introduced a novel set of practical, concise, and measurable FAIR (Findable, Accessible, Interoperable, Reusable) principles for AI models. They showcased their approach with a domain-agnostic computational framework that brings together the Advanced Photon Source at Argonne, the Materials Data Facility, the Data and Learning Hub for Science, funcX, Globus, the ThetaGPU supercomputer, and the SambaNova DataScale system at the Argonne Leadership Computing Facility. They combined Nvidia A100 GPUs, Nvidia TensorRT, Docker, Apptainer (formerly Singularity), and the SambaNova DataScale system to demonstrate the use of AI surrogates to enable accelerated, FAIR, AI-driven discovery for high-energy diffraction microscopy.

Editors’ Choice: Researchers at Carnegie Mellon University and the University of North Carolina at Chapel Hill are using XSEDE-allocated Bridges-2 at the Pittsburgh Supercomputing Center and Frontera at the Texas Advanced Computing Center – built by HPE and Dell respectively – to develop machine-learning-driven robotic production of MRI contrast agents. The CMU team created an “artificial chemist” that mimicked the expertise of human chemists, which in turn directed a robotic lab instrument at UNC to synthesize improved contrast agents for medical MRI without human supervision. The algorithm narrowed a potential 50,000 polymers to a short list that, in laboratory tests, performed as much as 50 percent better than current MRI contrast agents, offering a path toward improving the sensitivity and specificity of MRI images.

Best Use Of HPC in Financial Services

Readers’ Choice: BNP Paribas teamed with atNorth and Dell Technologies to responsibly expand and future-proof its HPC infrastructure by moving some of its HPC server farms to atNorth’s site in Iceland. The cooler climate and abundance of renewable energy, combined with the more power-efficient HPC systems, helped BNP reduce its energy usage by 50%, decrease its CO2 output by 85%, and transition the bank to using 100% renewable energy.

Editors’ Choice: Riskfuel, a startup developing fast derivatives models based on artificial intelligence, is using Microsoft Azure GPU-based virtual machine instances to accelerate its workloads. Running its Riskfuel-accelerated model on an Azure ND40rs_v2 (NDv2-Series) virtual machine instance, the company measured a more than 28,000,000x improvement over traditional CPU-driven methods.
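
The idea behind such speedups is to replace a slow numerical pricer with a cheap learned approximation that is fit once and then queried millions of times. The sketch below uses linear interpolation over a precomputed table as a stand-in for Riskfuel's neural networks (which are not public); the Monte Carlo pricer and all parameters are illustrative, not Riskfuel's actual models:

```python
import bisect
import math
import random
import time

def mc_price(spot, strike, vol, rate, t, n_paths=10000, seed=0):
    """Slow reference model: Monte Carlo price of a European call."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        s_t = spot * math.exp((rate - 0.5 * vol * vol) * t + vol * math.sqrt(t) * z)
        total += max(s_t - strike, 0.0)
    return math.exp(-rate * t) * total / n_paths

# "Train" the surrogate: tabulate the slow model on a grid of spot prices,
# then answer queries by interpolation. A production system would fit a
# neural network instead; the principle (train once, query fast) is the same.
grid = [60.0 + 2.0 * i for i in range(41)]            # spots 60..140
table = [mc_price(s, 100.0, 0.2, 0.01, 1.0) for s in grid]

def surrogate_price(spot):
    i = min(max(bisect.bisect_left(grid, spot), 1), len(grid) - 1)
    w = (spot - grid[i - 1]) / (grid[i] - grid[i - 1])
    return table[i - 1] * (1.0 - w) + table[i] * w

t0 = time.perf_counter()
slow = mc_price(103.7, 100.0, 0.2, 0.01, 1.0)
t_slow = time.perf_counter() - t0

t0 = time.perf_counter()
fast = surrogate_price(103.7)
t_fast = time.perf_counter() - t0

print(f"MC: {slow:.3f} ({t_slow * 1e3:.1f} ms); surrogate: {fast:.3f} ({t_fast * 1e6:.1f} us)")
```

Even this crude table lookup answers in microseconds what the Monte Carlo loop takes milliseconds to compute; a GPU-hosted neural surrogate batching millions of queries is how the much larger quoted speedups become plausible.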

Best AI Product or Technology

Readers’ Choice: Nvidia A100 GPU

Editors’ Choice: Cerebras Systems CS-2 Artificial Intelligence system

Best Use of HPC in the Cloud (Use Case)

Readers’ Choice: The IceCube Neutrino Observatory is a gigaton-scale neutrino detector located at the South Pole. In a collaboration between the San Diego Supercomputer Center and the Wisconsin IceCube Particle Astrophysics Center at the University of Wisconsin–Madison, the team used Google Cloud and GPU sharing with Nvidia GPUs in Google Kubernetes Engine to expand the Open Science Grid and help detect neutrinos at the South Pole. GPU sharing increased job throughput by about 40%.

Editors’ Choice: By migrating its EDA workloads to AWS, Arm can now run more than 53 million jobs per week (up to 9 million jobs per day) across more than 25,800 Amazon EC2 instances, optimizing compute costs, increasing engineering productivity, and accelerating speed to market for its users.

Best HPC Cloud Platform

Readers’ Choice: Amazon Web Services (AWS)

Editors’ Choice: Google Cloud

Best HPC Server Product or Technology

Readers’ Choice: Nvidia H100 GPU

Editors’ Choice: AMD 3rd generation Epyc “Milan-X” processors with AMD 3D V-Cache technology

Best HPC Storage Product or Technology

Readers’ Choice: BeeGFS

Editors’ Choice (TIE):

VAST Data’s Universal Storage Data Platform

DDN EXAScaler 6

Best HPC Programming Tool or Technology

Readers’ Choice: Python

Editors’ Choice: Open source oneAPI

Best HPC Interconnect Product or Technology

Readers’ Choice: Nvidia Quantum-2 400Gb/s InfiniBand Networking Platform

Editors’ Choice: Compute Express Link (CXL)

Best HPC Collaboration (Academia/Government/Industry)

Readers’ Choice: Scientists with the Cosmic Evolution Early Release Science (CEERS) Collaboration are analyzing data from NASA’s James Webb Space Telescope to study how some of the earliest galaxies formed. Using supercomputers at the Texas Advanced Computing Center for image processing, they have identified an object that may be one of the earliest galaxies ever observed.

Editors’ Choice (TIE): The RISC2 project, successor to the RISC project, aims to promote and improve the relationship between the research and industrial communities of Europe and Latin America, focusing on HPC applications and infrastructure deployment. Led by the Barcelona Supercomputing Center (BSC), RISC2 brings together 16 partners from 12 countries.

Bloom is a global collaborative effort to develop the world’s largest open multilingual NLP model. Bloom is the result of BigScience, a collaboration of more than 1,000 researchers from academia, startups, large companies, and HPC centers, including Nvidia and Microsoft, orchestrated by the startup Hugging Face. Researchers trained the model on the HPE-built Jean Zay supercomputer, owned by GENCI and hosted at the Institute for Development and Resources in Intensive Scientific Computing (IDRIS). Having such models open and trained on public research infrastructure is very important, and in some fields a matter of sovereignty.

Best Sustainability Innovation in HPC

Readers’ Choice: The Frontier supercomputer, a collaboration of HPE, AMD, DOE, and ORNL, is both the fastest-ranked supercomputer in the world (at 1.102 Linpack exaflops) and the second-most energy-efficient (at 52.23 gigaflops per watt). It is outranked on the Green500 list only by its own test and development system (Frontier-TDS), which achieved first place with 62.68 gigaflops per watt. These achievements show that energy efficiency does not have to come at the cost of best-in-class performance.
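
As a quick sanity check, the two quoted figures together imply Frontier's power draw during the Linpack run:

```python
# Implied power draw from the quoted Top500/Green500 figures.
linpack_flops = 1.102e18   # 1.102 Linpack exaflops
efficiency = 52.23e9       # 52.23 gigaflops per watt

power_watts = linpack_flops / efficiency
print(f"Implied power draw: {power_watts / 1e6:.1f} MW")   # about 21.1 MW
```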

Editors’ Choice: HPE and AMD top four: One recent trend is that many of the most powerful supercomputers in the world are also among the most energy efficient. To that end, HPE and AMD supplied the top four most energy-efficient systems in the world (DOE/ORNL’s Frontier-TDS and Frontier; EuroHPC/CSC’s LUMI; and GENCI’s Adastra), with three of those systems also ranking in the top ten most powerful supercomputers.

Top Supercomputing Achievement

Readers’ and Editors’ Choice: Oak Ridge National Laboratory (ORNL) Frontier Supercomputer: A collaboration of HPE, AMD, DOE, and ORNL resulted in the first verified exascale supercomputer. According to the Top500 list, Frontier is more powerful than the next seven of the world’s largest supercomputers combined. It will allow scientists to solve problems that are 8X more complex up to 10X faster, speeding up discoveries and putting more power behind those working to solve the world’s toughest challenges. At exascale speed, Frontier’s users can develop AI models that are 4.5X faster and 8X larger, allowing them to train on more data, increasing predictability and speeding time-to-discovery.

Top 5 New Products or Technologies To Watch

Readers’ Choice:

Nvidia H100 GPU

Nvidia NDR 400Gb/s InfiniBand

Rocky Linux

Intel 4th generation Xeon processors, codenamed Sapphire Rapids

AMD Instinct MI200 series accelerators

Editors’ Choice:

Nvidia Grace Superchip CPU

AMD 4th generation Epyc “Genoa” processors

Cornelis Networks Omni-Path Express

Google Cloud HPC Toolkit

AWS Graviton3 Arm-based processors

Top 5 Vendors To Watch

Readers’ Choice:





Amazon Web Services (AWS)

Editors’ Choice:






Workforce Diversity & Inclusion Leadership Award

Readers’ Choice (TIE):

Through collaboration and networking, Women in HPC (WHPC) strives to bring together women in HPC and technical computing, encouraging them to engage in outreach activities and improve the visibility of inspirational role models.

STEM-Trek is a global, grassroots nonprofit organization that promotes research computing and data science (RCD) workforce development by supporting scholars from regions and demographics that are underrepresented in their use of performance technologies to attend conferences and workshops. The infinity sign in the STEM-Trek logo represents the symbiotic relationship between early-, mid- and late-career professionals in a range of RCD specializations. Beneficiaries are encouraged to pay it forward by mentoring, hosting workshops, blogging for the STEM-Trek website, and more.

Editors’ Choice (TIE): The Centre for High Performance Computing (CHPC), one of three primary pillars of the national cyber-infrastructure intervention supported by the South African Department of Science and Innovation (DSI), is transforming the face of HPC through its work in bringing HPC to a wider South African audience through its HPC Ecosystems project.

The Center for Artificial Intelligence Innovation at the National Center for Supercomputing Applications’ “Future of Discovery: Training Students to Build and Apply Open Source Machine Learning Models and Tools” is an NSF-funded summer REU program that provides a 10-week training experience at the University of Illinois Urbana-Champaign. The REU-FoDOMMat program works with minority-serving institutions, such as Historically Black Colleges and Universities and Tribal Colleges and Universities, and other academic institutions that don’t have access to advanced digital resources to provide undergraduates with research opportunities and hands-on experience in machine learning and open-source software methodologies. In addition, the program covers students’ housing, food, and travel expenses and provides a weekly stipend. Eleven students, including five female participants, took part in the program in Summer 2022, receiving valuable training in developing AI models with PyTorch on HAL (an NCSA cluster) and gaining experience in developing models and open-source software while working on faculty-led projects.

Outstanding Leadership in HPC

Readers’ Choice (TIE):

Dan Stanzione is the Executive Director of the Texas Advanced Computing Center (TACC) and the Associate Vice President for Research at the University of Texas at Austin. As the leader of a world-class academic supercomputing center, Stanzione has led with integrity, an unequaled analytical mind, and a clear vision of where HPC and computational science are heading. TACC operates one of the world’s largest academic supercomputers, Frontera, which serves 5,000 users at 519 institutions representing every state in the country. TACC is also planning a future system, Horizon, for the forthcoming Leadership-Class Computing Facility.

John Towns (UIUC/NCSA) is the Principal Investigator of the National Science Foundation’s XSEDE program, which recently concluded after 11 successful years of operations. Throughout that period, Towns oversaw an enterprise that not only enabled countless scientific discoveries across an array of domains but also developed workforces and created inclusive communities, including those that have been historically underserved. Towns’ dedication and leadership resulted in him and XSEDE being tapped by the White House Office of Science and Technology Policy to manage the scientific review process for the COVID-19 HPC Consortium, allowing researchers to explore antibody treatments, vaccines, RNA virus treatment, identification techniques, how the virus travels, and more.

Editors’ Choice (TIE):

Barbara “Barb” Helland is the Associate Director of the Advanced Scientific Computing Research (ASCR) program in the U.S. Department of Energy’s Office of Science, and she also leads the Department’s Exascale Computing Initiative. Under her leadership, the DOE stood up the United States’ first exascale supercomputer, Frontier, in May 2022. Helland previously served as director of ASCR’s Facilities Division, and program manager for ASCR’s Argonne and Oak Ridge Leadership Computing Facilities and the National Energy Research Scientific Computing Center.

Susan K. Gregurick is Associate Director for Data Science and Director of the Office of Data Science Strategy (ODSS) at the National Institutes of Health (NIH). Under her leadership, ODSS leads the implementation of the NIH Strategic Plan for Data Science through scientific, technical, and operational collaboration with the institutes, centers, and offices that comprise NIH. Among other notable accomplishments, Dr. Gregurick was instrumental in guiding NIH in its data strategy during the global collection and analysis of COVID-19 data throughout the pandemic, including the utilization of multiple HPC resources (both on-prem and cloud). Dr. Gregurick received the 2020 Leadership in Biological Sciences Award from the Washington Academy of Sciences for her work in this role.

More information on these awards can be found at the HPCwire website at https://www.hpcwire.com/2022-hpcwire-awards-readers-editors-choice and on Twitter through the hashtag: #HPCwireAwards.

About HPCwire

HPCwire is the #1 news and information resource covering the fastest computers in the world and the people who run them. With a legacy of world-class editorial and journalism dating back to 1987, HPCwire is the news source of choice for science, technology and business professionals interested in high performance and data-intensive computing. Visit HPCwire at www.hpcwire.com.

About Tabor Communications Inc.

Tabor Communications Inc. (TCI) is a media and services company dedicated to high-end performance computing. As publisher of a complete advanced scale computing portfolio that includes HPCwire, Datanami, EnterpriseAI, and HPCwire Japan, TCI is the market leader in online journalism covering emerging technologies within the high-tech industry, and a services company providing events, audience insights, and other services for companies engaged in performance computing in enterprise, government, and research. More information can be found at www.taborcommunications.com.

Source: Tabor Communications, Inc.
