ANL Special Colloquium on The Future of Computing

May 19, 2022

There are, of course, a myriad of ideas regarding computing’s future. At yesterday’s Argonne National Laboratory Director’s Special Colloquium, The Future of Computing, guest speaker Sadasivan Shankar did his best to convince the audience that the high-energy cost of the current computing paradigm – not (just) economic cost; we’re talking entropy here – is fundamentally undermining computing’s progress such that... Read more…

Microsoft’s ‘Singularity’ to Enable Global Accelerator Network for AI Training

February 24, 2022

In science fiction and future studies, the word “singularity” is invoked in reference to a rapidly snowballing artificial intelligence that, repeatedly iterating on itself, eclipses all human knowledge and ability. It is this word that Microsoft—perhaps ambitiously—has invoked for its new AI project, a “globally distributed scheduling service for highly efficient and reliable execution of deep learning training and inference workloads.” Read more…

What’s New in HPC Research: Pollution, Dark Data, Human Brains & More

July 20, 2021

In this regular feature, HPCwire highlights newly published research in the high-performance computing community and related domains. From parallel programming… Read more…

Using XSEDE Allocation, Researchers Develop Neural Network to Predict DNA Methylation Sites

August 19, 2020

Through methylation, the behavior of DNA changes, but its overall structure remains the same. This process is central to many normal, essential processes, but… Read more…

Heterogeneous Computing Gets a Code Similarity Tool

July 31, 2020

A machine programming framework for heterogeneous computing championed by Intel Corp. and university partners is built around an automated engine that analyzes… Read more…

Army Seeks AI Ground Truth

April 3, 2020

Deep neural networks are being mustered by U.S. military researchers to marshal new technology forces on the Internet of Battlefield Things. U.S. Army and industry researchers said this week they have developed a “confidence metric” for assessing the reliability of AI and machine learning algorithms used in deep neural networks. The metric seeks to boost... Read more…

Micron Accelerator Bumps Up Memory Bandwidth

February 26, 2020

Deep learning accelerators based on chip architectures coupled with high-bandwidth memory are emerging to enable near real-time processing of machine learning… Read more…

ML Experts Confront Reproducibility Claims

March 13, 2019

Machine learning researchers are pushing back on the recent assertion that the AI framework is a key contributor to a reproducibility crisis in scientific research. Rick Stevens, associate laboratory director for computing, environment and life sciences at Argonne National Laboratory... Read more…

Click Here for More Headlines

Whitepaper

Penguin Computing Scyld Cloud Central™: A New Cloud-First Approach to HPC and AI Workloads

Making the Most of Today’s Cloud-First Approach to Running HPC and AI Workloads With Penguin Scyld Cloud Central™

Bursting to cloud has long been used to complement on-premises HPC capacity to meet variable compute demands. But in today’s cloud-first era, many workloads start in the cloud with little IT or corporate oversight. What is needed is a way to operationalize the use of these cloud resources so that users get the compute power they need when they need it, within constraints that account for cost and the efficient use of existing compute capacity. Download this special report to learn more about this topic.

Download Now

Sponsored by Penguin Solutions

Whitepaper

QCT POD: An Adaptive Converged Platform for HPC and AI

Data center infrastructure running AI and HPC workloads relies on powerful CPUs, GPUs, and accelerator chips to carry out compute-intensive tasks. AI and HPC processing generates excessive heat, which drives up data center power consumption and adds to data center costs.

Data centers have traditionally relied on air cooling solutions such as heatsinks and fans, which may not be able to reduce energy consumption while sustaining performance for AI and HPC workloads. Liquid-cooled systems are increasingly replacing air-cooled solutions in data centers running HPC and AI workloads to meet their heat and performance needs.

QCT worked with Intel to develop the QCT QoolRack, a rack-level direct-to-chip cooling solution that delivers impressive cooling power savings per rack over air-cooled solutions and reduces data centers’ carbon footprint through QCT QoolRack smart management.

Download Now

Sponsored by QCT
