HPCwire Unveils Editors’ Superlative Awards

February 14, 2023

Each November, HPCwire’s readers and editors recognize dozens of individuals and organizations across more than 20 very serious award categories... Read more…

TACC Adds Details to Vision for Leadership-Class Computing Facility

May 23, 2022

The Texas Advanced Computing Center (TACC) at The University of Texas at Austin has advanced to the next phase of the planning process for the Leadership-Class Computing Facility (LCCF), a process that has many approval stages and will take about four more years. If ultimately awarded for construction, the LCCF will serve as the leading facility for advanced computing for the U.S. academic open science community... Read more…

TACC Looks to ‘Horizon’ System for Its Leadership-Class Computing Facility

April 14, 2022

During a talk at the Ken Kennedy Institute’s 2022 Energy High Performance Computing Conference, Dan Stanzione, executive director of the Texas Advanced Computing Center... Read more…

Argonne Researchers Use AI-Enabled Supercomputing for COVID-19 Drug Discovery

April 24, 2020

The world’s supercomputers are engaged in an urgent scavenger hunt, poring over as many molecules as possible in the hopes of finding one that bonds to COVID-19... Read more…

New HPC-Enabled COVID-19 Model Corrects ‘Critical Statistical Flaws’ with IHME Model

April 23, 2020

The United States is waiting with bated breath to see its crucial coronavirus curves – daily cases, hospitalizations, and deaths – flatten, peak and begin to... Read more…

TACC Simulates Dangers of Low Earth Orbit to Astronauts

April 4, 2018

Much remains unknown about the hazards the space environment will present to future space travelers. Recent simulations conducted by researchers from Texas A&M... Read more…

New Approach to Computationally Designing Drugs for GPCRs

September 8, 2016

Modeling protein interactions with drugs has long been computationally challenging. One obstacle is that these interactions often take a relatively long time to occur, and conventional molecular dynamics simulation is insufficient to capture them. This week a group of researchers, using several XSEDE supercomputers, report a hybrid in silico-experimental approach that shows promise as a drug design tool for use with G protein-coupled receptors (GPCRs). Read more…

TACC Director Lays Out Details of 2nd-Gen Stampede System

June 2, 2016

With a $30 million award from the National Science Foundation announced today, the Texas Advanced Computing Center (TACC) at The University of Texas at Austin (UT Austin) will stand up a second-generation Stampede system based on Dell PowerEdge servers equipped with Intel "Knights Landing" processors, next-generation Xeon chips and future 3D XPoint memory. Read more…

  • Click Here for More Headlines

Whitepaper

How to Save 80% with TotalCAE Managed On-prem Clusters and Cloud

Many organizations looking to meet their CAE HPC requirements focus on on-premises hardware or cloud options. But many are surprised to find that the bulk of their HPC total cost of ownership (TCO) comes from the complexity of integrating HPC software with CAE applications and orchestrating the many technologies needed to use the hardware and CAE licenses optimally.

This white paper discusses how TotalCAE can significantly reduce TCO with turnkey on-premises HPC systems and public cloud HPC solutions built specifically for CAE simulation workloads, with integrated technology and software. These fully managed solutions have allowed TotalCAE’s clients to deploy hybrid HPC environments that deliver savings of up to 80%, faster-running workflows, and peace of mind, since the entire solution is managed by professionals well-versed in HPC, cloud, and CAE technologies.

Download Now

Sponsored by TotalCAE

Whitepaper

Streamlining AI Data Management

Five Recommendations to Optimize Data Pipelines

When building AI systems at scale, managing the flow of data can make or break a business. The various stages of the AI data pipeline pose unique challenges that can disrupt or misdirect the flow of data, ultimately impacting the effectiveness of AI storage and systems.

With so many applications and diverse requirements for data types, management systems, workloads, and compliance regulations, these challenges are only amplified. Without a clear, continuous flow of data throughout the AI data lifecycle, AI models can perform poorly or even dangerously.

To ensure your AI systems are optimized, follow these five essential steps to eliminate bottlenecks and maximize efficiency.

Download Now

Sponsored by DDN
