DARPA Looks to Propel Parallelism

September 4, 2019

As Moore’s law runs out of steam, new programming approaches are being pursued with the goal of greater hardware performance with less coding. The Defense Advanced Research Projects Agency is launching a new programming effort aimed at leveraging the benefits of massive distributed parallelism with less sweat. Read more…

‘Next Generation’ Universe Simulation Is Most Advanced Yet

February 5, 2018

The research group that gave us the most detailed time-lapse simulation of the universe’s evolution in 2014, spanning 13.8 billion years of cosmic evolution, is back in the spotlight with an even more advanced cosmological model that is providing new insights into how black holes influence the distribution of dark matter, how heavy elements are produced and distributed, and where magnetic fields originate. Read more…

Researchers Recreate ‘El Reno’ Tornado on Blue Waters Supercomputer

March 16, 2017

The United States experiences more tornadoes than any other country. About 1,200 tornadoes touch down each year in the U.S. with most occurring during torn Read more…

Simulating Combustion at Exascale: a Q&A with ISC Keynoter Jacqueline Chen

March 14, 2016

At the 2016 ISC High Performance conference this June, distinguished Sandia computational combustion scientist Jacqueline H. Chen will deliver a keynote highlighting the latest advances in combustion modeling and simulation. In this Q&A, Chen describes the challenges and opportunities involved in preparing combustion codes for exascale machines. Read more…

U of Michigan Project Combines Modeling and Machine Learning

September 10, 2015

Although we've yet to settle on a term for it, the convergence of HPC and a new generation of big data technologies is set to transform science. The compute-pl Read more…

Argonne Team Tackles Uncertainties in Engine Simulation

August 27, 2015

As we head deeper into the digital age, computers appropriate an ever greater share of the work of designing and testing physical systems, spanning the gamu Read more…

Digital Prototyping a Mercedes

July 14, 2015

ISC 2015’s emphasis on HPC use in industry was reflected in the choice of Monday’s opening keynote speaker, Jürgen Kohler, senior manager, NVH (noise, vibr Read more…

Computer Model Addresses Fate of Missing Malaysian Airlines Flight

June 10, 2015

Forensic reconstruction from an interdisciplinary research team offers new insight into the tragic disappearance of Malaysia Airlines Flight MH370 on March 8, Read more…


Whitepaper

A New Standard in CAE Solutions for Manufacturing

Today, manufacturers of all sizes face many challenges. Not only do they need to deliver complex products quickly, they must do so with limited resources while continuously innovating and improving product quality. With the use of computer-aided engineering (CAE), engineers can design and test ideas for new products without having to physically build many expensive prototypes. This helps lower costs, enhance productivity, improve quality, and reduce time to market.

As the scale and scope of CAE grows, manufacturers need reliable partners with deep HPC and manufacturing expertise. Together with AMD, HPE provides a comprehensive portfolio of high performance systems and software, high value services, and an outstanding ecosystem of performance optimized CAE applications to help manufacturing customers reduce costs and improve quality, productivity, and time to market.

Read this whitepaper to learn how HPE and AMD set a new standard in CAE solutions for manufacturing and can help your organization optimize performance.

Download Now

Sponsored by HPE

Whitepaper

Porting CUDA Applications to Run on AMD GPUs

Organizations face many challenges when building a system capable of running HPC and AI workloads, along with considerable complexity in system design and integration. Building a workload-driven solution requires expertise and domain knowledge that in-house staff may not possess.

This paper describes how Quanta Cloud Technology (QCT), a long-time Intel® partner, developed the Taiwania 2 and Taiwania 3 supercomputers to meet the research needs of Taiwan’s academic, industrial, and enterprise users. The Taiwan National Center for High-Performance Computing (NCHC) selected QCT for its expertise in building HPC/AI supercomputers and providing worldwide end-to-end support, from system design through integration, benchmarking, and installation, to ensure customer success for end users and system integrators.

Download Now

Sponsored by AMD
