Nvidia Bolsters Omniverse for HPC, Announces NOAA-Lockheed Partnership

November 14, 2022

Over the past several months, Nvidia has put a spotlight on its OVX hardware – purpose-built systems aimed at its Omniverse digital twins platform. Now, at SC22, Nvidia… Read more…

Coursera Offers HPC Techniques to Scientific Computing

April 10, 2013

Randall J. LeVeque, Professor of Applied Mathematics at the University of Washington in Seattle, will teach a free course that brings the principles of parallel programming on high-performance computers to the scientific computing community. Read more…
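To give a flavor of the kind of parallelism such a course covers, here is a minimal Python sketch (not course material; the function and inputs are invented for illustration) that spreads an embarrassingly parallel computation across CPU cores:

    # Minimal sketch of embarrassingly parallel scientific computing:
    # evaluate an expensive function over many inputs across CPU cores.
    from multiprocessing import Pool
    import math

    def simulate(x):
        # Stand-in for an expensive per-point computation.
        return sum(math.sin(x * k) / (k + 1) for k in range(100_000))

    if __name__ == "__main__":
        inputs = [i * 0.01 for i in range(1_000)]
        with Pool() as pool:          # one worker process per available core
            results = pool.map(simulate, inputs)
        print(f"computed {len(results)} points")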

When Time Is of the Egress: Optimizing Your Transfers

July 31, 2012

Running scientific workloads in AWS traditionally provides a diverse toolkit that allows researchers to easily sling data across different time zones and regions, or even globally, once the data is inside the infrastructure sandbox. However, getting data into and out of AWS has historically been more of a challenge. Cycle Computing's Andrew Kaczorek and Dan Harris offer some helpful tips on optimizing ingress and egress transfers. Read more…
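As a rough, present-day illustration of the general technique behind such tips, parallelizing a large transfer into S3 with boto3's transfer manager might look like the sketch below; the bucket, key, file name, and tuning values are placeholders, not details from the article.

    # Sketch: parallel multipart upload to S3 via boto3's transfer manager.
    # Bucket, key, file path, and tuning values are illustrative placeholders.
    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client("s3")

    # Split large objects into 64 MB parts and upload up to 10 parts at once,
    # which usually improves throughput on high-latency ingress paths.
    config = TransferConfig(
        multipart_threshold=64 * 1024 * 1024,
        multipart_chunksize=64 * 1024 * 1024,
        max_concurrency=10,
    )

    s3.upload_file("simulation_output.h5", "my-research-bucket",
                   "runs/simulation_output.h5", Config=config)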

Software Carpentry Revisited

July 18, 2011

Software engineering is still something that gets too little attention from the technical computing community, much to the detriment of the scientists and engineers writing the applications. Greg Wilson has been on a mission to remedy that, mainly through his efforts at Software Carpentry, where he is the project lead. HPCwire asked Wilson about the progress he's seen over the last several years and what remains to be done. Read more…
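One of the basic habits Software Carpentry promotes is automated testing of scientific code; a minimal, hypothetical example of that habit (not taken from Software Carpentry's own lessons) might look like this, runnable with pytest:

    # Sketch: a small unit test of the kind Software Carpentry encourages
    # scientists to write alongside their analysis code (pytest style).
    def centered_difference(f, x, h=1e-5):
        """Approximate the derivative of f at x with a centered difference."""
        return (f(x + h) - f(x - h)) / (2 * h)

    def test_centered_difference_on_square():
        # d/dx of x**2 is 2x, so the estimate at x = 3 should be close to 6.
        assert abs(centered_difference(lambda x: x * x, 3.0) - 6.0) < 1e-6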

Cloud-Driven Tools from Microsoft Research Target Earth, Life Sciences

October 19, 2010

Last week, at its eScience Workshop at the University of California, Berkeley, Microsoft Research announced two key technical advances related to its Azure cloud. The tools are already serving researchers in ecology and biology, and they further demonstrate the potential of the company's cloud offering for scientific computing projects. Read more…

Amazon Adds HPC Capability to EC2

July 13, 2010

The announcement this morning that Amazon is offering Cluster Compute Instances for EC2, aimed specifically at the needs of HPC users, might just be the long-awaited game-changer for the viability of scientific computing in the public cloud. The offering is fresh out of a private beta and the early results are promising, but only time will tell to what degree users will snatch up this opportunity to have supercomputing power on demand. Read more…
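For a sense of what requesting this capability looks like programmatically today, here is a hedged sketch using the current boto3 SDK (which postdates the 2010 announcement); the AMI ID, instance count, and placement-group name are placeholders.

    # Sketch: requesting EC2 Cluster Compute instances inside a placement
    # group, the feature that gives them their low-latency interconnect.
    # AMI ID, instance count, and group name are illustrative placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

    response = ec2.run_instances(
        ImageId="ami-xxxxxxxx",        # placeholder HVM AMI
        InstanceType="cc1.4xlarge",    # the launch-era Cluster Compute type
        MinCount=8,
        MaxCount=8,
        Placement={"GroupName": "hpc-cluster"},
    )
    print([i["InstanceId"] for i in response["Instances"]])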

Supernova Factory Employs EC2, Puts Cloud to the Test

July 9, 2010

Researchers from Berkeley Lab are examining the options available to scientific computing users for moving beyond physical infrastructure, including the possibility of using public clouds. A recently published study of how Amazon EC2 handled data from the Nearby Supernova Factory sheds light on putting large-scale scientific computing into the cloud, in practice and in theory. Read more…

Will Public Clouds Ever Be Suitable for HPC?

June 27, 2010

Since the primary consideration in HPC is performance, it stands to reason that it's no easy task to convince the scientific computing community that the public cloud is a viable option. Accordingly, a handful of traditional HPC vendors are refining their solutions to bridge the cloud performance chasm that exists in EC2, making the cloud more hospitable for HPC. Read more…

Click Here for More Headlines

Whitepaper

A New Standard in CAE Solutions for Manufacturing

Today, manufacturers of all sizes face many challenges. Not only do they need to deliver complex products quickly, but they must also do so with limited resources while continuously innovating and improving product quality. With computer-aided engineering (CAE), engineers can design and test ideas for new products without having to physically build many expensive prototypes. This helps lower costs, enhance productivity, improve quality, and reduce time to market.

As the scale and scope of CAE grows, manufacturers need reliable partners with deep HPC and manufacturing expertise. Together with AMD, HPE provides a comprehensive portfolio of high performance systems and software, high value services, and an outstanding ecosystem of performance optimized CAE applications to help manufacturing customers reduce costs and improve quality, productivity, and time to market.

Read this whitepaper to learn how HPE and AMD set a new standard in CAE solutions for manufacturing and can help your organization optimize performance.

Download Now

Sponsored by HPE

Whitepaper

Porting CUDA Applications to Run on AMD GPUs

A workload-driven system capable of running HPC and AI workloads is more important than ever, yet organizations face many challenges when building one, along with considerable complexity in system design and integration. Building a workload-driven solution requires expertise and domain knowledge that in-house staff may not possess.

This paper describes how Quanta Cloud Technology (QCT), a long-time Intel® partner, developed the Taiwania 2 and Taiwania 3 supercomputers to meet the research needs of Taiwan's academic, industrial, and enterprise users. Taiwan's National Center for High-Performance Computing (NCHC) selected QCT for its expertise in building HPC/AI supercomputers and for its worldwide end-to-end support, from system design through integration, benchmarking, and installation, ensuring success for end users and system integrators.

Download Now

Sponsored by AMD
