Networking, Data Experts Design a Better Portal for Scientific Discovery

January 29, 2018

These days, it’s easy to overlook the fact that the World Wide Web was created nearly 30 years ago primarily to help researchers access and share scientific data. Over the years, the web has evolved into a tool that helps us eat, shop, travel, watch movies and even monitor our homes. Read more…

GlobalFoundries, Ayar Labs Team Up to Commercialize Optical I/O

December 4, 2017

GlobalFoundries (GF) and Ayar Labs, a startup focused on using light, instead of electricity, to transfer data between chips, today announced they've entered in Read more…

Profile of a Data Science Pioneer

June 28, 2016

As he approaches retirement, Reagan Moore reflects on SRB, iRODS, and the ongoing challenge of helping scientists manage their data. In 1994, Reagan Moore managed the production computing systems at the San Diego Supercomputer Center (SDSC), a job that entailed running and maintaining huge Cray computing systems as well as networking, archival storage, security, job scheduling, and visualization systems. At the time, research was evolving from analyses done by individuals on single computers into a collaborative activity using distributed, interconnected and heterogeneous resources. Read more…

ISC Session Preview: I/O in the Post-Petascale Era

June 23, 2015

Improving data communication performance in HPC has turned out to be one of the most difficult challenges for system designers. As a result, the topic is gettin Read more…

Paving the Way for Accelerated Data Sharing: An Interview with Francine Berman

February 27, 2015

How can we create more effective treatments for Alzheimer’s? Can we increase food security across the globe? Is there a way to more accurately predict natural Read more…

The Promise of Data-Centric Computing

November 10, 2014

Sick of big data? Not so fast. The age of the data-centric system has just begun. Tackling this subject in a recent blog is Tilak Agerwala, Vice President of Da Read more…

SDSC Launches Workflows for Data Science Center

July 10, 2014

The San Diego Supercomputer Center (SDSC) at the University of California, San Diego, has created a new "center of excellence" focused on helping researchers mo Read more…

TACC Spurs Data-Intensive Science with Corral

October 11, 2013

Corral, the DataDirect Networks storage system installed at the Texas Advanced Computing Center (TACC), recently crossed the one petabyte mark in total data stored, and it now hosts over 100 unique data collections. Read more…

Whitepaper

A New Standard in CAE Solutions for Manufacturing

Today, manufacturers of all sizes face many challenges. Not only do they need to deliver complex products quickly, they must do so with limited resources while continuously innovating and improving product quality. With the use of computer-aided engineering (CAE), engineers can design and test ideas for new products without having to physically build many expensive prototypes. This helps lower costs, enhance productivity, improve quality, and reduce time to market.

As the scale and scope of CAE grows, manufacturers need reliable partners with deep HPC and manufacturing expertise. Together with AMD, HPE provides a comprehensive portfolio of high-performance systems and software, high-value services, and an outstanding ecosystem of performance-optimized CAE applications to help manufacturing customers reduce costs and improve quality, productivity, and time to market.

Read this whitepaper to learn how HPE and AMD set a new standard in CAE solutions for manufacturing and can help your organization optimize performance.

Download Now

Sponsored by HPE

Whitepaper

Porting CUDA Applications to Run on AMD GPUs

A workload-driven system capable of running HPC and AI workloads is more important than ever, yet organizations face many challenges in building one: system design and integration are complex, and a workload-driven solution demands expertise and domain knowledge that in-house staff may not possess.

This paper describes how Quanta Cloud Technology (QCT), a long-time Intel® partner, developed the Taiwania 2 and Taiwania 3 supercomputers to meet the research needs of Taiwan's academic, industrial, and enterprise users. The Taiwan National Center for High-Performance Computing (NCHC) selected QCT for its expertise in building HPC/AI supercomputers and its worldwide end-to-end support, spanning system design, integration, benchmarking, and installation for end users and system integrators, to ensure customer success.

Download Now

Sponsored by AMD

HPCwire