March 15, 2017
They say a dog year is equivalent to about seven human years, but the average supercomputer's lifespan is even shorter due mainly to the economics of powering a Read more…
May 22, 2013
In a recent solicitation, the NSF laid out needs for furthering its scientific and engineering infrastructure with new tools that go beyond top performance. Having already delivered systems like Stampede and Blue Waters, the agency is turning an eye to solving data-intensive challenges. We spoke with the agency's Irene Qualters and Barry Schneider about… Read more…
March 27, 2013
The Intelligence Advanced Research Projects Activity (IARPA) is putting out some RFI feelers in hopes of pushing new boundaries with an HPC program. However, at the core of their evaluation process is an overt dismissal of benchmarks, including floating-point operations per second (FLOPS). Read more…
November 22, 2012
The UV 2 system can create heat maps of tweets during hurricanes and elections. Read more…
November 21, 2012
Last week at SC12 in Salt Lake City, Convey pulled the lid off its MX big data-driven architecture, designed to excel at graph analytics problems, which were at the heart of the show’s unmistakable data-intensive computing thrust this year. The new MX line is designed to exploit massive degrees of parallelism while efficiently handling hard-to-partition big data applications. Read more…
October 25, 2012
Big data is all the rage these days. It is the subject of a recent Presidential Initiative, has its own news portal, and, in the guise of Watson, is a game show celebrity. Big data has also caused concern in some circles that it might sap interest and funding from the exascale computing initiative. So, is big data distinct from HPC – or is it just a new aspect of our evolving world of high-performance computing? Read more…
April 26, 2012
To help research institutions capitalize on the growing availability of high-bandwidth networks to manage their expanding data sets, the DOE's Energy Sciences Network, known as ESnet, is working with the scientific community to encourage the use of a network design model called the “Science DMZ.” Leading the development of this effort is Eli Dart, a network engineer with previous experience at Sandia National Laboratories and the National Energy Research Scientific Computing Center. In this interview, Dart talks about the nature of the project and explains how such an architecture can help researchers. Read more…
April 24, 2012
Convey Computer has launched its newest x86-FPGA "hybrid-core" server. Dubbed the HC-2, it represents the first major upgrade of the system since the company introduced the HC-1 product back in 2008. The new offering promises much better performance at a price range similar to that of the original system. Read more…
Today, manufacturers of all sizes face many challenges. Not only must they deliver complex products quickly, but they must do so with limited resources while continuously innovating and improving product quality. With computer-aided engineering (CAE), engineers can design and test ideas for new products without having to physically build many expensive prototypes. This helps lower costs, enhance productivity, improve quality, and reduce time to market.
As the scale and scope of CAE grow, manufacturers need reliable partners with deep HPC and manufacturing expertise. Together with AMD, HPE provides a comprehensive portfolio of high-performance systems and software, high-value services, and an outstanding ecosystem of performance-optimized CAE applications to help manufacturing customers reduce costs and improve quality, productivity, and time to market.
Read this whitepaper to learn how HPE and AMD set a new standard in CAE solutions for manufacturing and can help your organization optimize performance.
A workload-driven system capable of running HPC/AI workloads is more important than ever. Organizations face many challenges when building such a system, and system design and integration add further complexity. Building a workload-driven solution requires expertise and domain knowledge that organizational staff may not possess.
This paper describes how Quanta Cloud Technology (QCT), a long-time Intel® partner, developed the Taiwania 2 and Taiwania 3 supercomputers to meet the research needs of Taiwan’s academic, industrial, and enterprise users. The Taiwan National Center for High-Performance Computing (NCHC) selected QCT for its expertise in building HPC/AI supercomputers and providing worldwide end-to-end support, from system design through integration, benchmarking, and installation, to ensure success for end users and system integrators.