November 23, 2010
Addison and Michael revisit some news items from last week's Supercomputing Conference. Read more…
November 19, 2010
Addison and Michael consider the results of the TOP500 and Green500, pick the winners and losers of SC10, and discuss the biggest news of the week. Read more…
November 19, 2010
If there was a dominating theme at the Supercomputing Conference this year, it had to be GPU computing. Read more…
November 17, 2010
Lost in the hoopla about the ascendancy of China and GPGPUs in the TOP500 is the continuing saga of the InfiniBand-Ethernet interconnect rivalry. Read more…
November 16, 2010
Although the parallel programming landscape is relatively young, it's already easy to get lost in. Besides legacy frameworks like MPI and OpenMP, we now have NVIDIA's CUDA, OpenCL, Cilk, Intel Threading Building Blocks, Microsoft's parallel programming extensions for .NET, and a whole gamut of PGAS languages. And according to Intel's Tim Mattson, that's not necessarily a good thing. Read more…
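To see why the landscape can feel crowded, consider how differently two of the models named above express the same trivial loop. The sketch below is purely illustrative and not drawn from Mattson's talk: a vector-scaling loop written as a CUDA kernel, with the equivalent OpenMP form shown in a comment.

```cuda
// Illustrative only -- the same elementwise loop in two of the models
// named above. In OpenMP the loop stays a loop on the host:
//
//     #pragma omp parallel for
//     for (int i = 0; i < n; i++)
//         y[i] = a * x[i];
//
// In CUDA the loop body becomes a device kernel and the iteration space
// becomes a grid of threads, launched with an execution configuration.
__global__ void scale(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)              // guard threads that fall past the end of the array
        y[i] = a * x[i];
}

// Launch with one thread per element, 256 threads per block:
// scale<<<(n + 255) / 256, 256>>>(n, a, d_x, d_y);
```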
November 16, 2010
NVIDIA's CUDA is easily the most popular programming language for general-purpose GPU computing. But one of the more interesting developments in the CUDA-verse doesn't really involve GPUs at all. In September, HPC compiler vendor PGI (The Portland Group Inc.) announced its intent to build a CUDA compiler for x86 platforms. The technology will be demonstrated for the first time in public at SC10 this week in New Orleans. Read more…
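For a sense of what such a compiler has to handle, here is a minimal, self-contained CUDA C program; it is an illustrative sketch, not code from PGI's announcement. An x86 target would have to map the grid of device threads and the explicit host-device memory copies onto the CPU, presumably spreading the work across cores and their vector units.

```cuda
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// SAXPY kernel: each thread handles one element of y = a*x + y.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host data.
    float *x = (float *)malloc(bytes);
    float *y = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    // Device copies. On a GPU these calls allocate device memory and move
    // data across the bus; a CUDA-for-x86 compiler must decide what, if
    // anything, they should do when host and "device" share one memory.
    float *d_x, *d_y;
    cudaMalloc((void **)&d_x, bytes);
    cudaMalloc((void **)&d_y, bytes);
    cudaMemcpy(d_x, x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, y, bytes, cudaMemcpyHostToDevice);

    // One thread per element, 256 threads per block.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);

    // Copy the result back (this also synchronizes with the kernel).
    cudaMemcpy(y, d_y, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", y[0]);   // expect 4.0

    cudaFree(d_x); cudaFree(d_y);
    free(x); free(y);
    return 0;
}
```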
November 15, 2010
Data-intensive applications are quickly emerging as a significant new class of HPC workloads. For this class of applications, a new kind of supercomputer will be required, along with a different way to assess such systems. That is the impetus behind the Graph 500, a set of benchmarks that aim to measure the suitability of systems for data-intensive analytics applications. Read more…
November 15, 2010
SGI has made good on its promise to create a petaflop-in-a-cabinet supercomputer that can scale up to tens and even hundreds of cabinets. Developed under the code name "Project Mojo," the new product has been dubbed Prism XL. SGI will be showcasing the system this week at its exhibit booth at the Supercomputing Conference in New Orleans. Read more…
November 15, 2010
Top seven supercomputers make it into the petaflop club. Read more…
November 14, 2010
Like every technology-based sector, high performance computing takes its biggest leaps by the force of disruptive innovation, a term coined by the man who will keynote this year's Supercomputing Conference (SC10) in New Orleans. Clayton M. Christensen doesn't know a whole lot about supercomputing, but he knows a great deal about the forces that drive it. Read more…
November 11, 2010
A short list of "can't miss" sessions at this year's Supercomputing Conference. Read more…
Today, manufacturers of all sizes face many challenges. Not only do they need to deliver complex products quickly, but they must also do so with limited resources while continuously innovating and improving product quality. With computer-aided engineering (CAE), engineers can design and test ideas for new products without having to physically build many expensive prototypes. This helps lower costs, enhance productivity, improve quality, and reduce time to market.
As the scale and scope of CAE grow, manufacturers need reliable partners with deep HPC and manufacturing expertise. Together with AMD, HPE provides a comprehensive portfolio of high-performance systems and software, high-value services, and an outstanding ecosystem of performance-optimized CAE applications to help manufacturing customers reduce costs and improve quality, productivity, and time to market.
Read this whitepaper to learn how HPE and AMD set a new standard in CAE solutions for manufacturing and can help your organization optimize performance.
A workload-driven system design is more important than ever for HPC and AI. Organizations face many challenges when building a system capable of running HPC and AI workloads, and system design and integration add further complexity. Building a workload-driven solution requires expertise and domain knowledge that in-house staff may not possess.
This paper describes how Quanta Cloud Technology (QCT), a long-time Intel® partner, developed the Taiwania 2 and Taiwania 3 supercomputers to meet the research needs of Taiwan's academic, industrial, and enterprise users. Taiwan's National Center for High-Performance Computing (NCHC) selected QCT for its expertise in building HPC/AI supercomputers and for its worldwide end-to-end support, from system design through integration, benchmarking, and installation, ensuring success for end users and system integrators.