November 16, 2012
NCSA's Blue Waters system is one of the fastest supercomputers in the world, but it won't be appearing on the TOP500 list, nor will it be taking part in the HPC Challenge awards. HPCwire spoke with Project Director Bill Kramer to get the full story on this important decision. Read more…
November 15, 2010
Data-intensive applications are quickly emerging as a significant new class of HPC workloads. For this class of applications, a new kind of supercomputer, and a different way to assess them, will be required. That is the impetus behind the Graph 500, a set of benchmarks that aim to measure the suitability of systems for data-intensive analytics applications. Read more…
November 4, 2010
Given the recent ascent of the GPU-powered Tianhe-1A system to the top of the supercomputing heap, a recent paper from the Department of Computer Science at the University of Warwick should be of particular interest to those in the market for a petascale supercomputer. Essentially, their study asks the question: As an organization, should I commit to a platform based on general-purpose GPUs or an IBM Blue Gene? Read more…
November 2, 2010
There is a growing feeling that merely taking the latest processor offerings from Intel, AMD or IBM will not get us to exascale within a reasonable time frame, cost budget, and power constraint. One avenue to explore is designing and building more specialized systems, aimed at the types of problems seen in HPC, or at least at the problems seen in some important subset of HPC. Of course, such a strategy loses the advantages we've enjoyed over the past two decades of commoditization in HPC; nevertheless, a more special-purpose design may be wise, or even necessary. Read more…
© HPCwire. All Rights Reserved. A Tabor Communications Publication
Reproduction in whole or in part in any form or medium without express written permission of Tabor Communications, Inc. is prohibited.