Tag: HPC

Cluster Lifecycle Management: Capacity Planning and Reporting

May 8, 2015

In the previous Cluster Lifecycle Management column, I discussed the best practices for proper care and feeding of your cluster to keep it running smoothly on a daily basis. In this column, we will look into the future and consider options for making sure your HPC system has the capacity to meet the needs of Read more…

AMD Refreshes Roadmap, Transitions Back to HPC

May 7, 2015

AMD revealed key elements of its multi-year strategy as part of its 2015 Financial Analyst Day event in New York on Wednesday. Out of the gate, CEO Lisa Su acknowledged the company’s recent challenges, pointing to a weak PC market and market share losses, before turning her attention to the game plan that AMD is Read more…

IEEE Group Seeks to Reinvent Computing as Scaling Stalls

May 6, 2015

Computer scientists worried about the end of computing as we know it have been banging heads for several years looking for ways to return to the historical exponential scaling of computer performance. What is needed, say the proponents of an initiative called “Rebooting Computing,” is nothing less than a radical rethinking of how computers are Read more…

Linux Widens HPC Goalposts

May 4, 2015

It is well known that the term “high performance computing” (HPC) originally described the use of parallel processing for running advanced application programs efficiently, reliably and quickly. The term applies especially to systems that function above a teraflop, or 10^12 floating-point operations per second, and is also often used as a synonym for supercomputing. Technically Read more…
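For a rough sense of the teraflop threshold cited above, here is a minimal sketch in Python that computes theoretical peak performance from assumed node parameters (socket count, core count, clock rate, and FLOPs per cycle are illustrative values, not tied to any system mentioned in these articles):

```python
# Rough illustration of the 10^12 FLOP/s (one teraflop) threshold.
# All node parameters below are assumptions for illustration only.

def peak_flops(sockets, cores_per_socket, clock_ghz, flops_per_cycle):
    """Theoretical peak = sockets x cores x clock (Hz) x FLOPs per core per cycle."""
    return sockets * cores_per_socket * clock_ghz * 1e9 * flops_per_cycle

# Hypothetical dual-socket node: 12 cores per socket, 2.5 GHz, 16 double-precision
# FLOPs per core per cycle (e.g., 256-bit vector units with fused multiply-add).
peak = peak_flops(sockets=2, cores_per_socket=12, clock_ghz=2.5, flops_per_cycle=16)
print(f"Theoretical peak: {peak / 1e12:.2f} TFLOP/s")  # ~0.96 TFLOP/s
```

By this kind of back-of-the-envelope arithmetic, a single modern two-socket node sits near the teraflop mark, which is why the threshold is now treated as a floor for "HPC" rather than a distinguishing feature of supercomputers.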

Why TACC’s New Data ‘Wrangler’ Is a Big Deal

Apr 30, 2015

While there’s been a lot of activity around the coming crop of “exascale-relevant” supercomputers, the HPC landscape is also shifting to become more data-aware. Perhaps no system reflects this transition better than Wrangler, the I/O-optimized open science system from Dell and EMC that debuted earlier this month at the Texas Advanced Computing Center (TACC). In a presentation Read more…

First Use of HPC to Pick Vendors in Army Procurement Program

Apr 29, 2015

Over the next 25 to 40 years, the U.S. Army plans to replace its entire fleet of vertical lift helicopters, a project costing billions of taxpayer dollars. For the first time, HPC modeling was decisive in reducing the number of hopeful competing vendors – AVX Aircraft, Bell Helicopter, Karem Aircraft, and Sikorsky/Boeing – from four Read more…

25th New Mexico Supercomputing Challenge Winners Announced

Apr 23, 2015

This week more than 200 New Mexico students and their teachers gathered at Los Alamos National Laboratory for the 25th annual New Mexico Supercomputing Challenge expo and awards ceremony. The project-based event, open to high school, middle school and elementary school students in New Mexico, is geared toward teaching a wide range of skills, Read more…

Merle Giles’ Book Dives into Global Best Practices for Industrial HPC

Apr 20, 2015

Nothing teaches like experience. A new book from co-editors Merle Giles and Dr. Anwar Osseyran, Industrial Applications of High-Performance Computing: Best Global Practices, takes readers on a lesson-filled tour of HPC deployment in industry. Giles and Osseyran provide an overview of HPC technology, an examination of HPC practices worldwide, and a compilation of HPC case Read more…

Application Readiness at the DOE, Part II: NERSC Preps for Cori

Apr 17, 2015

In our second video feature from the HPC User Forum panel, “The Who-What-When of Getting Applications Ready to Run On, And Across, Office of Science Next-Gen Leadership Computing Systems,” we learn more about the goals and challenges associated with getting science applications ready for the coming crop of Department of Energy (DOE) supercomputers, which in addition to being five-to-seven times faster than Read more…

Application Readiness at the DOE, Part I: Oak Ridge Advances Toward Summit

Apr 16, 2015

At the 56th HPC User Forum, hosted by IDC in Norfolk, Va., this week, three panelists from major government labs discussed how they are getting science applications ready for the coming crop of Department of Energy (DOE) supercomputers, which in addition to being five-to-seven times faster than today’s fastest big iron machines, constitute significant architectural changes. Titled “The Who-What-When of Getting Applications Ready to Read more…