Getting to the root of how things work has informed and advanced every aspect of scientific discovery. As computers and applications grow in complexity, seemingly poised to enter a new phase beyond the limits of Moore’s Law and CMOS technology, understanding how they work best is paramount. With new resources from a growing list of industry partners, the Center for Advanced Technology Evaluation (CENATE), a computing proving ground at Pacific Northwest National Laboratory supported by the U.S. Department of Energy’s Office of Advanced Scientific Computing Research, is rapidly expanding its capabilities to assist the high-performance computing community.
Seventy-one years ago, on July 16, 1945, an incredible explosion lit up the New Mexico night sky. This was the Trinity Test, the world’s first nuclear detonation, and it marked the beginning of the Nuclear Age. It also ushered in the age of supercomputers, which essentially began with weapons science at Los Alamos National Laboratory (LANL). Now a new Trinity, a next-generation Cray XC supercomputer, is about to take center stage to help the national security labs achieve their primary mission: to provide the nation with a safe, secure and effective nuclear deterrent.
Once again, storage system supplier DataDirect Networks has the top market share, roughly 70 percent, in the TOP500, the company said today. This is the eighth consecutive year DDN has been the top storage system supplier to the TOP500, according to Molly Rector, DDN CMO.
Capturing the sparkle, wit, and selective skewering of Thomas Sterling’s annual closing ISC keynote is challenging. This year’s talk was his 13th, a streak that says something about the engaging manner and substantive content he delivers. Like many in the room, Sterling is an HPC pioneer; he is also the director of CREST, the Center for Research in Extreme Scale Technologies, at Indiana University. In his ISC talk, Sterling holds up a mirror to the HPC world, shares what he sees, and invites all to look in and see what they may.
IDC presented its annual HPC Update at ISC yesterday. As usual, it was a whirlwind tour encompassing HPC market data, technology trends, new IDC initiatives, the announcement of the ISC16 Innovation Award recipients, and an update on IDC’s DOE-funded study to demonstrate HPC’s ROI.
Hewlett Packard Enterprise (HPE), now about eight months into its transition as a separate entity, retained the prestige of fielding the most systems of any vendor on the Top500 list announced at ISC 2016. HPE had 127 systems (25.4 percent of the list), though the number was down from 155 just six months ago.
Jack Dongarra, one of today’s most distinguished HPC leaders, is adding two awards to his long list. The Association for Computing Machinery (ACM) recently honored Dongarra with the High Performance Parallel and Distributed Computing Achievement Award at the annual High Performance Parallel and Distributed Computing (HPDC) conference in Kyoto, Japan, while the Institute of Electrical and Electronics Engineers (IEEE) will present him with the Supercomputing (SC16) Test of Time Award at the conference in November.
SC15 was something of a muted launch party for OpenHPC, the nascent effort to develop a ‘plug-and-play’ software framework for HPC. There seemed to be widespread agreement that the idea had merit, not a lot of knowledge of the details, and some wariness because Intel was a founding member and vocal advocate. Next week, ISC16 will mark the next milestone for OpenHPC, which has since grown into a full-fledged Linux Foundation Collaborative Project and today released OpenHPC version 1.0.1 (build and test tools).
The old adage “you cannot improve what you do not measure” is fresh again in the age of ubiquitous data. Among the challenges of exascale computing, power sits right at the top of the list. The major leadership-class centers want to make sure they are doing everything they can to manage the demands of power today, which can run as high as 10 MW at peak for the largest machines, and in the coming exascale era, when the number could be three times that high. At loads of this magnitude, the largest HPC facilities need to have all the relevant power data within arm’s reach.
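To put those figures in perspective, here is a minimal back-of-envelope sketch in Python using the numbers above (10 MW at peak today, roughly three times that at exascale). The electricity rate and the assumption of continuous peak-load operation are illustrative assumptions, not figures from the article.

```python
# Back-of-envelope electricity cost at leadership-class power loads.
# Assumptions (illustrative, not from the article): a flat utility rate
# and continuous operation at the stated peak load.

HOURS_PER_YEAR = 24 * 365
RATE_USD_PER_KWH = 0.06  # assumed rate; real rates vary widely by site

def annual_cost_usd(load_mw: float) -> float:
    """Annual electricity cost for a facility drawing load_mw megawatts."""
    kwh_per_year = load_mw * 1_000 * HOURS_PER_YEAR  # MW -> kW, times hours/yr
    return kwh_per_year * RATE_USD_PER_KWH

for label, mw in [("largest machines today", 10.0), ("projected exascale era", 30.0)]:
    print(f"{label}: {mw:.0f} MW ~ ${annual_cost_usd(mw):,.0f} per year")
```

Even at this assumed rate, a 10 MW machine runs to several million dollars a year in electricity, and a 30 MW exascale system roughly triples that, which is why facility-wide power telemetry matters.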
Today, the community of HPC-in-the-cloud solution providers is fairly limited. Giant hyperscalers, notably Amazon (AWS), Google (GCP) and Microsoft (Azure), remain at the core and keep expanding their HPC resources. Circling around them are HPC ‘services specialists’ plugging clients into cloud providers and striving to ease delivery of HPC in the cloud to make good on the promise of improved efficiencies and cost reductions. Last week a new HPC Software as a Service (SaaS) player popped up on the AWS Marketplace: Alces Flight, a UK-based company with roots as an HPC integrator.