June 18, 2008
On Wednesday Microsoft announced something big in a little number. NCSA put one of its systems, the 9000+ core Abe system, on the latest TOP500 list at number 23. This is the highest-ranking Windows HPC system to date, and it has important implications for Microsoft and the community. Read more…
June 18, 2008
There is no other way to characterize this year: 2008 will be remembered as "the year" -- the year that one petaflops was achieved in Linpack performance. It is a milestone that has been anticipated for almost a decade and a half, and one that was accomplished through the synthesis of two big trends that have emerged as the driving forces for HPC in the last few years -- multicore and heterogeneous computing. Read more…
June 18, 2008
There wasn't much suspense about which machine would nab the top spot on the June TOP500 list, which was released earlier today. Last week, IBM and LANL had already let everyone know that Roadrunner crossed the petaflop finish line first. Nonetheless, the new list portends some big changes ahead for supercomputing. Read more…
June 13, 2008
The 23rd annual International Supercomputing Conference (ISC) will bring together many of the world's leading experts in high performance computing this week in Dresden, Germany. HPCwire got an opportunity to ask conference chair Prof. Hans Meuer about the upcoming conference and his thoughts on the direction of supercomputing. Read more…
June 9, 2008
Petaflop. Sure, it's just a number, but it's a big number. On June 10, IBM announced that its Roadrunner supercomputer reached a record-breaking one petaflop -- a quadrillion floating point operations per second -- using the standard Linpack benchmark. It is the first general-purpose computer to reach this milestone. Read more…
© HPCwire. All Rights Reserved. A Tabor Communications Publication
Reproduction in whole or in part in any form or medium without express written permission of Tabor Communications, Inc. is prohibited.