June 3, 2010
Returning to ISC after a hiatus of several years, and viewing the event from the vantage point of an industry analyst, I found the show has made a quantum leap in the size and sophistication of the exhibit and in the degree and intensity of business activity. Read more…
June 2, 2010
Even as we gain a footing in the era of petaflops computing, we have set in motion the exploration of the undiscovered domain of exaflops computing. This year has seen the launching of multiple programs to develop the concepts, architectures, software stack, programming models, and new families of parallel algorithms necessary to enable the practical realization of exaflops capability prior to the end of this decade. Read more…
June 1, 2010
Chipmaker Intel is reviving the Larrabee technology for the HPC market, with plans to bring a manycore coprocessor to market in the next few years. During the ISC'10 opening keynote, Kirk Skaugen, vice president of Intel's Architecture Group and general manager of the Data Center Group, announced that the chipmaker is developing what it calls a "Many Integrated Core" (MIC) architecture, which will be the basis of a new line of processors aimed squarely at high performance technical computing applications. Read more…
May 31, 2010
A Chinese supercomputer called Nebulae, powered by the latest Fermi GPUs, grabbed the number two spot on the TOP500 list announced earlier today. The new machine delivered 1.27 petaflops of Linpack performance, yielding only to the 1.76 petaflop Jaguar system, which retained its number one berth. Read more…
May 28, 2010
Dr. Ashwini Nanda has been at the center of some of the most cutting-edge HPC projects and initiatives in the world. In this interview, Dr. Nanda talks about high performance computing in India, how he sees the industry today, and what led him to start up his company, HPC Links. Read more…
HPC may once have been the sole province of huge corporations and national labs, but with hardware and cloud resources becoming more affordable, even small and mid-sized companies are taking advantage.
Between the demands of the data deluge and hardware advancements in both CPUs and GPUs alike, it’s no surprise that large HPC clusters are seeing rapid growth as a part of today’s Big Data escalation.
Today’s leading organizations are dealing with larger data sets, higher volumes, disparate data sources, and the need for faster insights. Don't fall behind your competitors – discover big data made simple as we make the case for advanced-scale computing.
High performance workloads, big data, and analytics are increasingly important in finding real value in today's applications and data. Before we deploy applications and mine data for mission and business insights, we need a high-performance, rapidly scalable, resilient infrastructure foundation that can accurately, securely, and quickly access data from all relevant sources. Red Hat has technology that supports high performance workloads on a scale-out foundation, integrates multiple data sources, and can transition workloads across on-premises and cloud boundaries.
© HPCwire. All Rights Reserved. A Tabor Communications Publication
Reproduction in whole or in part in any form or medium without express written permission of Tabor Communications, Inc. is prohibited.