Tag: cluster computing
The challenges of exascale computing were the main focus of the three keynote addresses at the IEEE Cluster 2011 conference hosted in Austin, Texas last month. The speakers, renowned leaders in cluster computing, described the obstacles and opportunities involved in building systems one thousand times more powerful than today’s petascale supercomputers.
A recent question about what to do with a new cluster generated a wealth of information from HPC users.
Univa announced today that it will acquire the Sun/Oracle Grid Engine engineering expertise from Oracle Corp. In doing so, the company takes over stewardship of the popular open source workload manager, which, in the space of two years, has passed through three companies: Sun Microsystems, Oracle, and now Univa. Its new owners plan to support existing deployments of Grid Engine as well as develop a commercial version with added capabilities.
The Weekly Top Five covers the Intel-NVIDIA cross-licensing agreement, the arrival of a Cray supercomputer at Colorado State, advancements in the understanding of storage materials, the latest batch of AAAS Fellows, and UW-Madison’s new HPC cluster.
The tension between custom and commodity approaches to high performance computing has shaped both sides of the market.
Last week’s High Performance Computing Financial Markets conference in New York gave Microsoft an opening to announce the official release of Windows HPC Server 2008 R2, the software giant’s third generation HPC server platform. It also provided Microsoft a venue to spell out its technical computing strategy in more detail, a process the company began in May.
Asking a few pointed questions should help determine which type of HPC management platform is right for a particular HPC scenario.
QLogic intros new pass-through module; Voltaire debuts MPI offload technology.
Latest silicon from Intel, AMD and NVIDIA will change the workstation-cluster dynamic.
Cluster computing systems have caused disruptive changes in the HPC market. One consequence of the wide range of cluster networking requirements is that two interconnects lead in HPC: Gigabit Ethernet (GbE), based on the Ethernet networking standard, and InfiniBand, which delivers upwards of 10X the performance of GbE. Both show significant deployment in HPC.