In the previous Cluster Lifecycle Management column, I discussed best practices for choosing the right vendor to build the cluster that meets your needs. Once your team has selected a vendor and finalized the purchase of your new system, the next crucial step is deploying and validating the HPC cluster.
Numascale offers a price break for shared memory systems: a simple add-on card that integrates with commodity servers. The hardware is now deployed in systems with more than 1,700 cores, and its memory addressing capability is virtually unlimited. The technology offers a set of advantages that will catch the interest of innovative developers.
For the past few decades, the norm among large government labs, academic research facilities and top commercial sites has been to deploy one large system per site at a time. More recently, however, the growing diversity of applications and end-user requirements, combined with non-overlapping budgets and expanding technology lifecycles, has been driving a shift toward multi-cluster environments.
There’s a lot going on in the networks of HPC clusters, and selecting the right network fabric, equipment, and topology is key to ensuring good performance for a given application. A “one size fits all” approach rarely works, and architects will do well to tailor the network to the needs of the application.
The superior performance, cost-effectiveness and flexibility of open-source software have made it the predominant choice of HPC professionals. However, the complexity and associated cost of deploying and managing open-source clusters threaten to erode the very cost benefits that made it compelling in the first place.
Don’t have a super budget? You can still own a premier high-performance supercomputer with proven technology and reliability. New entry-level configurations and options enable you to strike the optimal balance of price, performance, power, and footprint for your unique and exacting requirements.
For the second time in five years, Appro has been tapped to provide the National Nuclear Security Administration with HPC capacity clusters for the agency’s Advanced Simulation and Computing and stockpile stewardship programs. The Tri-Lab Linux Capacity Cluster 2 award is a two-year contract that will have the cluster-maker delivering HPC systems across three of the Department of Energy’s national labs. The deal is worth tens of millions of dollars to Appro and represents the biggest contract in the company’s 20-year history.
This year HPC in the Cloud presented its first annual Editors’ Choice Awards in Santa Clara, California. Platform Computing was selected for one such award due to its long-standing commitment to grids, clusters and the evolution of both into cloud computing.
Supercomputing goes mainstream.
Last week’s High Performance Computing Financial Markets conference in New York gave Microsoft an opening to announce the official release of Windows HPC Server 2008 R2, the software giant’s third generation HPC server platform. It also provided Microsoft a venue to spell out its technical computing strategy in more detail, a process the company began in May.