Tag: hpc clusters
In the previous Cluster Lifecycle Management column, I discussed best practices for choosing the right vendor to build the cluster that meets your needs. Once your team has selected a vendor and finalized the purchase of your new system, the next crucial step is deploying and validating the HPC cluster.
With support from the National Science Foundation and the University of Tennessee, Knoxville, the National Institute for Computational Sciences (NICS) is expanding access to Beacon, its newest HPC cluster, providing researchers with a powerful research tool. Efforts are underway to optimize a number of science and engineering applications for this system.
Recent tests performed at Clemson University achieved a 25 percent improvement in Apache Hadoop TeraSort run times by replacing the Hadoop Distributed File System (HDFS) with an OrangeFS configuration using dedicated servers. Key components included an extension of the MapReduce “FileSystem” class and a Java Native Interface (JNI) shim to the OrangeFS client. No modifications to Hadoop itself were required, and existing MapReduce jobs run on OrangeFS unchanged.
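Hadoop selects a file system implementation by URI scheme, which is why an alternative back end like the OrangeFS client described above can be swapped in through configuration alone. A minimal sketch of what such a core-site.xml might look like — the `ofs` scheme, class name, and server address here are illustrative placeholders, not the actual OrangeFS package names:

```xml
<!-- core-site.xml sketch: maps a hypothetical ofs:// scheme to a
     custom FileSystem subclass via Hadoop's fs.<scheme>.impl convention -->
<configuration>
  <property>
    <name>fs.ofs.impl</name>
    <value>org.example.orangefs.hadoop.OrangeFileSystem</value>
  </property>
  <!-- make the new file system the default, so existing MapReduce
       jobs pick it up without any code changes -->
  <property>
    <name>fs.default.name</name>
    <value>ofs://metadata-server:3334/</value>
  </property>
</configuration>
```

Because job code resolves paths through the generic FileSystem API, pointing the default file system at the new scheme is all that is needed for unmodified jobs to use it.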
Higher education and research institutes around the globe are investing in HPC clusters, yet there is an all-too-common oversight during the product acquisition process…
For the second time in five years, Appro has been tapped to provide the National Nuclear Security Administration with HPC capacity clusters for the agency’s Advanced Simulation and Computing and stockpile stewardship programs. The Tri-Lab Linux Capacity Cluster 2 award is a two-year contract that will have the cluster-maker delivering HPC systems across three of the Department of Energy’s national labs. The deal is worth tens of millions of dollars to Appro and represents the biggest contract in the company’s 20-year history.
When it comes to the power-hungry systems of the pending exascale era, next-generation machines will need to employ brains, not just brawn, to tackle new challenges. This is a concept Bill Nitzberg of Altair’s PBS Works described to us this week as he highlighted the ways smarter management can address some of the greatest challenges ahead for billion-core machines.
Cubicle Clustered Computing concept aimed at HPC’s “missing middle.”
The path to lower TCO may lead to throw-away nodes.
Microsoft Research is simplifying the process of scaling applications written for small amounts of data on a local client machine up to data center scale, running them on HPC clusters or in a public cloud.
The case for cloud computing in biotech.