Tag: hpc clusters

How to Deploy and Validate Your Cluster

Jan 13, 2015

In the previous Cluster Lifecycle Management column, I discussed best practices for choosing the right vendor to build the cluster that meets your needs. Once your team has selected a vendor and finalized the purchase of your new system, the next crucial step is deploying and validating the HPC cluster. As part of the vendor…

NICS Tackles Big Science with Beacon

Jun 16, 2014

With support from the National Science Foundation and the University of Tennessee, Knoxville, the National Institute for Computational Sciences (NICS) is expanding access to Beacon, its newest HPC cluster, providing researchers with a powerful research tool. Efforts are underway to optimize a number of science and engineering applications for this system utilizing both Intel Xeon…

Accelerate Hadoop MapReduce Performance using Dedicated OrangeFS Servers

Sep 9, 2013

Recent tests performed at Clemson University achieved a 25 percent improvement in Apache Hadoop TeraSort run times by replacing the Hadoop Distributed File System (HDFS) with an OrangeFS configuration using dedicated servers. Key components included an extension of the MapReduce “FileSystem” class and a Java Native Interface (JNI) shim to the OrangeFS client. No modifications to Hadoop itself were required, and existing MapReduce jobs run unchanged on OrangeFS.
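The swap described above relies on Hadoop’s pluggable FileSystem abstraction: a FileSystem subclass can be registered in core-site.xml so that jobs resolve paths through it instead of HDFS. The fragment below is a minimal sketch of that registration; the `ofs://` scheme, the class name `org.orangefs.hadoop.OrangeFileSystem`, and the server host and port are illustrative assumptions, not the actual values shipped with the OrangeFS release.

```xml
<!-- core-site.xml sketch: route Hadoop I/O to an alternative FileSystem.
     The scheme, class name, and server address here are hypothetical. -->
<configuration>
  <property>
    <!-- Default filesystem URI; "ofs" is an assumed OrangeFS URI scheme -->
    <name>fs.defaultFS</name>
    <value>ofs://orangefs-server:3334/</value>
  </property>
  <property>
    <!-- Maps the "ofs" scheme to a FileSystem subclass (hypothetical name) -->
    <name>fs.ofs.impl</name>
    <value>org.orangefs.hadoop.OrangeFileSystem</value>
  </property>
</configuration>
```

Because the substitution happens at the FileSystem layer, jobs keep addressing the same paths through the same API, which is why existing MapReduce jobs need no changes to use OrangeFS.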

Simplifying Cluster Management…

Apr 22, 2013

Higher education and research institutes around the globe are investing in HPC clusters, yet there is an all-too-common oversight during the product acquisition process…

Appro Comes Up Multi-Million Dollar Winner in HPC Procurement for NNSA

Jun 8, 2011

For the second time in five years, Appro has been tapped to provide the National Nuclear Security Administration with HPC capacity clusters for the agency’s Advanced Simulation and Computing and stockpile stewardship programs. The Tri-Lab Linux Capacity Cluster 2 award is a two-year contract that will have the cluster-maker delivering HPC systems across three of the Department of Energy’s national labs. The deal is worth tens of millions of dollars to Appro and represents the biggest contract in the company’s 20-year history.

A New Generation of Smarter, Not Faster, Supercomputers

May 19, 2011

When it comes to the power-hungry systems of the pending era of exascale, next-generation systems will need to employ “brains” not just brawn to tackle new challenges. This is a concept Bill Nitzberg of Altair’s PBS Works described to us this week as he highlighted the ways smarter management can tackle some of the greatest challenges ahead for billion-core machines.

Intel Pilots HPC Workstation Clustering Program

Sep 15, 2010

Cubicle Clustered Computing concept aimed at HPC’s “missing middle.”

Disposable HPC

May 26, 2010

The path to lower TCO may lead to throw-away nodes.

Focus on Distributed Computing at Microsoft Research

May 7, 2010

Microsoft Research is simplifying the scaling of applications, originally written to work with a small amount of data on a local client machine, up to data center scale, running them on HPC clusters or in a public cloud.

Is the Future of High-Performance Computing for Life Sciences Cloudy?

Jan 28, 2010

The case for cloud computing in biotech.