November 15, 2012
SAN DIEGO, Nov. 15 – StackIQ today announced the immediate availability of StackIQ Enterprise HPC, the newest addition to its comprehensive cluster management product line. Powered by Rocks+, and building on StackIQ's successful Enterprise Data offering, the new product provides the very latest in HPC cluster management software.
StackIQ has updated its Rocks+ HPC product to embrace the many enterprise-grade capabilities already available in its Enterprise Data solution. Customers familiar with the company's Rocks+ HPC product will be pleased to know that the new software is a direct upgrade of the previous release – with several new and enhanced capabilities.
StackIQ Enterprise HPC is based on the latest enterprise-grade Linux – Red Hat Enterprise Linux and CentOS 6.3 – and features a new, easier-to-use graphical user interface, while retaining the powerful command-line interface Rocks+ power users know and love.
In addition to the new GUI, nearly every module has been updated, from the HPC Roll (which contains a preconfigured OpenMPI environment), to the Intel, Dell, Univa Grid Engine (UGE), Moab, Mellanox, Open Grid Scheduler / Grid Engine (GE), and CUDA Rolls.
Administrators will find it easier to track cluster health using new advanced cluster diagnostics tools, while developers will find it easier than ever to develop and debug Rolls using features like the filtered "profiles" tab in the GUI.
StackIQ also added advanced firewall configuration to enhance the security of HPC clusters, making them more robust and easier to integrate into today's enterprise data center environments.
"We are thrilled to bring this major update to our HPC customers in time for the annual SC12 conference," said Tim McIntire, President and co-founder of StackIQ. "By bringing the enterprise features of our Enterprise Data product to the HPC products, we've improved the HPC product, while making it easier for those building hybrid HPC/Hadoop clusters to get their work done."
StackIQ (formerly "Clustercorp") is a leading provider of software that automates the deployment and management of Big Infrastructure. Based on open-source Rocks cluster management software, StackIQ's Rocks+ product simplifies the installation and management of the hardware and software that provides the infrastructure for large-scale environments with hundreds or thousands of servers supporting Big Data, Analytics, or High Performance Computing. StackIQ is located in La Jolla, California.