September 20, 2010
Moab unified automation intelligence technology enables orchestration of Windows HPC Server and other leading operating systems in a single HPC or cloud environment
PROVO, Utah, Sept. 20 -- Adaptive Computing, the company behind the Moab unified automation intelligence technology, today announced the certification of Moab Adaptive HPC Suite with Microsoft Windows HPC Server 2008 R2, released today, and the company's participation in the Windows HPC Server Launch Partner program. This announcement extends the collaboration between the two companies, which have worked together for years co-delivering large HPC and cloud installations, including deployments at South Africa's Centre for High Performance Computing, Rocky Mountain Supercomputing Centers, Baker Hughes and other customers in the oil and gas and manufacturing industries.
Adaptive Computing's solutions, powered by Moab, deliver intelligent governance that empowers customers to optimally consolidate and virtualize resources, allocate and manage applications, improve service levels, and reduce operational costs. Moab Adaptive HPC Suite enables multiple operating systems and applications to run on a single computational system, without increasing operational or system-administration costs, through dynamic, policy-driven scheduling and optional automatic system reconfiguration. This capability, known as "dual boot," has traditionally been complex and labor-intensive, but Moab intelligently automates the process, minimizing the operational and management costs associated with dedicated systems administrators.
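To illustrate the general idea behind policy-driven dual-boot scheduling, the Python sketch below shows a minimal decision loop: idle nodes are re-provisioned to whichever operating system has the deepest backlog of queued work. This is a hypothetical illustration of the technique, not Moab's actual configuration or API; all names here (Node, Job, plan_reprovisioning, min_backlog) are invented for the example.

from collections import Counter
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    os: str    # OS currently booted, e.g. "windows" or "linux" (illustrative)
    idle: bool # True if no job is running on the node

@dataclass
class Job:
    job_id: str
    required_os: str  # OS the job's application needs

def plan_reprovisioning(nodes, queue, min_backlog=2):
    """Return {node_name: target_os} for idle nodes whose OS should change.

    Policy (hypothetical): if the backlog of queued jobs for some OS
    exceeds the number of idle nodes already booted into that OS by at
    least `min_backlog`, flip surplus idle nodes from the other OS to it.
    """
    backlog = Counter(job.required_os for job in queue)
    idle_by_os = Counter(n.os for n in nodes if n.idle)
    plan = {}
    for target_os, demand in backlog.most_common():
        shortfall = demand - idle_by_os[target_os]
        if shortfall < min_backlog:
            continue
        # Take idle nodes currently booted into another OS.
        donors = [n for n in nodes
                  if n.idle and n.os != target_os and n.name not in plan]
        for node in donors[:shortfall]:
            plan[node.name] = target_os
    return plan

# Example: two idle Linux nodes and three queued Windows jobs, so the
# policy re-provisions both idle nodes to Windows.
nodes = [Node("n1", "linux", True), Node("n2", "linux", True),
         Node("n3", "windows", False)]
queue = [Job("j1", "windows"), Job("j2", "windows"), Job("j3", "windows")]
print(plan_reprovisioning(nodes, queue))  # {'n1': 'windows', 'n2': 'windows'}

In practice the re-provisioning step itself (rebooting or reimaging a node into the target OS) is the labor-intensive part that the press release says Moab automates; the sketch only covers the scheduling decision that triggers it.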
"Microsoft's relationship with Adaptive Computing in the HPC and cloud markets is valuable; Microsoft's popular operating system and servers open the world of complex computing to a much wider audience, further expanding our market opportunities. Organizations that deploy Moab and Windows HPC Server 2008 R2 will gain additional workload throughput from existing clusters, resulting in increased ROI, improved utilization and greater application performance," said Scott Hurst, director of alliances and business development at Adaptive Computing.
Windows HPC Server 2008 R2 makes it easier and more affordable to put the power of supercomputing within reach of more analysts, engineers, and scientists, giving them the computational resources they need to make better decisions, fuel product innovation, speed research and development and accelerate time to market.
"By using advanced management solutions like Moab Adaptive HPC Suite together with Windows HPC Server 2008 R2, customers who have traditionally used Linux in their high-performance computing clusters have been able to increase their utilization and extend the reach of their HPC services to more users," said Bill Hamilton, director, Technical Computing, Microsoft Corp. "It's allowed them to create dynamic HPC environments that span multiple operating systems, increase access to more applications, and increase the breadth of HPC users for their clusters."
Moab Adaptive HPC Suite is available through Microsoft's Certificate Transfer Agreement (CTA). Packaged with Windows HPC Server 2008 R2 and support for multiple Linux operating systems, it offers customers an attractive, low-risk, low-cost option that covers both current and future end-user support needs.
About Adaptive Computing
Adaptive Computing provides intelligent automation software for HPC, datacenter and cloud environments. The company's infrastructure intelligence solutions, powered by Moab, deliver policy-based governance, allowing customers to consolidate and virtualize resources, allocate and manage applications, optimize service levels and reduce operational costs. Adaptive Computing products manage the world's largest computing installations and are the preferred intelligent automation solutions for the leading global HPC and datacenter vendors. For more information, call 801-717-3700 or visit www.adaptivecomputing.com.
Source: Adaptive Computing