November 13, 2012
PROVO, Utah, Nov. 13 – Adaptive Computing, the largest provider of private cloud management and High-Performance Computing (HPC) workload management software, today announced that the COSMOS Supercomputer Consortium, founded by Stephen Hawking and part of the Science and Technology Facilities Council DiRAC High Performance Computing facility, has chosen Moab HPC Suite 7.2 to manage its groundbreaking scientific computing workloads. Moab will coordinate jobs and allocate computing resources for research in cosmology and astrophysics, including simulations of the origins of the Universe and science exploitation of satellite experiments. This research will utilize a new SGI UV 2000 supercomputer with 1,856 Intel Xeon E5 cores and 1,891 Intel Xeon Phi cores. Adaptive Computing has worked closely with Intel and SGI to enable Moab to manage and schedule this cutting-edge system.
DiRAC, the Distributed Research utilizing Advanced Computing facility, is the leading provider of high-performance computing in the UK. With its newly upgraded systems, the COSMOS@DiRAC supercomputer will provide supercomputing services not only to DiRAC’s consortium of educational institutions, but also to other organizations throughout the UK. Serving this more diverse customer base makes effective scheduling and accounting especially critical, and with a total of more than 4,500 cores once the new system is fully operational, scheduling and management are top priorities. Moab will give the consortium the ability to schedule precisely which cores are used for which jobs, in order to meet SLAs for flagship projects.
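To illustrate the kind of policy this refers to, here is a minimal, hypothetical moab.cfg sketch (the reservation name, group name, and core count are invented for illustration and are not taken from COSMOS’s actual configuration): a standing reservation pins a block of cores to a flagship project, and a quality-of-service class raises that project’s scheduling priority toward its SLA.

    # Hypothetical moab.cfg excerpt -- names and sizes are illustrative only
    # Standing reservation dedicating 512 cores to the "flagship" project
    SRCFG[flagship]  TASKCOUNT=512
    SRCFG[flagship]  PERIOD=INFINITY
    SRCFG[flagship]  GROUPLIST=cosmology
    # QoS class giving flagship jobs elevated priority toward their SLA
    QOSCFG[sla]      PRIORITY=1000
    QOSCFG[sla]      QFLAGS=PREEMPTOR

A user could then target the reserved cores at submission time (for example, msub -l advres=flagship).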
“We had scientists using custom-built tools to manage jobs, but as we expand and support more sophisticated workloads and detailed accounting, this has become too complex and time-consuming a task. Moab HPC Suite’s ease of use has streamlined our scheduling, allowing us to accommodate our expanding user group. Moab enables us to maintain flexibility, to enjoy more rigorous accounting abilities with Moab Accounting Manager, and to fine-tune policies easily in real time,” said Andrei Kaliazin, COSMOS System Manager, University of Cambridge. “Research in fundamental cosmology is fast-moving and internationally competitive. We have to adapt our flexible operating model rapidly, and we need a company breaking new ground to support the very latest HPC technologies; that is why we selected Adaptive Computing for our workload management software,” added Professor Paul Shellard, COSMOS Director.
“With the introduction of the Intel Xeon Phi technology, we’re seeing a new generation of supercomputers that are faster and more agile than ever,” noted Robert Clyde, CEO of Adaptive Computing. “Adaptive is proud to offer Intel Xeon Phi capability in its latest version of Moab HPC Suite, to allow today’s HPC centers to take full advantage of Intel Xeon Phi cores without the need for extensive reprogramming of their systems.”
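The “without extensive reprogramming” point refers to offload-style programming for the Xeon Phi, in which existing host code is annotated rather than rewritten. The following is a minimal sketch assuming the Intel C compiler’s offload pragma; the program itself is invented for illustration and does not come from the release.

    /* Minimal illustration of offloading a loop to a Xeon Phi coprocessor.
       Assumes the Intel C compiler; this example is not from the release. */
    #include <stdio.h>
    #define N 1024

    int main(void)
    {
        static float a[N], b[N], c[N];
        for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

        /* The pragma ships the loop body and the listed arrays to the
           coprocessor; on a machine without a Phi, the loop simply runs
           on the host, so the program needs no restructuring. */
        #pragma offload target(mic) in(a, b) out(c)
        for (int i = 0; i < N; i++)
            c[i] = a[i] + b[i];

        printf("c[%d] = %.1f\n", N - 1, c[N - 1]);
        return 0;
    }

The appeal of this model for existing HPC centers is that the same source compiles and runs with or without the coprocessor, so codes can be migrated incrementally.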
The COSMOS@DiRAC upgrade is made possible through funding from the Science and Technology Facilities Council, a public body of the Department for Business, Innovation and Skills. The SGI UV 2000 system will be the first of its kind in the world to operate with integrated Intel Xeon Phi coprocessors.
About Adaptive Computing
Adaptive Computing is the largest provider of High-Performance Computing (HPC) workload management software and manages the world’s largest cloud computing environment with Moab, a self-optimizing, dynamic cloud management solution and HPC workload management system. Moab, a patented multidimensional intelligence engine, delivers policy-based governance, allowing customers to consolidate and virtualize resources, allocate and manage applications, optimize service levels, and reduce operational costs. Adaptive Computing offers a portfolio of Moab cloud management and Moab HPC workload management products and services that accelerate, automate, and self-optimize IT workloads, resources, and services in large, complex, heterogeneous computing environments such as HPC centers, data centers, and clouds.
Source: Adaptive Computing