October 03, 2012
Oct. 3 — OCF plc, a data processing, data management and data storage provider, has significantly expanded its HPC On-Demand service, enCORE. The service can now deliver up to 8,000 cores of processing power to a wide range of business sectors in the UK and beyond.
The expansion of the service follows a newly signed agreement between OCF and the Science and Technology Facilities Council’s (STFC) Hartree Centre. The Centre, a research collaboratory operated in association with IBM, was launched in 2012 following a £37.5 million investment by the UK government.
The enCORE service will use additional processing power from the Hartree Centre’s “Blue Wonder”, a new IBM System x iDataPlex cluster comprising 8,192 Intel Xeon E5-2670 processor cores. Tests show that the Blue Wonder iDataPlex cluster can achieve 206.3 teraFLOPS, and its 48 TB shared memory capacity makes it the largest shared memory cluster in the UK. Blue Wonder was installed and configured by OCF in partnership with IBM.
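As a rough sanity check on the headline figure, the theoretical peak of a Xeon E5-2670 cluster can be estimated from core count, clock speed and FLOPs per cycle. The processor parameters below (2.6 GHz base clock, up to 3.3 GHz turbo, 8 double-precision FLOPs per cycle via AVX) are drawn from Intel's published specifications, not from the article:

```python
# Rough theoretical-peak estimate for an 8,192-core Xeon E5-2670 cluster.
# Clock and FLOPs-per-cycle figures are assumed from Intel's public
# specifications for the E5-2670; they are not stated in the article.
cores = 8192
base_clock_ghz = 2.6    # E5-2670 base clock
turbo_clock_ghz = 3.3   # E5-2670 maximum turbo clock
flops_per_cycle = 8     # AVX: 4-wide double-precision add + multiply

peak_tflops = cores * base_clock_ghz * flops_per_cycle / 1000
turbo_tflops = cores * turbo_clock_ghz * flops_per_cycle / 1000
print(f"Base-clock peak:  {peak_tflops:.1f} teraFLOPS")
print(f"Turbo-clock peak: {turbo_tflops:.1f} teraFLOPS")
```

At base clock this gives roughly 170 teraFLOPS and at full turbo roughly 216 teraFLOPS, so the article's 206.3 teraFLOPS figure plausibly reflects a run benefiting from turbo frequencies.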
First launched in November 2010, the enCORE service was the first in the UK to let commercial organisations harness spare processing power from academic and research-based high performance server clusters. The service has since been used successfully by firms such as Engys, Actiflow, CVIS, BHR and Renuda to meet ongoing and temporary “burst” requirements for additional processing power.
“enCORE has enabled our clients to re-evaluate their HPC computing strategy. Ease of use, exceptional technical support, scalability and price/performance have been key factors in them deciding to use an off-premise HPC cluster. We’ve seen this service directly lead to firms winning contracts they could not otherwise have delivered,” says Jerry Dixon, HPC on Demand business development manager, OCF plc. “Our expansion of the enCORE service will now enable larger businesses with significant and complex HPC workloads to utilise this flexible facility, and to deliver tangible business benefits.”
Dr David Kelsall, senior consultant at fluid engineering consultancy BHR Group, comments: “OCF’s enCORE service has enabled us to cope with peaks in demand for capacity when we are undertaking simultaneous consultancy and research projects. It is a very easy service to use, with an uncomplicated and simple structure that doesn’t require any previous HPC knowledge to operate.”
He adds: “The enCORE service allows us to tackle much larger calculations, up to four times greater than we can manage in-house. As a result, we aren’t constrained from taking on more projects than our in-house computing resources allow. It also helps to free up the time of our engineers, who can work on other projects with the extra capacity provided by OCF and can expect a quicker turnaround of analysis results with the enCORE service. We derive a lot of comfort that OCF is providing a UK-based ‘cloud service’, so we know exactly where our data is processed. The enCORE service is cost-effective as we can tap into it when required, without the need to tie up a lot of capital for the occasional use of HPC resources.”
Zvi Tannenbaum, owner of independent software vendor Advanced Cluster Systems, says, “OCF’s enCORE HPC On-Demand service has acted as a platform enabling us to test and refine our SET™ (Supercomputing Engine Technology™). SET, an MPI (Message Passing Interface)-based library, enables mainstream software writers to quickly and cheaply apply MPI parallelization to their software, turning it into a high performance version without code changes and making it suitable for running efficiently on multicore machines and clusters such as OCF’s. SET is designed to take the complexity out of MPI parallel programming, making it more readily available to SMEs or other organisations with no in-depth knowledge of MPI.
MPI is the de facto standard for communication among processes in supercomputing centres, and is fully supported by OCF. Now that SET is fully tested and operational, OCF is the first HPC On-Demand service provider in the UK to offer the SET run-time environment. OCF has demonstrated exceptional technical support and responsiveness during the installation process and continues to deliver professional and timely advice and support.”
The enCORE service
· As part of the HPC on Demand service, OCF is responsible for pre-sales qualification with customers to establish required volumes of processing power, and runs benchmarks to demonstrate that enCORE can run specific HPC applications efficiently
· The service is scalable and suitable for SMEs through to major corporate and academic/research users
· By working with OCF, customers receive an SLA-driven service, commercial terms and commercial account management, strong technical resource, and first class technical support and assistance for maximum efficiency
· OCF also holds a number of pre-installed and optimised application codes ready for use with the service; it can work with Independent Software Vendors to arrange application licensing for the term of a contract with customers, or it can potentially access the end user’s licences directly, thus ensuring adherence to the ISV’s licensing terms
· enCORE uses the latest Intel processors and NVIDIA GPU hardware for maximum performance
· Data transfer between the customer and enCORE is handled by enCORE’s simple secure web interface or, in the case of extremely large files, by secure shuttle service
· Contracts with OCF are flexible, and use of enCORE involves a small annual subscription plus a cost per core hour used; interested parties should contact OCF for pricing.
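The article does not disclose prices, but the stated model (a small annual subscription plus a per-core-hour usage charge) is straightforward to sketch. All figures below are invented placeholders, not OCF's actual rates:

```python
# Hypothetical cost model for the subscription-plus-usage pricing
# described above. Both prices are invented placeholders, not OCF's.
annual_subscription = 1000.0  # hypothetical yearly fee, GBP
price_per_core_hour = 0.05    # hypothetical rate, GBP per core-hour

def job_cost(cores, wall_clock_hours):
    """Core-hours consumed = cores reserved x wall-clock hours run."""
    return cores * wall_clock_hours * price_per_core_hour

# Example: a 512-core CFD "burst" run lasting 24 hours
usage = job_cost(512, 24)
total_first_year = annual_subscription + usage
print(f"Usage: GBP {usage:.2f}, first-year total: GBP {total_first_year:.2f}")
```

The pay-per-core-hour term is what makes the "burst" use cases quoted earlier economical: capacity is paid for only while a job actually runs, rather than as idle capital equipment.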
Source: OCF plc