November 21, 2012
Last week CyrusOne, Dell, and R Systems launched an HPC cloud solution designed to meet the needs of oil and gas companies. The new service is being housed in CyrusOne's "Sky for the Cloud" platform at its West Houston-based colocation facility.
The oil and gas sector has long relied on HPC to analyze geological data, supporting operational decisions aimed at reducing time to market and improving profitability.
In a November 16 press release, CyrusOne Chief Technology Officer Kevin Timmons noted that "Sky for the Cloud creates an ecosystem to efficiently facilitate the generation, analysis, and sharing of all the geophysical data locally and statewide."
As with other cloud systems, the promised benefits are reduced capital and operational expenditures and the ability to scale easily during periods of peak demand. The cloud model also frees up resources that can be redirected to a company's core business.
"We see the combination of HPC and cloud technologies as an incredibly powerful solution with tremendous customer benefit," said Nnamdi Orakwue, vice president of Dell Cloud. "Customers who need immediate, high-performing computing solutions for shorter time frames can quickly realize revenue opportunities. Dell continues to invest in cloud enabling solutions to help our customers achieve faster business results."
Dell and R Systems are operating under a "project partner" alliance to offer the cloud service under both short- and long-term contracts, spanning anywhere from one day to one year. They say the CyrusOne datacenter will help them achieve a high degree of performance, reliability, and availability. Sky for the Cloud was designed for optimum power usage effectiveness (PUE), and the facility's 2N architecture is said to provide the highest degree of power redundancy.
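For readers unfamiliar with the metric, PUE is simply the ratio of total facility power to the power consumed by the IT equipment itself; a value of 1.0 would mean every watt goes to computing. The sketch below illustrates the calculation with hypothetical figures — the press release does not disclose actual power numbers for the facility.

```python
def pue(total_facility_power_kw: float, it_equipment_power_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT power.

    Lower is better; the overhead above 1.0 is power spent on cooling,
    power distribution losses, lighting, and other non-IT loads.
    """
    return total_facility_power_kw / it_equipment_power_kw

# Hypothetical example: a facility drawing 1,500 kW in total,
# of which 1,200 kW reaches the IT equipment.
print(pue(1500, 1200))  # → 1.25
```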
Although the HPC cloud will initially focus on the needs of the oil and gas industry, the partners plan to support complex workloads from other industries as well, such as finance, healthcare, life sciences, manufacturing and media.