Sectors » Oil & Gas
Successful oil and gas exploration today demands ever-faster upstream processing. To turn seismic data into actionable information sooner, organizations need to reduce survey processing run times from months to weeks while scaling to handle explosive data growth.
NVIDIA has introduced its first Kepler-generation GPU product for high performance computing and revealed some of the inner workings of the new architecture. The announcement came at the kickoff of the company’s GPU Technology Conference, held this week in San Jose, California.
The short-list of HPC cloud providers just got a little longer. Infrastructure-as-a-Service provider SoftLayer has added high-end NVIDIA Tesla GPUs to its line of dedicated servers.
Adaptive Computing recently released Moab 7.0, a new version of both its HPC Suite (basic and enterprise editions) and its Cloud Suite. While the workload management vendor has made important enhancements across its portfolio, what’s even more interesting is how these offerings fit into an increasingly cloud-based IT environment. In this in-depth interview, Adaptive Computing CEO Robert Clyde and Chad Harrington, the company’s vice president of marketing, explain how Adaptive has leveraged its HPC roots to strengthen its cloud offerings.
Leaders from the oil and gas industry and the high performance computing and information technology industry, as well as academics and representatives from national laboratories, met at Rice University in Houston, Texas, March 1 for the 5th annual Rice Oil and Gas HPC Workshop.
Supercomputer maker Cray is helping oil and gas companies benefit from the most advanced reservoir modeling approach yet: Permanent Reservoir Monitoring, or PRM, a technique that requires innovative data warehousing technology and data analysis methods.
Scientists at GE Global Research are using the multi-petaflop Titan supercomputer at Oak Ridge National Laboratory to study the way that ice forms as water droplets come in contact with cold surfaces. They are working to develop “icephobic” materials that prevent ice formation and accumulation.
GE’s Steve Pavlosky explains how high-performance computing technology helps users in the power industry, where changes happen very fast. Whether it’s changes in demand or a power failure, the control system has to be able to react very quickly.
Anybody who drives one of Ford’s recent vehicles spends a little less money on gasoline thanks to HPC work the carmaker undertook with Oak Ridge National Laboratory, where more than one million processor hours were spent getting a handle on the complex fluid dynamics governing airflow under the hood.
With that in mind, Datapipe hopes to establish itself as a green-savvy HPC cloud provider with its recently announced Stratosphere platform. Datapipe markets Stratosphere as a green HPC cloud service, partnering with Verne Global and its Icelandic datacenter, which is known for its green computing credentials.
Off the Wire
SANTA CLARA, Calif., Aug. 19 — DataDirect Networks (DDN) today announced impressive revenue traction for the first six months of 2014, demonstrating a strong trajectory of profitable growth for the company heading into the second half of the year. The company added more than 20 net new petabyte-class customers including one of the world’s largest banks, a leader Read more…
Aug. 5 — As global demand for hydrocarbon-based fuel continues to grow, particularly in Asia, and as Western economies recover, there is more pressure on oil companies to invest in new reserves, sweat the assets of mature fields and even revisit what were once considered to be exhausted wells. Exploration techniques and technology advancement has Read more…
DALLAS, Tex., July 23 — IBM today announced that it is making high performance computing (HPC), as part of technical computing, more accessible through the cloud for clients grappling with big data and other computationally intensive activities. A new option from SoftLayer, an IBM Company, will provide industry standard InfiniBand networking technology to connect SoftLayer Read more…
July 3 — Eni has put into operation its second major HPC system. The new supercomputer takes an innovative approach based on the use of accelerators to implement a so-called “hybrid cluster architecture.” It comprises 1500 IBM iDataPlex dx360 M4 nodes, built on more than 30,000 processing cores, each equipped with two NVIDIA Tesla GPU Read more…
MARKHAM, Ontario, June 19 — Univa, the Data Center Automation Company, announced today that Univa Grid Engine has been certified and integrated with Schlumberger’s ECLIPSE industry-reference reservoir simulator software version 2014.1. This integration will provide users with an easy-to-use platform to use Univa’s workload manager in conjunction with the ECLIPSE reservoir simulator. Univa’s core product, Univa Read more…