August 12, 2010
Although Oracle hasn't made any announcements about its commitment to the high performance computing market, it has become pretty clear to me that the company is no longer pursuing the HPC business it inherited from Sun Microsystems. Prior to the merger in January 2010, Sun had an array of HPC offerings (servers, storage and middleware) and was surging back into the market with its Constellation-class blade server product. But from all appearances, the HPC product set and talent it acquired from Sun will be absorbed Borg-like into the Oracle collective.
Even that, perhaps, is a bit too generous. Oracle is apparently shedding HPC staff in a very un-Borg-like manner. A recent article in The Register reported that Oracle had let go much of the HPC sales team, with the remainder tasked to sell the company's Exadata data warehousing appliances. I have spoken with two credible sources who told me HPC engineering talent is also being axed. Although this has reportedly been going on for some time, last week's RIF was said to cut particularly deep.
In truth, Oracle never really had an HPC business to lose. Although the Sun IP and product set were acquired in the merger deal, Sun itself hadn't closed any new deals for several months prior to the Oracle buyout. All of the HPC systems deployed in late 2009 were the result of contracts that had been signed much earlier. With the pipeline now flushed, it's highly unlikely we'll see any more HPC system deployments from Oracle.
Sun's flagship HPC platform, the 6048 Blade Constellation system, is still listed on Oracle's website, but it looks like it only comes with blades using last year's x86 silicon. For example, the only Intel blade available for the 6048 chassis is the X6275, which comes with Nehalem EP (Xeon 5500 series) CPUs. As of today, there is no Westmere EP (Intel Xeon 5600) hardware available for this enclosure. The X6270 M2 server, which does have the Westmere chips, is only being offered for Oracle's enterprise-class 6000 system. The sole AMD Opteron blade available with the 6048 is the quad-socket, six-core X6440 module. No Magny-Cours (6100 series 8- and 12-core Opterons) server blades are even listed on Oracle's site.
Essentially this means that none of Sun's Constellation customers have an upgrade path for the machines they bought within the last three years. That includes some big-name supercomputers: TACC's "Ranger" and Sandia National Labs' "Red Sky" in the US, Jülich's "JuRoPA2" in Germany, and KISTI's "Tachyon" in Korea. A number of smaller Constellation systems are in the same boat.
It looks like Lustre-based HPC storage is also on the way out, if not already gone. Although the storage hardware products -- StorageTek arrays, J4000 series expansion arrays and the Sun Fire X4540 storage server -- are still listed, it appears that the Lustre file system is no longer being offered on these platforms. Since many of the Constellation supercomputers sold over the past couple years came bundled with Lustre storage, those customers would be hard pressed to expand their existing disk capacity without switching file systems.
Even the term HPC is being stripped out of existing products in some cases. For example, Sun HPC ClusterTools has been renamed the Oracle Message Passing Toolkit. Despite the name change, the toolkit is still downloadable, but I would say its future in HPC is uncertain.
If I still haven't convinced you that Oracle is cutting HPC from its lineup, consider that the company has no exhibit at the Supercomputing Conference (SC10) in November, and as far as I can tell, is offering no presentations. Given that this is the largest HPC exhibition of the year, this should be a clear signal that Oracle is going to be leaving the teraflopping and petaflopping to others.
Posted by Michael Feldman - August 12, 2010 @ 5:48 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.