August 12, 2010
Although Oracle hasn't made any announcements about its commitment to the high performance computing market, it has become pretty clear to me that the company is no longer pursuing the HPC business it inherited from Sun Microsystems. Prior to the merger in January 2010, Sun had an array of HPC offerings (servers, storage and middleware) and was surging back into the market with its Constellation-class blade server product. But from all appearances, the HPC product set and talent it acquired from Sun will be absorbed Borg-like into the Oracle collective.
Even that, perhaps, is a bit too generous. Oracle is apparently shedding HPC staff, in a very un-Borg-like manner. A recent article in The Register reported that Oracle had let go much of the HPC sales team, with the remainder tasked to sell the company's Exadata data warehousing appliances. I have spoken with two credible sources who told me HPC engineering talent is also being axed. Although such cuts have been rumored for some time, last week's reduction in force was said to cut particularly deep.
In truth, Oracle never really had an HPC business to lose. Although the Sun IP and product set were acquired in the merger deal, Sun itself hadn't closed any new HPC deals in the several months prior to the Oracle buyout. All of the HPC systems deployed in late 2009 were the result of contracts that had been signed much earlier. With the pipeline now flushed, it's highly unlikely we'll see any more HPC system deployments from Oracle.
Sun's flagship HPC platform, the 6048 Blade Constellation system, is still listed on Oracle's website, but it looks like it only comes with blades using last year's x86 silicon. For example, the only Intel blade available for the 6048 chassis is the X6275, which comes with Nehalem EP (Xeon 5500 series) CPUs. As of today, there is no Westmere EP (Intel Xeon 5600) hardware available for this enclosure. The X6270 M2 server, which does have the Westmere chips, is only being offered for Oracle's enterprise-class 6000 system. The sole AMD Opteron blade available with the 6048 is the quad-socket, six-core X6440 module. No Magny-Cours (6100 series 8- and 12-core Opterons) server blades are even listed on Oracle's site.
Essentially, this means that none of Sun's Constellation customers have an upgrade path for the machines they bought within the last three years. This includes some big-name supercomputers like TACC's "Ranger" and Sandia National Labs' "Red Sky" in the US, Jülich's "JuRoPA2" in Germany, and KISTI's "Tachyon" in Korea. A number of smaller Constellation systems are in the same boat.
It looks like Lustre-based HPC storage is also on the way out, if not already gone. Although the storage hardware products -- StorageTek arrays, J4000 series expansion arrays and the Sun Fire X4540 storage server -- are still listed, it appears that the Lustre file system is no longer being offered on these platforms. Since many of the Constellation supercomputers sold over the past couple years came bundled with Lustre storage, those customers would be hard pressed to expand their existing disk capacity without switching file systems.
Even the term HPC is being stripped out of existing products in some cases. For example, Sun HPC ClusterTools has been renamed to Oracle Message Passing Toolkit. Despite the name change, the toolkit is still downloadable, but I would say its future is uncertain.
If I still haven't convinced you that Oracle is cutting HPC from its lineup, consider that the company has no exhibit at the Supercomputing Conference (SC10) in November, and as far as I can tell, is offering no presentations. Given that this is the largest HPC exhibition of the year, this should be a clear signal that Oracle is going to be leaving the teraflopping and petaflopping to others.
Posted by Michael Feldman - August 12, 2010 @ 5:48 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.