March 19, 2009
In a week when Cisco, IBM, Sun Microsystems, Intel and AMD were all featured prominently in the news cycle, I got the feeling that the whole industry might be on the cusp of a realignment.
Reinventing the datacenter, Cisco-style
On Monday, Cisco formally announced its intentions to become a major player in the systems and services market. In a webinar telecast that delivered more industry buzzwords per minute than I've heard in years, Cisco managed to bring all the hot trends in IT -- virtualization, cloud computing, unified fabrics, multicore, open source, and so on -- under its new Unified Computing System (UCS) architecture. In a nutshell, UCS will be implemented as Cisco-branded blade servers with embedded management software that virtualizes compute and storage systems. The company is promoting UCS as a way for datacenters to consolidate infrastructure under a single hardware/software platform -- a sort of proto-cloud, if you will.
The IT press was generally skeptical of the marketing blitz, especially since no actual products have hit the streets yet. But Cisco is too big and successful a company not to be taken seriously. IBM and HP, who also are developing their own versions of next-generation datacenter products, are certainly trying to figure out how to react. Up until now Cisco was a key partner that companies like IBM and HP relied upon for a range of networking gear. With the UCS play, Cisco has made those relationships much more complex.
As for why Cisco wants to expand beyond its network equipment-making roots, just follow the money. As companies like Arista, Blade Network Technologies and others entered the network biz over the last several years, willing to sell product at lower profit margins than Cisco (historically around 65 percent), Cisco probably determined its business model wouldn't sustain itself indefinitely. The logical response was to diversify or move higher up the food chain. With UCS, it can do both.
So does UCS have a role for HPC applications? In this week's issue, our feature article takes a look at how the upcoming Cisco blade servers might play in the high performance computing arena.
Can IBM swallow Sun without getting burned?
With even bigger ramifications for the HPC crowd, on Wednesday the Wall Street Journal broke a story that IBM is looking to acquire Sun Microsystems for around $6.5 billion. IBM and Sun aren't commenting, but reporters and analysts are already writing like it's a done deal. If the acquisition goes through, a lot of products and technologies are going to get remixed under the IBM umbrella.
Adding Sun's Java, Solaris, and MySQL technologies to IBM's stable might provide some interesting new synergies, but I'll leave those speculations to more knowledgeable writers. My concern is mainly for products and technologies used by the HPC crowd, and here there's potential for a blood bath.
It's the server realm, especially at the high end, that might provide the most difficulties. For two companies that already have a full range of HPC servers and systems, it's hard to imagine anything but consolidation. In particular, the x86-based server lines -- IBM's System X and Sun's Sun Fire x64 -- would be destined for a collision. IBM could end up taking the best parts of both architectures and developing a new x86 line, but in that scenario, the company would be obliged to ease the pain for current users by offering a reasonable migration path.
IBM might be tempted to keep Sun's SPARC/UltraSPARC technology around for a while, inasmuch as Big Blue currently doesn't have a CMT solution that can handle 64 threads per socket. But it's hard to imagine IBM continuing to invest in the technology for the long term, given that Intel and AMD will eventually offer similar capabilities at a fraction of the cost. On the other hand, IBM could sell off the SPARC IP to someone else. The logical buyer would be Fujitsu, which builds its own line of SPARC-based systems, some of which are supercomputers.
Also of concern to HPC'ers is the fact that Lustre, the open source parallel file system acquired by Sun in 2007, and GPFS, IBM's proprietary parallel file system, are not on speaking terms. In this case, IBM would do itself a favor and keep both file systems, since killing off Lustre would alienate its HPC customers who buy IBM gear and then use Lustre as the file system. Another alternative would be to make GPFS open source and, if feasible, look toward incorporating Lustre as a subset. Better yet, IBM could resell Lustre to some other OEM or storage vendor who has more interest in its success.
And finally, there's the ultimate problem: Jonathan Schwartz would need to get a crew cut. That might be a deal breaker.
Intel cries patent foul over AMD's foundry spin-off
With the big IT firms fighting over turf in the datacenter, it's comforting that some things never change. Yes, the Hatfields and the McCoys of Silicon Valley were at it again this week, with Intel accusing AMD of breaching their 2001 patent cross-licensing agreement.
The complaint centers on AMD's relationship with Globalfoundries, the recent spin-off of AMD's chip manufacturing operations. Intel is claiming that since Globalfoundries does not qualify as a subsidiary of AMD, the x86 cross-licensing agreement cannot be extended to it. Intel is also saying that the structure of the deal between AMD and Advanced Technology Investment Company (the majority owner of Globalfoundries) breaches a confidential portion of that agreement.
Of course, AMD is claiming this is all just the usual sound and fury from Intel, and that its larger rival is trying to divert attention from its own monopolistic behavior. AMD's position is that it has contributed more than 50 percent of the assets of Globalfoundries, so the new company does qualify as a subsidiary. In any case, AMD believes the patent agreement doesn't really apply here anyway. AMD told Ars Technica's Joel Hruska that the patent agreement is not necessary for the company or its foundry to design and manufacture its chips. Writes Hruska:
AMD made a strong distinction between a technology license agreement, in which one company furnishes another with a vital ingredient or special sauce necessary to the function of a product, and a patent license agreement, in which two companies agree not to sue each other for intellectual property (IP) infringement. AMD's point here is rather simple: AMD neither needs nor receives any technological help from Intel when designing x86 processors. In the event that the cross-license agreement between the two companies were to be canceled, there's nothing stopping AMD from continuing to build its current products or designing future ones.
The company went on to say that since it holds a number of patents itself, especially in relation to 64-bit x86, integrated memory controllers, and x86 multicore, Intel would be asking for trouble if it initiated a patent fight. AMD might not gain much legal ground from reminding everyone that Intel's current product lineup has benefitted mightily from AMD architectural innovation, but it sure makes Intel look like a corporate bully.
And let's face it, Intel didn't become a great chip company because of the x86 architecture, but in spite of it. If not for an accident of history, we'd all be computing on PowerPC RISC processors, and Motorola might be where Intel is today. But the industry went for VHS instead of Betamax, and the rest, as they say, is history.
Posted by Michael Feldman - March 19, 2009 @ 5:07 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.