January 26, 2007
The big IT news this week is the blossoming romance between Sun and Intel. This new alliance will ripple through many parts of the industry, including HPC. If the Valentine on Jonathan Schwartz's blog is any indication of things to come, this could be a real sweetheart deal for both organizations. But once the swooning is over, Sun and Intel will have to deliver in the marketplace.
There should be lots of opportunities to do so. As reported in our feature article this week, Sun and Intel have agreed to work together to build Xeon-based servers and enhance support of Solaris on Xeon platforms. Intel gets to sell chips into Sun boxes and expand its opportunities with the Solaris/Xeon platform. Sun gets to tap into Intel accounts, achieve an equal footing with the other chip-agnostic OEMs selling both Intel and AMD hardware, and also expand its opportunities with the Solaris/Xeon platform.
Perhaps one of the more interesting results of the alliance will be an eight-socket Xeon server. Although the first Sun Xeon-based servers planned for 2007 will have one, two and four sockets, the intention is to eventually deliver an eight-socket system. At that point, Sun would achieve parity with its current Opteron-based Sun Fire line-up. This won't be as easy to accomplish with Intel's front-side bus technology as it was with AMD's coherent HyperTransport technology, but scaling up systems is one of Sun's recognized talents.
Intel seems genuinely excited to have Sun build such a machine, since none of the other Tier 1 OEMs has shown much interest. Presumably HP would not do it because an eight-socket Xeon would bump into its higher-end Integrity line based on the Itanium processor. IBM would also see a "Big Iron" Xeon platform as a threat to its mainframe offerings. And currently Dell has no aspirations to mainframe-level computing.
Intel itself has to maneuver carefully here. Its high-end Itanium processors are targeted at mission-critical enterprise systems that require high performance, high reliability and high availability. But OEMs like Sun and Fabric7 are starting to build x86-based systems that aspire to these same capabilities. If users can't make a clear distinction between scaled-up 64-bit x86 SMP systems and Itanium systems, then Intel's got a problem.
Sun has no such conflict. Being able to sell an eight-socket Xeon system alongside the current Opteron counterpart -- the Sun Fire X4600 -- just expands options for Sun customers looking for scaled-up SMP machines and fat-node clusters based on x86 hardware. Using the fat-node model, the Tokyo Institute of Technology deployed 655 Sun Fire X4600 servers to create one of the most powerful supercomputers in the world. An Intel version may be possible in the not-too-distant future.
The good news at Sun barely paused for a breath this week. On Tuesday, the company announced its first quarterly profits in more than a year, due mainly to strong Opteron server sales and increased uptake of Solaris. And if that wasn't enough, the announcement of a $700 million equity investment by KKR Private Equity Investors reflected a solid endorsement from the investor community.
AMD Returns to Earth
Meanwhile, Advanced Micro Devices (AMD) was heading in the opposite direction. The Intel rival posted a $574 million loss in Q4 as a result of acquisition costs related to the ATI merger and dropping revenue on its x86 processor sales. The company actually sold more volume, but a "price war" with Intel narrowed margins significantly. The announcement of the Sun-Intel alliance completed a somber week for the folks at AMD.
A few IT analysts were already dumping on AMD for its poor financial performance, but if you remove the ATI merger expenses, the company would have netted a $63 million profit in Q4. The company's real challenge is to regain the momentum on the processor front. The quad-core 'Barcelona' chip could give AMD a boost when it's delivered in the second half of 2007. In the long run, AMD has to make the ATI merger work. One thing to look forward to is the upcoming Fusion processor, an architecture that will incorporate x86 and GPU cores on a single die. This would present an asymmetric challenge to Intel processors, and would also be resistant to x86 price warfare. The first Fusion implementation is planned for 2009.
This week, an Israeli company, Plurality Ltd., announced that it is developing a 64-core RISC (SPARC) processor. The evaluation hardware consists of a 16-core processor implemented on an FPGA chip; it comes with a software development kit that includes a compiler, simulator and debugger. The company claims that its Task-Oriented Programming model enables developers to easily convert their applications to take advantage of the multi-core hardware. The company's website states: "The only requirement from the programmer is to do a simple partitioning of the algorithm into tasks." Of course. What could be easier?
The product appears to be targeted at high performance embedded applications such as communications, signal processing, HD video processing, robotics, medical imaging and automotive systems. The 64-core commercial version is due out in the third quarter of this year, with a later version sporting 256 cores.
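For readers unfamiliar with the idea, "partitioning an algorithm into tasks" just means splitting the work into independent units that can run on separate cores. Plurality's SDK isn't publicly documented here, so the sketch below is a generic illustration using Python's standard library, not Plurality's actual API; the function and variable names are purely hypothetical.

```python
# Generic task-partitioning sketch -- NOT Plurality's SDK.
# The work (a sum of squares) is split into independent tasks,
# each of which runs on its own core via a process pool.
from concurrent.futures import ProcessPoolExecutor

def task(chunk):
    # Each task is an independent unit of work over its own data slice.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, n_tasks=4):
    # Partition the input into n_tasks roughly equal chunks.
    size = (len(data) + n_tasks - 1) // n_tasks
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Run the tasks in parallel and combine their partial results.
    with ProcessPoolExecutor(max_workers=n_tasks) as pool:
        return sum(pool.map(task, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1000))))  # 332833500
```

The hard part in practice, of course, is finding a partition whose tasks really are independent -- which is why "simple partitioning" tends to be anything but.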
Wyoming Goes Super
The high plains of Wyoming might seem like an unlikely place for a supercomputing center, but this is just what the National Center for Atmospheric Research (NCAR) has in mind. Boulder-based NCAR, along with the University Corporation for Atmospheric Research (UCAR), is partnering with the University of Wyoming and the state government to build a new supercomputing center in Cheyenne.
The facility will provide hundreds of teraflops of capability, which will be devoted mostly to climate and weather modeling applications. Spare cycles will be used to perform options pricing on Dick Cheney's stock portfolio once he retires to his home state of Wyoming in 2009. Just joking. I kid the Vice President.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at email@example.com.
Posted by Michael Feldman - January 25, 2007 @ 9:00 PM, Pacific Standard Time
Michael Feldman is the editor of HPCwire.