April 28, 2011
AMD has been plugging away at the x86 business for nearly 30 years now. In 1982 the company crafted a licensing deal with Intel to become an alternative source of x86 processors for the burgeoning personal computer market -- specifically to satisfy IBM's demands for two sources of CPUs for its new PC offerings. And although AMD has parlayed this into a multi-billion dollar business, Intel has never allowed its smaller competitor to be anything but a secondary choice for x86 parts.
As of the first quarter of 2011, Intel owns 81 percent of the x86 processor space, with AMD a distant second with 18.2 percent. The numbers are not much different than they were in 2010 and are not likely to be much different in 2012, 2013, 2014, and so on. The only thing that has maintained AMD's viability is the sheer size of the market and Intel's willingness to keep AMD as a co-dependent partner.
But as AMD proved when it acquired GPU-maker ATI in 2006, it is not averse to adopting new processor technology when it suits the company's purpose. Now there's talk that AMD may be considering a second leap of faith, this time in the direction of the ARM architecture -- a low-power CPU design that has become the darling of the mobile computing space. An analysis this week by Peter Clarke for EE Times spells out why a fully-ARMed AMD makes a lot of sense.
To begin with, Clarke argues that playing catch-up with Intel is not the greatest recipe for success, especially now that AMD has gotten out of the fab business and can thus no longer compete on that playing field. And -- although Clarke didn't point this out -- AMD lost much of its x86 architectural differentiation when Intel adopted AMD's system design approach of integrated memory controllers and a HyperTransport-like interconnect in QPI. At least for AMD's non-GPU business, this forces the company to shrink its margins by trying to undercut its larger rival on price.
But the shorter-term rationale for hopping on the ARM bandwagon is to expand AMD's footprint beyond its x86 base. Today the company doesn't have viable low-power offerings for the rapidly growing tablet and mobile device space, and ARM could make for a relatively pain-free path into those markets. And with Microsoft's endorsement of ARM as a Windows platform, AMD could also create non-x86 offerings for traditional desktops and notebooks. And the company wouldn't have to jettison its x86 portfolio to do any of this.
Clarke notes that for the cost of a few million dollars and a few percent royalty per processor, AMD would be able to jumpstart its ARM business. From his EE Times article:
...Now it may be that a hard-pressed AMD was simply unable to create designs for all the different sectors and performance profiles in a proliferating PC landscape. But that alone is reason to get off the x86 treadmill and let ARM do some of the heavy lifting. And having missed the tablet computer boom, ARM licensing would provide the fastest way for AMD to get a chip to market and make up lost ground.
Beyond that, ARM appears headed to server-land, although according to ARM Holdings president and co-founder Tudor Brown, not for another five years or so. But to my mind, this is yet another reason for AMD to start its ARM adventure now. When ARM Holdings comes up with its anticipated 64-bit design, it's only a hop, skip and a jump to a fully-fledged server implementation.
In fact, if ARM-based servers are to become a reality, somebody will have to develop those chips, and AMD has the right combo of system engineering smarts and business relationships to make it happen. Some of the happiest days at AMD were when it outflanked Intel with its 64-bit x86 Opteron designs (at a time when Intel believed its new Itanium CPU would take the high ground in 64-bit servers). It wouldn't surprise me if there are some execs at AMD who are dreaming about a repeat performance recast under the ARM umbrella.
A future ARM-based server landscape may be a little trickier to navigate than the x86 one, though. Since the architecture can be licensed by anyone, there's nothing to prevent any server maker from coming up with its own SoC designs. In fact, IBM and Fujitsu already license ARM for other purposes. For that matter, even ARM chip vendors like Samsung and Texas Instruments could start churning out server-grade designs if they could drum up the business.
Calxeda (formerly Smooth-Stone) is getting ready to launch an ARM server based on the current generation 32-bit CPU designs. A 2U enclosure will house up to 120 ARM quad-core nodes, while chewing up only about 5 watts per node (including memory). The company claims a single 2U box will deliver the same throughput as a rack of vanilla x86 servers, reducing power requirements by 90 percent.
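A quick back-of-the-envelope check shows how Calxeda's numbers could add up. The 120-node count and 5-watt-per-node figure come from the announcement above; the per-server draw and rack population for the x86 comparison are illustrative assumptions, not vendor specifications:

```python
# Sanity check of the Calxeda power claim using the figures cited above.
ARM_NODES_PER_2U = 120        # quad-core ARM nodes per 2U enclosure
WATTS_PER_ARM_NODE = 5        # per node, including memory (Calxeda's figure)
arm_box_watts = ARM_NODES_PER_2U * WATTS_PER_ARM_NODE

# Assumed comparison rack (hypothetical figures for illustration only):
X86_SERVER_WATTS = 300        # assumed draw of one "vanilla" x86 server
X86_SERVERS_PER_RACK = 20     # assumed rack population
x86_rack_watts = X86_SERVER_WATTS * X86_SERVERS_PER_RACK

savings = 1 - arm_box_watts / x86_rack_watts
print(f"ARM 2U box: {arm_box_watts} W, x86 rack: {x86_rack_watts} W, "
      f"savings: {savings:.0%}")
```

Under those assumed x86 figures, the 2U box draws 600 watts against 6 kilowatts for the rack, which is consistent with the claimed 90 percent reduction -- provided, of course, that the single box really does match the rack's throughput.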
For the HPC crowd, NVIDIA has announced its intentions to marry ARM CPUs with its GPU technology. The GPU maker's "Project Denver" will glue the two architectures together, AMD Fusion-like, and create a family of processors for personal computers, workstations, servers and supercomputers. In fact, if AMD perceives NVIDIA's ARM formula for heterogeneous processors as a threat to its own Fusion processor plans, that would be yet another reason to hedge its bets with ARM.
Fueling some of this AMD-ARM speculation is this week's announcement that ARM fellow and vice president Jem Davies will deliver a keynote at the next AMD Fusion Developer Summit in June. Davies is on tap to discuss ARM's legacy of heterogeneous computing, its future strategy, and its support of OpenCL.
It's likely this is nothing more than AMD looking to highlight industry support for heterogeneous computing and OpenCL. But it's worth noting that ARM products, especially ones with the GPGPU-ready Mali graphics hardware (Mali-400 MP and Mali-T604), ostensibly compete with AMD's Fusion chips. At the Many-core and Reconfigurable Supercomputing Conference held earlier this month in the UK, Dr. Krisztian Flautner, vice president of research & development at ARM, said the company would be releasing a reference design board that includes an ARM CPU plus a Mali-T604 GPU, along with a full OpenCL 1.1 implementation that targets both computing units. So one might wonder why AMD would spotlight such technology at one of its events.
In any case, I think Clarke's analysis is on target and I've got to believe the AMD digerati are mulling over an ARM play. In retrospect, it's a little surprising they haven't already pulled the trigger. After nearly three decades of playing second fiddle to Intel, it's time for the company to strike out on its own.
Posted by Michael Feldman - April 28, 2011 @ 2:25 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.