This week at the AMD Fusion Developer Summit in Bellevue, Wash., I sat down for a chat about high performance computing and clouds with Margaret Lewis from the company’s server and software division. While she was careful to point out that cloud computing is nothing new — especially to the high performance computing community, where the concept first took root — she did suggest that the technologies have matured to the point where the community that invented clouds is now able to exploit them more fully.
We avoided conversations about definitions and generalities wrapped in marketing, since after all, one tends to get enough of this when it comes to conversations about cloud computing. Instead, we cut right to the chase to find out what makes AMD (or any other chipmaker for that matter) invested in clouds — and what AMD is doing to see that clouds are suitable for HPC.
One of the more salient threads of our conversation this week concerned the role of AMD’s Opteron processors. As you’ll hear below in the clip from our interview, AMD’s strategy for clouds is to provide more real cores that can handle more virtual machines, transactions and computation — all within a defined power envelope, while cost-effectively delivering high memory bandwidth, low memory latency and large memory footprints.
Pitches aside, AMD has taken an interesting approach to creating an optimized product for cloud computing datacenters. As she says above when asked what differentiates AMD, the company took a different path, avoiding hyperthreading technology because logical processors share execution resources, which can introduce bottlenecks. Their focus, she said, is on developing real cores, not logical cores. This is apparent in the Bulldozer architecture set to roll out in a few months, a core-focused redesign that emphasizes both effectiveness and efficiency.
One of the sections of our discussion that didn’t make it to the final cut was how AMD worked with Microsoft as it built its Azure technology. Lewis said that at the time, AMD was selected because it was the only vendor offering the virtualization technology needed to build a cluster running a complex software stack (database, multiple applications, rich middleware, etc.). That technology, called RVI (Rapid Virtualization Indexing), gave Microsoft a hardware-assisted way to map each virtual machine’s memory through to the hypervisor’s view of memory — keeping in mind that the hypervisor must track the memory of every virtual machine as well as its own.
She said that this approach allowed some of the memory mapping to be done at the hardware level, which relieved the virtualization software of some complexity and freed up capacity to run these ultra-complex stacks. Microsoft, she noted, still uses some of this technology to run some of its own cloud-based apps. As she put it, “this is a good success story for us because it shows what you can do with off-the-shelf commercial technology after reworking it to fit into today’s cloud environments.”
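To make the memory-mapping idea concrete: a guest OS translates its virtual addresses to guest-physical addresses, and the hypervisor must then translate those to host-physical addresses. What follows is a minimal toy sketch of that two-level walk, using plain dictionaries and made-up page numbers — it is an illustration of the general nested-translation concept, not AMD’s actual implementation, which performs the second-level walk in hardware rather than in hypervisor software.

```python
# Toy model of two-level address translation under virtualization.
# Page tables are modeled as dicts of page numbers; all values are
# illustrative, not real addresses.

PAGE = 4096  # 4 KiB pages

# Guest page table: guest-virtual page -> guest-physical page
guest_pt = {0: 7, 1: 3}
# Nested (hypervisor) table: guest-physical page -> host-physical page
nested_pt = {3: 42, 7: 19}

def translate(gva, guest_pt, nested_pt):
    """Walk both tables: guest-virtual -> guest-physical -> host-physical."""
    page, offset = divmod(gva, PAGE)
    gpa_page = guest_pt[page]        # first level: the guest's own mapping
    hpa_page = nested_pt[gpa_page]   # second level: the hypervisor's mapping
    return hpa_page * PAGE + offset

# Guest-virtual page 1 -> guest-physical page 3 -> host-physical page 42
hpa = translate(1 * PAGE + 12, guest_pt, nested_pt)
print(hpa)  # 42 * 4096 + 12 = 172044
```

Without hardware support, the hypervisor must maintain a combined (“shadow”) mapping in software and update it whenever the guest changes its page tables; doing the second lookup in hardware removes that bookkeeping, which is the relief Lewis described.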
For more about AMD’s general high performance computing roadmap and more HPC-specific questions, see a companion feature at HPCwire featuring Lewis.