June 15, 2011
“Heterogeneous computing” and “open standards” have been the key phrases resonating throughout the halls at the AMD Fusion Developer Summit this week in Bellevue, Washington. We have been onsite to get a sense of how AMD’s vision of heterogeneous computing is playing out for developers and to better understand what’s on the horizon for OpenCL. Along the way, we’ve picked up some interesting news and insights that bridge the worlds of consumer technology and HPC, and gained a broader sense of the strategies of AMD and, now, ARM.
ARM Fellow and VP of Technology Jem Davies told the audience of nearly 700 developers during his keynote that although it might seem strange for AMD to put ARM on center stage, those same two themes, heterogeneous computing and open standards, are precisely what is bringing the two companies together.
In the video below, we pulled some highlights from Davies’ keynote address that went beyond mobile computing and addressed ARM’s role in the server space and beyond.
Like many others at the event this week, he too used heterogeneous computing and open standards as the platform for his talk, but he anchored the discussion in the power efficiency of future systems. He reminded everyone that ARM isn’t a “one trick pony” and certainly isn’t focused only on mobile devices. He pointed to the problems data centers face as they attempt to cram more compute into overheated, increasingly expensive spaces. For some time now, Davies noted, data centers have been spending far more on power delivery and cooling than on hardware itself, a trend he expects to continue for the short term.
The second half of the video highlights some of his views on the concept of mergers between GPU and CPU.
Despite all the discussion about open standards, there was one item along those lines that Davies didn’t address at all, and one, you could argue, that would have been critical given that he was speaking to an audience of developers. In his entire forty-minute speech, not once did we hear OpenCL mentioned in any context. When an audience member raised this during the question-and-answer period, Davies simply said he hadn’t addressed it because it was a given. He concluded with the very general statement, “A world of open standards will win in the end.”
On a side note, it’s always interesting to get a firsthand view into the business perspectives of a technology executive. During his speech, Davies discussed the concept of energy efficiency via heterogeneous computing, noting that if “IP designers and developers can solve that problem [energy], everything else is easy. We will all get rich, and I foresee a world of sunshine and drinks with little parasols in them.”