In this week’s hand-picked assortment, researchers explore the path to more energy-efficient cloud datacenters, investigate new frameworks and runtime environments that are compatible with Windows Azure, and design a unified programming model for diverse data-intensive cloud computing paradigms.
We’ve scoured the journals and conference proceedings to bring you the top research stories of the week. This diverse set of items includes the latest CAREER award recipient; the push to bring parallel computing to the classroom; HPC in accelerator science; the emerging Many-Task Computing paradigm; and a unified programming model for data-intensive computing.
A giant leap in bone structure research paves the way for advances in osteoporosis treatment; details from UCSD’s Research CyberInfrastructure (RCI) Program reveal what PIs really want; and a cloud computing programming model puts the focus on predictable performance. Plus GPU-related research and more…
As NVIDIA’s upcoming Kepler-grade Tesla GPU prepares to do battle with Intel’s Knights Corner, the companies are busy formulating their respective HPC accelerator stories. While NVIDIA has enjoyed the advantage of actually having products in the field to talk about, Intel has managed to capture the attention of some fence-sitters with assurances of high programmability, simple recompiles, and transparent scalability for its Many Integrated Core (MIC) coprocessors. But according to NVIDIA’s Steve Scott, such promises ignore certain hard truths about how accelerator-based computing really works.
Steve Lionel, aka “Doctor Fortran,” defends the venerable programming language and makes the case for its modern relevance.
A recent DOE workshop that focused on exascale challenges and current gaps in research and ideology provided food for thought for those seeking a “disruptive” approach to this next level of computing. We highlight a handful of the presentations, delivered by some of the most noteworthy researchers and practitioners in the field.
General Electric is sharing lessons learned from its Advanced Computing Lab in a series of learning sessions.
Interpreted programming languages usually don’t find too many friends in high performance computing. Yet Python, one of the most popular general-purpose interpreted languages, has garnered a small community of enthusiastic followers. True believers got the opportunity to hear about the language in the HPC realm in a tutorial session on Monday and a BoF session on Wednesday. Argonne National Lab’s William Scullin, who participated in both events, talked with HPCwire about the status of Python in this space and what developers might look forward to.
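To illustrate why the language finds traction despite its interpreted core, here is a minimal sketch (our own illustration, not drawn from the tutorial or BoF) of the pattern most Python HPC codes follow: keep the hot loops inside NumPy’s compiled kernels and coordinate ranks with mpi4py. Both packages are assumed to be installed, and the array sizes are arbitrary.

```python
# Minimal sketch: per-rank NumPy work plus an MPI reduction via mpi4py.
# Run with, e.g.: mpiexec -n 4 python sketch.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank builds and sums its own slice in compiled NumPy code,
# so the Python interpreter never touches the inner loop.
local = np.arange(rank * 1_000_000, (rank + 1) * 1_000_000, dtype=np.float64)
local_sum = local.sum()

# mpi4py exposes the familiar MPI collectives; reduce() gathers the
# partial sums onto rank 0, just as a C or Fortran code would.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print(f"global sum across {size} ranks: {total}")
```

The division of labor on display here — Python orchestrates, compiled libraries compute — is the usual argument for the language in this space.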
There is a growing feeling that merely taking the latest processor offerings from Intel, AMD or IBM will not get us to exascale within a reasonable time frame, cost budget, and power envelope. One avenue to explore is designing and building more specialized systems, aimed at the types of problems seen in HPC, or at least at the problems seen in some important subset of HPC. Of course, such a strategy loses the advantages we’ve enjoyed over the past two decades of commoditization in HPC; however, a more special-purpose design may be wise, or even necessary.