<img style="float: left;" src="http://media2.hpcwire.com/hpcwire/knights_corner_small.JPG" alt="" width="105" height="87" />As NVIDIA's upcoming Kepler-grade Tesla GPU prepares to do battle with Intel's Knights Corner, the companies are busy formulating their respective HPC accelerator stories. While NVIDIA has enjoyed the advantage of actually having products in the field to talk about, Intel has managed to capture the attention of some fence-sitters with assurances of high programmability, simple recompiles, and transparent scalability for its Many Integrated Core (MIC) coprocessors. But according to NVIDIA's Steve Scott, such promises ignore certain hard truths about how accelerator-based computing really works.
Steve Lionel, aka "Doctor Fortran," defends the venerable programming language and its modern relevance.
A recent DOE workshop focused on exascale challenges and current gaps in research and methodology provided food for thought for those seeking a "disruptive" approach to this next level of computing. We highlight a handful of the presentations, delivered by some of the most noteworthy researchers and practitioners in the field.
General Electric is discussing some lessons learned from its Advanced Computing Lab in a series of learning sessions.
Interpreted programming languages usually don’t find too many friends in high performance computing. Yet Python, one of the most popular general-purpose interpreted languages, has garnered a small community of enthusiastic followers. True believers got the opportunity to hear about the language in the HPC realm in a tutorial session on Monday and a BoF session on Wednesday. Argonne National Lab’s William Scullin, who participated in both events, talked with HPCwire about the status of Python in this space and what developers might look forward to.
There is a growing feeling that merely taking the latest processor offerings from Intel, AMD or IBM will not get us to exascale within a reasonable time frame, budget, and power envelope. One avenue to explore is designing and building more specialized systems, aimed at the types of problems seen in HPC, or at least at the problems seen in some important subset of HPC. Of course, such a strategy forfeits the advantages we've enjoyed over the past two decades of commoditization in HPC; even so, a more special-purpose design may be wise, or even necessary.
Most of the efforts to address the problem of shrinking transistor geometries have focused on making the devices behave more precisely. But what if, instead of trying to make the transistors better, we purposefully made them worse? Although it sounds counter-intuitive, developing processors that are naturally error-prone is exactly what one team of researchers has set out to do.
A new, simplified language for programming in cloud environments called “Bloom” is set for release later this year. An interview with one of Bloom’s creators, Joseph Hellerstein of U.C. Berkeley, explains the practical elements.
A new generation of HPC programmers is embracing higher-level languages.
Striking a balance between science and software engineering.