There are many important issues when it comes to advancing the field of HPC toward the exascale era, but among all these variables, a handful of sticking points really stand out: one of these is controlling for soft errors. As the number of cores per machine increases, incorrect behaviors, known as soft errors, become increasingly common.
HPC programmers who are tired of managing low-level details when using OpenCL or CUDA to write general purpose applications for GPUs (GPGPU) may be interested in Harlan, a new declarative programming language designed to mask the complexity and eliminate errors common in GPGPU application development.
In this week’s hand-picked assortment, researchers explore the path to more energy-efficient cloud datacenters, investigate new frameworks and runtime environments that are compatible with Windows Azure, and design a unified programming model for diverse data-intensive cloud computing paradigms.
We’ve scoured the journals and conference proceedings to bring you the top research stories of the week. This diverse set of items includes the latest CAREER award recipient; the push to bring parallel computing to the classroom; HPC in accelerator science; the emerging Many-Task Computing paradigm; and a unified programming model for data-intensive computing.
A giant leap in bone structure research paves the way for advances in osteoporosis treatment; details from UCSD’s Research CyberInfrastructure (RCI) Program reveal what PIs really want; and a cloud computing programming model puts the focus on predictable performance. Plus GPU-related research and more…
As NVIDIA’s upcoming Kepler-grade Tesla GPU prepares to do battle with Intel’s Knights Corner, the companies are busy formulating their respective HPC accelerator stories. While NVIDIA has enjoyed the advantage of actually having products in the field to talk about, Intel has managed to capture the attention of some fence-sitters with assurances of high programmability, simple recompiles, and transparent scalability for its Many Integrated Core (MIC) coprocessors. But according to NVIDIA’s Steve Scott, such promises ignore certain hard truths about how accelerator-based computing really works.
Steve Lionel, aka “Doctor Fortran,” defends the venerable programming language and its modern relevance.
A recent DOE workshop that focused on exascale challenges and current gaps in research and ideology provided food for thought for those seeking a “disruptive” approach to this next level of computing. We highlight a handful of the presentations, delivered by some of the most noteworthy researchers and practitioners in the field.
General Electric is sharing lessons learned from its Advanced Computing Lab in a series of learning sessions.