Is US Falling Behind in Supercomputing and Exascale?

Jan 29, 2015 |

Few dispute the importance of supercomputing to U.S. competitiveness. The argument is over whether current government efforts – primarily through the Advanced Scientific Computing Research (ASCR) program within the U.S. Department of Energy (DOE) – are effective and sufficient or wasteful and excessive. Yesterday, a panel of HPC experts testifying at a U.S. House of Read more…

EMSL Named an Intel Parallel Computing Center

Jan 28, 2015 |

Jan. 28 — Intel has named EMSL, located at Pacific Northwest National Laboratory, as an Intel Parallel Computing Center. As an Intel PCC, EMSL’s scientific computing team will work with Intel to modernize the codes of NWChem to take advantage of technological advancements in computers. NWChem is one of the Department of Energy’s premier open-source computational Read more…

Compilers and More: Is Amdahl’s Law Still Relevant?

Jan 22, 2015 |

From time to time, you will read an article or hear a presentation that states that some new architectural feature or some new programming strategy will let you work around the limits imposed by Amdahl’s Law. I think it’s time to finally shut down the discussion of Amdahl’s Law. Here I argue that the premise Read more…
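For readers who want the bound itself: Amdahl's Law says that if a fraction p of a program's work can be parallelized across N processors, the overall speedup is 1 / ((1 - p) + p/N), so the serial fraction (1 - p) caps the speedup no matter how large N grows. A minimal C sketch of that arithmetic (the parallel fractions below are illustrative, not taken from the article):

    #include <stdio.h>

    /* Amdahl's Law: speedup = 1 / ((1 - p) + p / n),
       where p is the parallel fraction and n the processor count. */
    static double amdahl_speedup(double p, double n)
    {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main(void)
    {
        const double fractions[] = { 0.50, 0.90, 0.99 };  /* illustrative values */
        for (int i = 0; i < 3; i++) {
            double p = fractions[i];
            /* Even with unlimited processors, speedup never exceeds 1 / (1 - p). */
            printf("p = %.2f: speedup on 1024 cores = %6.1f, hard limit = %6.1f\n",
                   p, amdahl_speedup(p, 1024.0), 1.0 / (1.0 - p));
        }
        return 0;
    }

Even a code that is 99 percent parallel tops out at a 100x speedup, which is the limit that claims of "working around" Amdahl's Law are pushing against.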

Helping Experimental Scientists Take Supercomputers to the Max

Dec 30, 2014 |

Doug Baxter is a capability lead for the Molecular Science Computing Facility in the Environmental Molecular Sciences Laboratory (EMSL) at Pacific Northwest National Laboratory. He and his team are responsible for the software side of the operation, and they help experimental scientists get the most out of EMSL’s supercomputing resources. The facility is the home Read more…

Combustion Simulation in the Exascale Era

Nov 13, 2014 |

One of the many excellent sessions at SC14 will address how the twin technologies of HPC and big data are coalescing to enable major scientific breakthroughs in the field of turbulent combustion. As part of the SC14 Technical Program, Jacqueline H. Chen, a Distinguished Member of Technical Staff at the Combustion Research Facility at Sandia National Laboratories, Read more…

DARPA Targets Autocomplete for Programmers

Nov 6, 2014 |

If Rice University computer scientists have their way, writing computer software could become as easy as searching the Internet. Two dozen computer scientists from Rice, the University of Texas at Austin, the University of Wisconsin-Madison and the company GrammaTech have joined forces to turn this promise into a reality. With $11 million in DARPA funding, the group will Read more…

The Exascale Revolution

Oct 23, 2014 |

The post-petascale era is marked by systems with far greater parallelism and architectural complexity. Failing some game-changing innovation, crossing the next 1000x performance barrier will be more challenging than previous efforts. At the 2014 Argonne National Laboratory Training Program on Extreme Scale Computing (ATPESC), held in August, Professor Pete Beckman delivered a talk on “Exascale Architecture Trends” and their impact on the programming and execution of computational Read more…

ARL Researchers Win Software Design Contest

Oct 16, 2014 |

Researchers from the U.S. Army Research Laboratory’s Computational and Information Sciences Directorate (CISD) – David Richie and James Ross – won first place in an international software contest for their work on emulators. Their submission, Cycle-Accurate 8080 Emulation Using an ARM11 Processor with Dynamic Binary Translation, addresses some of the programming challenges of next generation Read more…

NSF Promotes Data Science with $31M Award

Oct 1, 2014 |

The National Science Foundation (NSF) announced today some $31 million in awards for 17 innovative projects geared toward the promotion of data science and a robust data infrastructure. The NSF seeks to improve the nation’s capacity in data science by investing in the development of infrastructure, making it easier to use data, Read more…

New Degrees of Parallelism, Old Programming Planes

Aug 28, 2014 |

Exploiting the capabilities of HPC hardware is now more a matter of pushing into deeper levels of parallelism than of adding more cores or overclocking. What this means is that the time is right for a revolution in programming. The question is whether that revolution should be one that torches the landscape or that handles things Read more…

Deconstructing Moore’s Law’s Limits

Aug 18, 2014 |

For the past five decades, computers have progressed on a predictable trajectory, doubling in speed roughly every two years in tune with Gordon Moore’s oft-cited observation-turned-prophecy. Although semiconductor scaling continues to yield performance gains, many perceive a tipping point is nigh, where the cost-benefit analysis of further miniaturization breaks down. The latest researcher to weigh Read more…

NERSC Launches Exascale Readiness Program with Intel, Cray

Aug 11, 2014 |

The National Energy Research Scientific Computing Center (NERSC) is collaborating with supercomputing vendors Intel and Cray to prepare for Cori, the Cray XC supercomputer scheduled to be deployed at NERSC in 2016. Named in honor of American biochemist Gerty Cori, the next-generation supercomputer will have a sustained performance that is at least ten times that Read more…

Parallel Programming with OpenMP

Jul 31, 2014 |

One of the most important tools in the HPC programmer’s toolbox is OpenMP, a standard for expressing shared memory parallelism that was published in 1997. The current release, version 4.0, came out last November. In a recent video, Oracle’s OpenMP committee representative Nawal Copty explores some of the tool’s features and common pitfalls. Copty explains Read more…
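For context on what that shared memory model looks like in practice, here is a minimal (hypothetical, not drawn from Copty's video) C example: the reduction clause gives each thread a private partial sum, avoiding the unsynchronized updates to a shared variable that are one of the classic OpenMP pitfalls.

    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        const int n = 1000000;
        double sum = 0.0;

        /* Without reduction(+:sum), every thread would update the shared
           variable concurrently -- a data race and a common OpenMP mistake. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++) {
            sum += 1.0 / (i + 1.0);
        }

        printf("partial harmonic sum = %f (up to %d threads)\n",
               sum, omp_get_max_threads());
        return 0;
    }

Compile with an OpenMP-capable compiler (for example, gcc -fopenmp) to run the loop across however many threads the runtime provides.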

Building Parallel Code with Hybrid Fortran

Jul 31, 2014 |

Over at the Typhoon Computing blog, Michel Müller addresses a topic that is top of mind for many HPC programmers: porting code to accelerators. Fortran programmers porting their code to GPGPUs (general purpose graphics processing units) have a new tool at their disposal, called Hybrid Fortran. Müller shows how this open source framework can enhance portability without sacrificing performance and maintainability. From the blog (editor’s note: the site Read more…

The Portability Mandate

Jul 24, 2014 |

Argonne National Laboratory recently published several sessions from its Summer 2013 Extreme-Scale Computing program to YouTube. One of these is a lesson on combining performance and portability presented by Argonne Assistant Computational Scientist Jeff Hammond. For some reason the video image does not match the lecture, but you will find a link to Hammond’s slide deck here. Read more…

Parallel Computing Trends

Jul 22, 2014 |

One of the most pressing issues faced by the HPC community is how to go about attracting and training the next generation of HPC users. The staff at Argonne National Laboratory is tackling this challenge head on by holding an intensive summer school in extreme-scale computing. One of the highlights of the 2013 summer program was a Read more…

Exascale Resilience Turns a Corner

Jul 21, 2014 |

While advancing the field of HPC into the exascale era is beset by many obstacles, resiliency might be the thorniest of all. As the number of cores proliferates, so too does the number of incorrect behaviors, threatening not just the operation of the machine, but the validity of the results as well. When you Read more…

The Case for a Parallel Programming Alternative

Jul 2, 2014 |

Cray engineers have been working on a new parallel computing language, called Chapel. Aimed at large-scale parallel computing environments, Chapel was designed with a focus on productivity and accessibility. The project originated from the DARPA High Productivity Computing Systems (HPCS) program, which challenged HPC vendors to improve the productivity of high-end computing systems. To explain Read more…

Programmability Matters

Jun 30, 2014 |

While discussions of HPC architectures have long centered on performance gains, that is not the only measure of success, according to Petteri Laakso of Vector Fabrics. Spurred by ever-proliferating core counts, programmability is taking on new prominence. Vector Fabrics is a Netherlands-based company that specializes in multicore software parallelization tools, so programmability is high on Read more…

An Easier, Faster Programming Language?

Jun 18, 2014 |

The HPC community has turned out supercomputers surpassing tens of petaflops of computing power by stringing together thousands of multicore processors, often in tandem with accelerators like NVIDIA GPUs and Intel Xeon Phi coprocessors. Of course, these multi-million dollar systems are only as useful as the programs that run on them, and developing applications that can Read more…