
Features

Tracking the Trajectory to Exascale and Beyond

Jun 8, 2015

The future of high performance computing is now being defined both in how it will be achieved and in the ways in which it will impact diverse fields in science and technology, industry and commerce, and security and society. At this time there is great expectation but much uncertainty, creating a climate of opportunity, challenge, Read more…

Jetstream: Targeting the Long Tail of Science

May 21, 2015

Improving HPC access to the so-called long tail of science is an ongoing NSF priority. Several initiatives, funded at least in part by NSF, are underway and one – Jetstream – is the first NSF-funded HPC Cloud targeted directly at domain scientists and engineers who typically have limited access to HPC resources and limited expertise. Read more…

Application Readiness at the DOE, Part III: Argonne Ramps Up Early Science Program

Apr 20, 2015

In our third and final video from the HPC User Forum panel (The Who-What-When of Getting Applications Ready to Run On, And Across, Office of Science Next-Gen Leadership Computing Systems), Tim Williams dives into the Early Science Program (ESP) being ramped up at the Argonne Leadership Computing Facility (ALCF). Williams, the ESP manager, noted the current Read more…

Application Readiness at the DOE, Part II: NERSC Preps for Cori

Apr 17, 2015

In our second video feature from the HPC User Forum panel, “The Who-What-When of Getting Applications Ready to Run On, And Across, Office of Science Next-Gen Leadership Computing Systems,” we learn more about the goals and challenges associated with getting science applications ready for the coming crop of Department of Energy (DOE) supercomputers, which in addition to being five-to-seven times faster than Read more…

Application Readiness at the DOE, Part I: Oak Ridge Advances Toward Summit

Apr 16, 2015

At the 56th HPC User Forum, hosted by IDC in Norfolk, Va., this week, three panelists from major government labs discussed how they are getting science applications ready for the coming crop of Department of Energy (DOE) supercomputers, which in addition to being five-to-seven times faster than today’s fastest big iron machines, constitute significant architectural changes. Titled “The Who-What-When of Getting Applications Ready to Read more…

Short Takes

DARPA Targets Autocomplete for Programmers

Nov 6, 2014

If Rice University computer scientists have their way, writing computer software could become as easy as searching the Internet. Two dozen computer scientists from Rice, the University of Texas-Austin, the University of Wisconsin-Madison and the company GrammaTech have joined forces to turn this promise into a reality. With $11 million in DARPA funding, the group will Read more…

ARL Researchers Win Software Design Contest

Oct 16, 2014

Researchers from the U.S. Army Research Laboratory’s Computational and Information Sciences Directorate (CISD) – David Richie and James Ross – won first place in an international software contest for their work on emulators. Their submission, Cycle-Accurate 8080 Emulation Using an ARM11 Processor with Dynamic Binary Translation, addresses some of the programming challenges of next generation Read more…

NSF Promotes Data Science with $31M Award

Oct 1, 2014

The National Science Foundation (NSF) announced today some $31 million in awards for 17 innovative projects geared toward the promotion of data science and a robust data infrastructure. The agency seeks to improve the nation’s capacity in data science by investing in the development of infrastructure, making it easier to use data, Read more…

Deconstructing Moore’s Law’s Limits

Aug 18, 2014

For the past five decades, computers have progressed on a predictable trajectory, doubling in speed roughly every two years in tune with Gordon Moore’s oft-cited observation-turned-prophecy. Although semiconductor scaling continues to yield performance gains, many perceive a tipping point is nigh, where the cost-benefit analysis of further miniaturization breaks down. The latest researcher to weigh Read more…

Parallel Programming with OpenMP

Jul 31, 2014

One of the most important tools in the HPC programmer’s toolbox is OpenMP, a standard for expressing shared memory parallelism that was published in 1997. The current release, version 4.0, came out last November. In a recent video, Oracle’s OpenMP committee representative Nawal Copty explores some of the tool’s features and common pitfalls. Copty explains Read more…

Off the Wire