Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them



Features

The Week in HPC Research

Apr 25, 2013 |

We’ve scoured the journals and conference proceedings to bring you the top research stories of the week. This diverse set of items includes advancements in petascale-era development environments; balancing performance with power efficiency; optimizing computer science instruction; and a possible path to extreme heterogeneity.

Adapteva Shows Off $99 Supercomputer Boards

Apr 23, 2013 |

Last week, Adapteva revealed the first production units of its $99 Linux “supercomputer.” Speaking at the Linux Collaboration Summit in San Francisco, California, CEO Andreas Olofsson announced the first batch of Parallella final form factor boards will be shipped to the chipmaker’s 6,300 Kickstarter supporters by this summer.

The Week in HPC Research

Apr 11, 2013 |

The top research stories of the week include an evaluation of multi-stage programming with Terra; a look at parallel I/O for multicore architectures; a survey of on-chip monitoring approaches used in multicore SoCs; a review of grid security protocols and architectures; and a discussion of the finer distinctions between HPC and cloud computing.

The Week in HPC Research

Mar 21, 2013 |

The top research stories of the week include an evaluation of sparse matrix multiplication performance on Xeon Phi versus four other architectures; a survey of HPC energy efficiency; performance modeling of OpenMP, MPI and hybrid scientific applications using weak scaling; an exploration of anywhere, anytime cluster monitoring; and a framework for data-intensive cloud storage.

The Week in HPC Research

Mar 7, 2013 |

The top research stories of the week include novel methods of data race detection; a comparison of predictive laws; a review of FPGA’s promise; GPU virtualization using PCI Direct pass-through; and an analysis of the Amazon Web Services High-IO platform.

Short Takes

Optimizing Performance in Parallel Programming

Nov 5, 2013 |

Parallel computing refers to the simultaneous use of multiple processing elements to solve a computational problem. Large jobs are segmented into smaller parts, which are then solved concurrently. For most of the history of computing, computation was serial: instructions executed one at a time, each completing before the next began. Parallel computing arose in response to the constraints Read more…
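The decomposition described above, segmenting a large job into smaller parts solved concurrently, can be sketched in a few lines of Python. This is a generic illustration, not code from any of the projects covered here; the function names are our own, and the work (summing a list) stands in for any divisible computation:

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    # Each processing element solves its part independently.
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Segment the large job into smaller parts...
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # ...solve the parts concurrently, then combine the results.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    data = list(range(1_000_000))
    assert parallel_sum(data) == sum(data)
```

The serial version would simply call `sum(data)` directly; the parallel version trades coordination overhead for concurrency, which only pays off when each part carries enough work.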

Reining in Restarts Through Selective Recovery

Nov 4, 2013 |

Computational failures take a steep toll in the HPC sciences. Events such as broken node electronics, software bugs, insufficient hardware resources, and communication faults stymie work on expensive machines and bedevil computer scientists. An article at Deixis Magazine chronicles the work of a Pacific Northwest National Laboratory researcher who is developing load balancing techniques to Read more…

Scientists Prepare Weather Model for GPU-based Systems

Oct 25, 2013 |

When Piz Daint – the Cray supercomputer installed at the Swiss National Supercomputing Center (CSCS) – was first announced, the project leaders cited the benefits for COSMO, an atmospheric model used by the German Meteorological Service, MeteoSwiss and other institutions for their daily weather forecasts. The COSMO model is maintained by the Consortium for Small-scale Read more…

Reprising the 13 Dwarfs of OpenCL

Oct 14, 2013 |

If you’re considering using GPUs to speed up compute-intensive applications, it’s important to understand which algorithms work best with GPUs and other vector processors. As HPC expert and founder of StreamComputing Vincent Hindriksen puts it, you want to know “what kind of algorithms are faster when using accelerators and OpenCL.” Professor Wu Feng and a team of Read more…

Quantifying Uncertainty at Scale

Oct 9, 2013 |

University of Texas at Austin researcher George Biros has received a $2.85 million grant from the Department of Energy to develop methods for estimating uncertainty in large-scale computer simulations. The project has three main thrusts of particular interest to the DOE: the melting of continental ice sheets in Antarctica; complex fluid flows (such as what Read more…

Off the Wire