Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

Sectors » Academia & Research

Features

Earthquake Simulation Hits Petascale Milestone

Apr 15, 2014

German researchers are helping to push back the frontiers of large-scale simulation. Using the IBM “SuperMUC” high performance computer at the Leibniz Supercomputing Center (LRZ), a cross-disciplinary team of computer scientists, mathematicians and geophysicists successfully scaled an earthquake simulation to more than one petaflop/s, i.e., one quadrillion floating point operations per second. The collaboration included participants Read more…

Why Iterative Innovation is the Only Path to Exascale

Apr 14, 2014

If we’re out of “magic bullets” that can shoot across supercomputing space, shattering assumptions about how high performance computing operates efficiently at massive scale, we’re left with one option: refine and tweak what already exists, while pushing as much funding as possible toward the blue sky above in the hope that another disruptive technology will emerge. Read more…

NVIDIA Highlights GPU Progress on Titan Supercomputer

Mar 27, 2014

The GPU Technology Conference this week in San Jose offered plenty of material for the supercomputing set, with a number of presentations focused on specific programming challenges for large-scale scientific and enterprise HPC applications. The Titan system at Oak Ridge National Laboratory tied together key themes across a number of the talks, which helped put Read more…

Swiss Hybrid Petaflopper Opens for Research

Mar 24, 2014

During the 2013 NVIDIA GPU Technology Conference, the Swiss National Supercomputing Center (CSCS) revealed that its Cray XC30 “Piz Daint” supercomputer was on track to become Europe’s fastest GPU-accelerated number-cruncher, and the first Cray machine to be equipped with both Intel Xeon processors and NVIDIA GPUs. Now, one year later, the revved-up Piz Daint is officially Read more…

A Blueprint for Centralized Research Data Storage and Sharing

Mar 3, 2014

The University of Colorado Boulder PetaLibrary storage system was recently deployed by the CU Research Computing (RC) group to address the growing challenges that researchers face in large-scale data storage and data management. The PetaLibrary, funded in part by the National Science Foundation, provides a variety of services to campus researchers, including high-performance short-term storage, Read more…

Short Takes

Russian-Bred Supercomputer in the Works

Apr 11, 2014

Russia is developing a home-grown supercomputer for military-industrial applications, according to a report in Prensa Latina. Ruselectronics CEO Andrei Zverev revealed that the state-sponsored electronics holding company is coordinating with the Ministry of Industry and Trade to create a 1.2-petaflops computer to serve the needs of the Russian defense industry. “All of Read more…

Viglen Gives UK Science Facility JASMIN £4 Million Makeover

Apr 10, 2014

British systems integrator Viglen has won a £4 million contract to outfit JASMIN, a UK-based environmental science data analysis and simulation facility, with petascale data processing and storage capabilities. The contract calls for the design, supply and installation of a turnkey integrated HPC, storage and network solution at the site, which is run by Read more…

Leading Edge Versus Bleeding Edge

Apr 10, 2014

Enterprises are always looking for an edge over the competition, and information technology was created initially and specifically to be that edge. Decades later, computing in its various forms is the foundation of the modern corporation, and companies are still looking for new ways of gaining an advantage. More often than not, that Read more…

HPC ‘App’ for Industry Stresses Ease of Use

Apr 8, 2014

One of the main enterprise uses for high performance computing (HPC) is to bring product designs to market faster via a process known as rapid prototyping. This week three major companies – Unilever, Syngenta and Infineum – have partnered with the HPC facilities at the Science and Technology Facilities Council’s (STFC’s) Hartree Centre, drawn by Read more…

Data Management in Times of Disaster

Apr 4, 2014

When natural disaster strikes – be it a flood, an earthquake or a tsunami – every second counts. Just as emergency teams must be ready to go at a moment’s notice, so must critical data management systems. This important topic, an essential element of civil protection around the world, is the focus of a research paper, Read more…

Off the Wire

ORNL’s John Wagner Receives E.O. Lawrence Award

Apr 16, 2014

OAK RIDGE, Tenn., April 16 — Oak Ridge National Laboratory researcher John Wagner has been named a 2013 recipient of the Department of Energy’s Ernest Orlando Lawrence Award for his work in advancing computer, information and knowledge sciences. Wagner, a nuclear engineer who serves as national technical director for DOE’s Nuclear Fuels Storage and Transportation Read more…

SC14 Submissions for Panels Due April 25

Apr 16, 2014

April 16 — Submit your panel proposals by Friday, April 25. Panels are among the most important and heavily attended events at SC, so your panel should feature lively, rapid-fire content with challenging questions related to high performance computing, networking, storage and associated analysis technologies for the foreseeable future. Panels offer a rare opportunity for Read more…

NCI to Utilize Mellanox CloudX Interconnect

Apr 15, 2014

SUNNYVALE, Calif. & YOKNEAM, Israel, April 15 – Mellanox Technologies, Ltd., a leading supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, today announced that the National Computational Infrastructure (NCI), hosted at the Australian National University, has selected Mellanox’s interconnect to support Australia’s national research computing service, which provides world-class, high-end services to Read more…

Cluster 2014 Set for September

Apr 15, 2014

April 15 — Clusters have become the workhorse of computational science and engineering research, powering innovation and discovery that advance science and society. They form the base of today’s rapidly evolving cloud and HPC infrastructures and are used to solve some of the most complex problems. The challenge of making them scalable, efficient, and Read more…

SDSC Enables Large-Scale Data Sharing Using Globus

Apr 14, 2014

April 14 — The San Diego Supercomputer Center (SDSC) at the University of California, San Diego, has implemented a new feature of the Globus software that will allow researchers using the Center’s computational and storage resources to easily and securely access and share large data sets with colleagues. In the era of “Big Data”-based science, accessing and Read more…