Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them


Tag: big data

Prioritizing Data in the Age of Exascale

Apr 14, 2014 |

By now, most HPCers and the surrounding community are aware that data movement poses one of the most fundamental challenges to post-petascale computing. Around the world, exascale-directed projects are attempting to maximize system speeds while minimizing energy costs. In the US, for example, exascale targets have peak performance increasing by three orders of magnitude while Read more…

DOE Exascale Roadmap Highlights Big Data

Apr 7, 2014 |

If you’ve been following the US exascale roadmap, then chances are you’ve been following the work of William (“Bill”) J. Harrod, Division Director for Advanced Scientific Computing Research (ASCR) in the Office of Science at the US Department of Energy (DOE). In January, Harrod asserted that the DOE’s mission to push the frontiers of science and Read more…

How NASA Is Meeting the Big Data Challenge

Apr 7, 2014 |

As the scientific community pushes past petaflop into exascale territory, it is imperative that the tools to support ever-more data-intensive workloads keep pace. Nowhere is this more true than at the storied NASA research complex. With 100 active missions supporting cutting-edge science, NASA knows more than most about compute- and data-driven challenges. A recent paper Read more…

Big Data Reaches to the Stratosphere

Apr 3, 2014 |

Among the many compelling papers to come out of the Big Data and Extreme-scale Computing (BDEC) workshop, held in Fukuoka, Japan, in February, was a position paper from Dr. Volker Markl, full professor and chair of the Database Systems and Information Management (DIMA) group at the Technische Universität Berlin (TU Berlin), detailing the benefits of Read more…

Instrument Science Preps for Exascale Era

Apr 3, 2014 |

The Big Data and Extreme-scale Computing (BDEC) workshop that took place in February in Fukuoka, Japan, brought together luminaries from industry, academia and government to discuss today’s big data challenges in the context of extreme-scale computing. Attendees at this invitation-only event included some of the world’s foremost experts on algorithms, computer system architecture, operating systems, Read more…

Cracking the Silos of Custom Workflows

Feb 27, 2014 |

In high performance computing, the time-honored concept of creating tailored workflows to address complex requirements is nothing new. However, with the advent of new tools to analyze and process data—not to mention store, sort and manage it—traditional ways of thinking about HPC workflows are falling by the wayside in favor of new approaches that might Read more…

India Rides Tech Convergence Wave Into 2014

Feb 10, 2014 |

Industry and government leaders in India are anticipating an even stronger focus on cloud computing, HPC and big data technologies over the next year as businesses seek out more efficient and cost-effective ways to drive productivity and profit. According to Intel South Asia Director of Sales Suryanarayanan B, India will have more supercomputers listed on the TOP500 and increased Read more…

Cray Advances Hadoop for HPC

Feb 4, 2014 |

In a recent blog entry, Mike Boros, Hadoop Product Marketing Manager at Cray, Inc., writes about the company’s positioning of Hadoop for scientific big data. Invoking the old adage that “when the only tool you have is a hammer, every problem begins to resemble a nail,” Boros suggests that the Law of the Instrument may be true Read more…

‘Edison’ Lights Up Research at NERSC

Jan 31, 2014 |

The National Energy Research Scientific Computing (NERSC) Center, located at Lawrence Berkeley National Laboratory, has formally accepted “Edison,” a Cray XC30 supercomputer named in honor of famed American inventor Thomas Alva Edison. The important milestone comes just as NERSC is commemorating 40 years of scientific advances, prompting NERSC Director Sudip Dosanjh to comment: “As Read more…

HPC Lessons for the Wider Enterprise World

Jan 28, 2014 |

Is HPC so specialized that the lessons learned from large-scale infrastructure (at all layers) are not transferrable to mirrored challenges in large-scale enterprise settings? Put another way, are the business-critical problems that companies tackle really so vastly different from the hardware and software issues that large supercomputing centers have already faced and in many Read more…