May 21, 2013
Large-scale, worldwide scientific initiatives often rely on cloud-based systems both to coordinate work across sites and to absorb peak computational demand that exceeds their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of Google Compute Engine in providing computational support to ATLAS, a high-energy particle detector at the Large Hadron Collider (LHC).