Tag: HPC storage
These days, storage is in the HPC spotlight. How well an HPC application performs depends not only on total system memory bandwidth and sustained floating-point operations per second, but also on a storage architecture with enough throughput to handle constantly growing volumes of data. The key to storage performance is not capacity.
With its recent acquisitions of Xyratex and EVault, Seagate brings decades of experience as a storage solutions leader to bear on the fast-growing field of high performance computing (HPC). The company has created an Intelligent Information Infrastructure system to help organizations involved with HPC manage today's massive growth of digital data.
Until relatively recently, HPC storage systems were almost an afterthought: a grab-bag of components jury-rigged together to support the star of the show, a supercomputer or a large compute cluster. A typical legacy HPC storage solution was made up of commercially available RAID arrays, network filers, or direct-attached disks.
For the past few decades, the norm among large government labs, academic research facilities, and top commercial sites has been to deploy one large system per site at a time. More recently, however, the growing diversity of applications and end-user community requirements, combined with non-overlapping budget cycles and expanding technology lifecycles, has been driving a move toward multi-cluster environments.
Being competitive in today's economy means companies need to shorten the time it takes to go from concept to profitable products and services. There is no shortage of new services, novel methods, and innovations to help solve the problems we face; yet, to effect real change, faster-to-market solutions need to be pragmatic and affordable.
Is your current HPC data storage solution experiencing disk drive issues? Are you seeing performance degradation, with HPC projects taking longer to complete than they should? Is this situation simply normal, or are there reliable alternatives for achieving sustained performance at large HPC scale?
Successful oil and gas exploration today requires ever-faster upstream processing. To shorten the compute time needed to get actionable information, organizations need to reduce survey processing run times from months to weeks and be capable of scaling to handle the explosive data growth.
Gone are the days when the architects of high performance computing (HPC) environments could treat the collection of servers, storage, networking, and file systems as a science project for experiments, frequent failures, adjustments, and course corrections.
In most industries today, whether financial services, manufacturing, academic research, healthcare and life sciences, or energy exploration, data analysis, modeling, and visualization are critical to success.
To gain a competitive edge, most organizations are incorporating ever-larger data sets and more varied data formats into these computational workflows to derive better information on which to base smarter decisions.
The University of California San Diego (UCSD) and Yale University have been awarded an NSF grant to build a neuroscience gateway.