Delivering Results Built on Trust and Choice—for Big Data

December 16, 2013

Not all data is created equal, and the value of a data set may not be known for years to come. A geological survey from the past could yield information on prospective reservoirs. An abandoned jet engine design may provide useful insights into propulsion. The list of examples is endless.

What is common across nearly all data sets is that the underlying storage platform requirements change over time. Users typically need storage performance early in the data lifespan, within the first 90 days.  As data sets age, organizations need an easy, efficient, and user-transparent way to move data to lower cost storage, such as tape.   When storage systems need to be upgraded (typically every 2-7 years), organizations need an easy, non-disruptive path to new systems.
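To make that lifecycle concrete, here is a minimal sketch of the kind of age-based placement rule such a system automates. This is illustrative only, not Cray code; the tier names and thresholds are assumptions for the example.

```python
from datetime import datetime, timedelta, timezone

# Illustrative age thresholds -- real policies are site-specific assumptions.
HOT_WINDOW = timedelta(days=90)    # performance tier (e.g., Lustre scratch)
WARM_WINDOW = timedelta(days=365)  # capacity tier (e.g., NFS-attached disk)

def choose_tier(last_access: datetime) -> str:
    """Pick a storage tier from a file's (timezone-aware) last-access time."""
    age = datetime.now(timezone.utc) - last_access
    if age <= HOT_WINDOW:
        return "fast"       # keep on the parallel file system
    if age <= WARM_WINDOW:
        return "capacity"   # migrate to lower-cost disk
    return "archive"        # migrate to tape
```

In practice a policy engine evaluates rules like this across millions of files, which is why the migration must be transparent to users and applications.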

Cray now offers a way to do just this. The new offering, Cray Tiered Adaptive Storage (TAS), lets customers preserve data indefinitely, keep data continuously accessible to users and applications during migration, and upgrade the storage infrastructure as needed for years to come.

For customers who need fast parallel storage, such as scratch space, Cray offers Sonexion. Sonexion reduces complexity by consolidating Lustre into a compact, appliance-like form factor that scales performance and capacity together and interoperates with any popular Linux cluster.

Storage Solutions for x86 Linux

Got Linux? Think Cray for storage. All products and services provided by Cray's data storage business for Big Data connect to Linux. Cray Cluster Connect (C3) for Lustre offers dependable, interoperable storage solutions for x86 Linux, and Cray ships a Lustre client for x86 Linux packaged with C3. Cray optimizes Lustre across the entire data path, from the Linux client down to the storage array. Customers choose the storage platform (DDN, NetApp E-Series, or Cray Sonexion), and Cray delivers an end-to-end storage solution that performs optimally.
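As a small illustration of the kind of client-side tuning this involves (a hedged sketch, not a C3 interface): `lfs` is the standard Lustre utility, and a directory can be striped across all object storage targets before writing large sequential files. Flag spellings vary across Lustre versions, so this is an assumption to verify against your release.

```python
import subprocess

def stripe_for_sequential_io(directory: str) -> None:
    """Stripe a Lustre directory across all OSTs with a 4 MiB stripe size.

    Uses the standard `lfs setstripe` tool; -c -1 means "all OSTs".
    Older Lustre releases spell the stripe-size flag -s instead of -S,
    so check `lfs help setstripe` on your system first.
    """
    subprocess.run(
        ["lfs", "setstripe", "-c", "-1", "-S", "4m", directory],
        check=True,
    )
```

New files created under that directory then inherit the striping, spreading large sequential writes across the full set of storage targets.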

Cray supports the entire solution from the Linux client down to the disks. What's unique about C3 is the flexibility of the storage architecture: characterizing how applications perform, and optimizing the entire I/O path from client to disk, ensures the system scales as needed. Cray's expertise spans the entire stack: applications, compute, networking, and storage. Cray provides a single point of support, covering all software and hardware, for multi-vendor storage solutions; most storage vendors, by contrast, have developed expertise in only a single area, such as block storage.
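For instance, a crude probe like the following (our own toy example, not Cray's characterization methodology) can establish a baseline for an application's sequential write bandwidth before and after tuning:

```python
import os
import time

def sequential_write_gbps(path: str, total_mb: int = 1024, block_mb: int = 4) -> float:
    """Crude sequential-write probe: write total_mb in block_mb chunks."""
    block = os.urandom(block_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb // block_mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # include the flush to stable storage
    elapsed = time.perf_counter() - start
    return (total_mb / 1024) / elapsed  # GB/s
```

Real characterization also varies block sizes, client counts, and access patterns, but even a single-client probe like this exposes whether striping changes are paying off.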

Cray holds the record for the fastest in-production single Lustre file system, at NCSA's Blue Waters, and scales large sequential I/O performance from 5 GB/s to 1 TB/s in a single file system.
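As a back-of-the-envelope illustration of what that span means (our own arithmetic; the per-unit figure is an assumption, not a published Sonexion specification), aggregate bandwidth in a parallel file system grows roughly linearly with the number of storage building blocks:

```python
# Assumed sequential bandwidth per storage building block, for illustration.
PER_UNIT_GBPS = 5.0

def units_needed(target_gbps: float) -> int:
    """How many building blocks (linear scaling assumed) for a target bandwidth."""
    return max(1, round(target_gbps / PER_UNIT_GBPS))

print(units_needed(5.0))     # 1 unit for 5 GB/s
print(units_needed(1000.0))  # ~200 units for 1 TB/s
```

The point is that the same file system architecture covers both ends of that range by adding units, rather than by re-architecting.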

All of Cray's storage solutions share a common benefit: delivering results based on customer requirements. Cray TAS is delivered pre-configured and deployment-ready, and it connects to industry-standard file systems and protocols such as Lustre and NFS.

Understanding Parallel File Systems

For organizations investigating parallel file systems, Cray makes an ideal partner. There are many software and hardware choices and decisions to work through; networking, file systems, and storage may be the most challenging areas for some organizations.

Where does Lustre fit? Should GPFS or NFS be deployed? Often, Lustre and other parallel file systems such as GPFS complement NAS and SAN deployments. Cray even has a way of virtualizing parallel file systems and NFS to maximize parallel access into Cray supercomputers. This unique offering, the Data Virtualization Service (DVS), comes included with the XC line of supercomputers running the Cray Linux Environment (CLE).

Lustre is Cray's native parallel file system of choice. Over two-thirds of the world's fastest supercomputers are powered by Lustre. As a co-founder of OpenSFS, a consortium dedicated to advancing open scalable file systems, Cray collaborates with industry partners and customers to advance Lustre.

Is Lustre ready for the enterprise and commercial HPC? Cray recently published a paper describing the decisions and considerations involved in using Lustre in commercial HPC and enterprise settings, where reliability is critical. The paper, "Making the Business Case for Lustre," is available for download on the Cray website.

Tiered Storage for Big Data and Large-scale Archiving

Cray TAS is ideal for customers who want HSM-style simplicity built on open source technologies and best-of-breed, multi-vendor storage. The solution provides a flexible tiering model in which customers choose the media types (SSD, disk, and tape, in various combinations). Cray TAS abstracts file systems into a common storage pool so that data can be migrated bidirectionally between fast file systems such as Lustre, primary storage (often connected via NFS), and deep archives, which are usually tape-based.
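A minimal sketch of what such bidirectional, policy-driven movement looks like follows. This is toy code for illustration only; TAS builds on Versity's HSM engine, and the tier paths here are hypothetical.

```python
import shutil
from pathlib import Path

# Illustrative tier roots -- actual TAS tiers are configured per site.
TIERS = {
    "fast": Path("/lustre/project"),    # parallel scratch
    "capacity": Path("/nfs/primary"),   # lower-cost disk
    "archive": Path("/tape/stage"),     # staging area in front of tape
}

def migrate(relpath: str, src: str, dst: str) -> None:
    """Move one file between tiers; works in either direction."""
    src_file = TIERS[src] / relpath
    dst_file = TIERS[dst] / relpath
    dst_file.parent.mkdir(parents=True, exist_ok=True)
    shutil.move(str(src_file), str(dst_file))

# Demote an aging dataset, then recall it when a user needs it again:
# migrate("proj42/run001.h5", "fast", "capacity")
# migrate("proj42/run001.h5", "capacity", "fast")
```

The key property, which the real product provides and this sketch only gestures at, is that users keep a single namespace and continuous access while files move between tiers underneath.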

In a strategic partnership with Versity, co-founded by Harriet Coverston, Cray has produced a complete tiered storage solution integrating SSD, disk, and tape. Cray builds on Versity's open-format HSM and storage virtualization software engine for Linux and provides everything a customer needs to get up and running with an end-to-end archiving solution, including best practices, data migration services, and sample templates and policies. Customers can classify data in any number of ways and maintain continuous access to it across its lifespan.

Use cases for TAS range from commercial enterprises supporting compliance and large-scale data archiving initiatives to digital libraries with massive archiving requirements. The need for Cray TAS may be driven by massive data growth (e.g., files and videos) or by any number of company-specific data preservation requirements.

Cray develops all its solutions, from XC30 supercomputers to TAS, with the future in mind. It's essential to be able to upgrade systems over time to take advantage of the latest innovations.

As Seymour Cray once said, "The future is seldom the same as the past."

Your Trusted Expert is Cray.
