Companies that use Quantum’s StorNext platform to store massive amounts of data this week got a glimpse of new storage capabilities that should make it easier to access their data hoard from anywhere in the world.
StorNext is a scale-out storage platform that combines a parallel, shared-disk file system – or what Quantum sometimes calls a “streaming file system” – with a data management layer that automates many administrative tasks.
Originally created to provide fast data transfer between Windows and SGI IRIX workstations, the platform today supports an array of protocols, including Fibre Channel, InfiniBand, iSCSI, and Ethernet. It can front-end large and sophisticated storage area network (SAN) clusters or work with individual network-attached storage (NAS) devices that use NFS or CIFS file systems.
The platform supports a variety of storage media, including flash, spinning disk, tape, and cloud repositories, and is used extensively in the media and entertainment, oil and gas, genomics, and surveillance industries, where large file sizes and high-performance demands thwart simpler storage approaches.
Quantum, which acquired StorNext in 2006 from Advanced Digital Information Corporation, this week unveiled version 6 of the platform. Key new data storage and management capabilities are delivered via the new FlexSync and FlexSpace features.
FlexSync provides a way to synchronize data between multiple StorNext systems in an automated fashion. The feature leverages StorNext’s existing metadata monitoring capabilities to immediately recognize when a file is changed, and replicate that change to other systems, even if the file is in use or locked.
Customers can set FlexSync up in a number of configurations, including one-to-one, one-to-many, and many-to-one replication scenarios, the company says. They can also create policies that automatically trigger file replication tasks based on various conditions, thereby ensuring that stakeholders get the freshest data possible, no matter where they’re located.
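The general shape of metadata-triggered, policy-driven replication can be sketched in a few lines of Python. This is purely illustrative – the class and method names below are invented for this sketch and are not StorNext’s actual API:

```python
from dataclasses import dataclass

# Hypothetical model of policy-driven replication fan-out.
# Names (ReplicationPolicy, Replicator, on_metadata_change) are
# illustrative only, not part of any real StorNext interface.

@dataclass
class ReplicationPolicy:
    source: str        # site whose file changes trigger replication
    targets: list      # destination sites (one target = one-to-one,
                       # several targets = one-to-many)

class Replicator:
    def __init__(self, policies):
        self.policies = policies
        self.copies = {}   # (site, path) -> file version seen there

    def on_metadata_change(self, site, path, version):
        """Invoked when the metadata layer observes a change at `site`;
        the change is pushed to every policy target immediately."""
        self.copies[(site, path)] = version
        for policy in self.policies:
            if policy.source == site:
                for target in policy.targets:
                    self.copies[(target, path)] = version

# One-to-many scenario: edits made at "la" fan out to "nyc" and "london".
rep = Replicator([ReplicationPolicy(source="la", targets=["nyc", "london"])])
rep.on_metadata_change("la", "/projects/cut01.mov", version=3)
print(rep.copies[("nyc", "/projects/cut01.mov")])   # 3
```

A many-to-one setup would simply be several policies sharing the same single target site.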
Meanwhile, the new FlexSpace feature gives globally distributed teams fast access to a centrally located copy of data. The feature ensures that multiple instances of StorNext located anywhere in the world can access a single archive repository. “Users at different sites can store files in the shared archive, as well as browse and pull data from the repository,” Quantum says.
FlexSpace also supports public cloud object stores like AWS S3, Microsoft Azure, and Google Cloud via the FlexTier capability that Quantum unveiled in StorNext version 5.6 late last year. Users can also use FlexSpace to access their own private cloud object stores, including ones based on Quantum’s own Lattus object storage, as well as third-party object stores like NetApp StorageGRID, IBM Cleversafe, and Scality RING.
Molly Presley (Rector), who joined Quantum last fall as its new vice president of global marketing, says the StorNext enhancements deliver benefits where traditional NAS and general-purpose, scale-out storage offerings for unstructured data fall short. “We designed StorNext 6 to give businesses and other organizations the ability to interact with their data in new ways and thereby drive greater creativity, productivity, innovation and insight,” she says in a press release.
Version 6 also brings a new quality of service (QoS) feature that lets users tune the performance of the storage repositories on a machine-by-machine basis, the company says. This can help assure that workstations that are hungry for storage bandwidth can get the data they need, while scaling back the bandwidth usage of less-critical applications.
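Per-machine bandwidth tuning of this kind amounts to weighted sharing of a fixed pool. The sketch below is a generic illustration of that idea, not Quantum’s implementation; the function name and weights are assumptions made for the example:

```python
# Hypothetical illustration of per-machine QoS: split a fixed bandwidth
# pool by per-client weight, so bandwidth-hungry workstations get a
# proportionally larger share than less-critical applications.

def allocate_bandwidth(total_mbps, clients):
    """clients: list of (name, weight) pairs; returns name -> Mb/s."""
    total_weight = sum(weight for _, weight in clients)
    return {name: total_mbps * weight / total_weight
            for name, weight in clients}

shares = allocate_bandwidth(1000, [("edit-bay-1", 6),
                                   ("render-node", 3),
                                   ("backup", 1)])
# edit-bay-1 gets 600 Mb/s, render-node 300, backup 100
```

Raising one machine’s weight scales back everyone else’s share automatically, which is the essential QoS trade-off the feature exposes.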
Other new features in StorNext 6 include:
- a new copy expiration feature that automatically removes file copies from more expensive storage tiers;
- a selectable retrieve function that dictates the order of retrieval of remaining copies;
- more efficient tracking of changes in files.
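The first two list items – expiring copies off expensive tiers and choosing the order in which remaining copies are retrieved – can be pictured with a small sketch. The tier costs, function names, and preference list here are invented for illustration and do not reflect StorNext’s actual policy syntax:

```python
# Hypothetical sketch of copy expiration and selectable retrieval order.
# Tier costs are made up for the example: higher number = more expensive.

TIER_COST = {"flash": 3, "disk": 2, "tape": 1}

def expire_copies(copies, keep=1):
    """Automatically drop copies from the most expensive tiers,
    keeping only the `keep` cheapest copies."""
    ordered = sorted(copies, key=lambda tier: TIER_COST[tier])
    return ordered[:keep]

def retrieve_order(copies, preference):
    """Return the remaining copies in an admin-selected retrieval order."""
    return sorted(copies, key=preference.index)

remaining = expire_copies(["flash", "disk", "tape"], keep=2)
# remaining == ['tape', 'disk']  (the flash copy, the priciest, is expired)
print(retrieve_order(remaining, preference=["disk", "tape", "flash"]))
```

Here retrieval is attempted from disk before tape because the preference list says so, even though the tape copy is cheaper to keep.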
Quantum plans to ship StorNext 6 on its Xcellis, StorNext M-Series, and Artico archive appliances. General availability is expected this summer.