December 15, 2006
Hitachi Data Systems Corporation has announced major moves that it claims will enable its rapid expansion into the global high-performance computing storage market. These moves, backed by an investment in BlueArc Corporation, a network storage provider, include a five-year worldwide OEM agreement and the immediate availability of the Hitachi High-performance NAS Platform.
"Today's announcement signifies a strategic expansion of our addressable market, enabling us, together with our channel partners across the globe, to bring powerful, reliable, and proven file-intensive storage technology to customers in the high- performance computing market and beyond," said Dave Roberson, president and CEO, Hitachi Data Systems. "Channel partners can use our new HPC storage offerings as the foundation to which they can add vertical market expertise, to provide custom solutions for their clients. Today's announcement is another impressive result of Hitachi Data Systems' increased investment and focus on emerging markets."
"This is another step in what I refer to as the 'new' Hitachi Data Systems. Less than two years ago, I used to say they were a great SAN storage company," said Tony Asaro, senior analyst, Enterprise Strategy Group. But now it is clear that they are focused on being a great storage networking company with a more comprehensive portfolio of products that range from SAN, storage management software, digital archiving, VTL and NAS. It is really important for customers to have other leading vendors provide NAS solutions so that they have more options open to them."
"BlueArc's leading file-based virtualization technology is a perfect complement to Hitachi's industry-leading block-based virtualization solutions," said Mike Gustafson, president and CEO, BlueArc. "This partnership will help expand BlueArc's presence on a global basis and will extend Titan's reach beyond the key vertical markets we serve today to major enterprises and horizontal applications across the world, giving customers the better alternative for high performance network storage they have been seeking."
Many network-attached storage solutions today are designed to store and archive data that is rarely or never accessed. For file-intensive workloads in Internet services, electronic discovery, life sciences, oil and gas exploration, and entertainment, conventional NAS systems often lack the performance and scalability needed to meet the processing requirements critical to these vertical industries.
Headquartered in Columbus, Indiana (USA), Cummins Inc. designs, manufactures, distributes, and services engines and related technologies, including fuel systems, controls, air handling, filtration, emission solutions, and electrical power generation systems.
"As a design and manufacturing company, it is critical to have high-performance solutions that can scale as our business grows," said Curt Brown, Storage Technology Director, Cummins Inc. "We were reaching our limit with our current NAS solution. The Hitachi High-performance NAS Platform provides the necessary scalability and performance we require. The Hitachi platform exceeds our requirements today and offers capacity and connectivity to scale as we grow."
"With this announcement, we are introducing the most advanced file and block virtualization system today," said John Mansfield, vice president, Product Management, Hitachi Data Systems. "There is tremendous opportunity for the Hitachi High-performance NAS Platform with our installed base. Our customers have been anxious for us to do for their file storage systems what we have successfully done for their block storage systems with our Universal Storage Platform. We can now help them consolidate and virtualize a tiered storage block AND file-based environment."
Hitachi claims the new High-performance NAS Platform offers more performance, capacity, file system size, and snapshot replicas than any comparable file-based storage offering: up to 6 times the real-world performance (600K IOPS), over 4 times the capacity (512 TB), 16 times the file system size (256 TB), and 4 times more snapshots per file system (1,024). Combined with features such as data classification, hierarchical storage management, and Transparent Data Migration, Hitachi says the High-performance NAS Platform effectively eclipses EMC's Celerra/NSX and NetApp's FAS and V series products.
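Taken at face value, those multipliers imply rough baselines for the competing systems Hitachi is measuring against. The following back-of-the-envelope check is illustrative only; the announcement does not state the competitors' figures directly:

```python
# Hitachi's claimed multipliers and absolute figures imply approximate
# baselines for competing NAS offerings. (Illustrative arithmetic only;
# the release does not state those baselines itself.)
claims = {
    "real-world IOPS":        (600_000, 6),   # "up to 6 times" (600K IOPS)
    "capacity (TB)":          (512,     4),   # "over 4 times" (512 TB)
    "file system size (TB)":  (256,    16),   # "16 times" (256 TB)
    "snapshots per FS":       (1_024,   4),   # "4 times more" (1,024)
}

for metric, (claimed, multiplier) in claims.items():
    implied = claimed / multiplier
    print(f"{metric}: {claimed:,} claimed -> implied baseline ~{implied:,.0f}")
```

For example, the 6x performance claim at 600K IOPS implies a competitor baseline of roughly 100K IOPS, and the 16x file system claim at 256 TB implies a 16 TB baseline.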
"There is increasing demand from a growing number of industry-leading, business application users for the delivery of information to decision makers in real or near-real time," said John Webster, Principal IT Advisor of Illuminata, Inc. "Through its partnership with BlueArc, Hitachi is further expanding its leadership in storage virtualization and further establishing a growth position in both traditional and emerging high-performance computing opportunities."
According to Hitachi, the High-performance NAS Platform's capacity allows for fewer nodes and lower maintenance costs. The platform's single logical storage pool of up to 512 terabytes eliminates the need to break up large data sets, and its file virtualization capabilities enable automatic growth of file systems. When research projects require collaboration, the platform helps avoid duplicate effort by promoting information sharing among researchers via fast, secure access to a central pool of files and databases that can scale up to 4 million files per directory. The Hitachi High-performance NAS Platform also delivers a cluster name space, which presents a single unified name space concurrently over both CIFS and NFS, giving administrators a single mount point for all users.
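To make the unified name space idea concrete, here is a minimal, hypothetical Python sketch (not HDS or BlueArc code; the server name "titan" and the paths are invented): a CIFS-style UNC path and an NFS-style POSIX path both resolve to the same canonical object in one logical pool, which is what lets Windows and UNIX clients share a single mount point.

```python
# A minimal sketch of a unified "cluster name space": CIFS (UNC) and
# NFS (POSIX) paths resolve to the same canonical object in one logical
# pool. All names here are illustrative, not product internals.
from pathlib import PurePosixPath, PureWindowsPath

POOL_ROOT = PurePosixPath("/pool")  # the single logical storage pool

def canonical(path: str) -> PurePosixPath:
    """Map a CIFS (UNC) or NFS (POSIX) path to one canonical pool object."""
    if path.startswith("\\\\"):                  # CIFS: \\server\share\dir\file
        parts = PureWindowsPath(path).parts[1:]  # drop the \\server\share anchor
        return POOL_ROOT.joinpath(*parts)
    return POOL_ROOT / PurePosixPath(path).relative_to("/")  # NFS: /dir/file

# Both client populations see the same file through a single mount point:
assert canonical(r"\\titan\share\proj\run1.dat") == canonical("/proj/run1.dat")
print(canonical("/proj/run1.dat"))               # /pool/proj/run1.dat
```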
"We believe advanced file virtualization capabilities that permit automatic growth of file systems will gain the most traction," states Carl Greiner, senior vice president and analyst, Ovum. "However, the vendor that can deliver a single unified cluster name space for both CIFS and NFS will provide the most robust and functionally rich implementation. Successful IT organisations will begin to put unified storage infrastructure virtualisation on their short list of things required to insure flexibility and agility to meet today's ever challenging business requirements."
"Certain applications within high-performance computing, entertainment, and life sciences, have unique characteristics requiring uniquely-optimized solutions," said John McArthur, group vice president and general manager, Information Infrastructure and Enabling Technologies at IDC. "BlueArc has been addressing those requirements with the Titan product line. The partnership with BlueArc expands HDS' portfolio of storage solutions and gives the company improved access to these higher-growth market opportunities."