December 14, 2007
SAN JOSE, Calif., Dec. 12 -- BlueArc Corporation, a leader in scalable, high-performance unified network storage, today announced that Brookhaven National Laboratory (Brookhaven), a multi-program laboratory operated for the U.S. Department of Energy, has deployed a BlueArc Titan 2200 cluster with nearly 300 terabytes of storage. The BlueArc Titan system gives the laboratory a massively scalable, reliable foundation for the fastest available access to research data -- both today and in the future.
"We can't afford to experiment when it comes to storage infrastructure," said Robert Petkus, RHIC/USATLAS Computing Facility, Brookhaven National Laboratory. "As Brookhaven prepares to support some of the world's most important particle physics research next year, we've replaced cutting-edge but inadequate systems with BlueArc Titan 2200 servers that can scale effortlessly and respond consistently to shifts in volume and demand."
Approximately 3,000 scientists, engineers, technicians and support staff, along with 4,000 or more guest researchers per year, depend on data from the Relativistic Heavy Ion Collider (RHIC) Computing Facility that Brookhaven operates at its New York campus. Brookhaven also has a major role in international projects such as the ambitious Large Hadron Collider (LHC) under construction at CERN, the European Organization for Nuclear Research and the world's premier particle physics research lab. Data from RHIC experiments is growing at an astounding rate, and Petkus anticipates that by 2012 Brookhaven will have more than 4,000 nodes on its storage area network. With so many users and so many ways of accessing data, Petkus sought a unified storage environment and a single vendor to help him retain control over the implementation.
BlueArc offers precisely the combination of record-setting performance and reliability essential to delivering data that tracks the rapid changes of subatomic matter. Petkus and his team have deployed a two-node BlueArc Titan 2200 cluster with six gigabit connections trunked together and 288 terabytes of Fibre Channel disk capacity. Petkus sees a twofold advantage in the Titan solution's distinctive hardware-based architecture: it supports multiple access protocols without requiring modification to Brookhaven's 2,000-node server farm, which maximizes the value of the laboratory's technology investments and supports future growth.
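For a rough sense of scale, a back-of-envelope calculation (an illustrative sketch only, not a figure from BlueArc or Brookhaven: it assumes six 1 Gb/s links at ideal line rate with no protocol overhead) shows that the trunked links yield about 750 MB/s of aggregate bandwidth -- fast for the era, yet streaming the full 288 terabytes once would still take more than four days:

    # Illustrative back-of-envelope only: assumes six 1 Gb/s links trunked
    # at ideal line rate with zero protocol overhead, which real NFS/CIFS
    # traffic never achieves. Figures are assumptions, not vendor numbers.
    LINKS = 6                  # trunked gigabit connections
    LINK_BITS_PER_S = 1e9      # 1 Gb/s per link
    CAPACITY_BYTES = 288e12    # 288 TB of Fibre Channel disk

    aggregate_bytes_per_s = LINKS * LINK_BITS_PER_S / 8           # 750 MB/s
    full_scan_days = CAPACITY_BYTES / aggregate_bytes_per_s / 86400

    print(f"Aggregate throughput: {aggregate_bytes_per_s / 1e6:.0f} MB/s")
    print(f"Time to stream full capacity: {full_scan_days:.1f} days")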
"My job is to think ahead as far as I possibly can," said Petkus. "Every node in our storage network is becoming a supercomputer with massive memory and 64-bit architecture. We support huge networks, huge amounts of data and demanding physicists around the world, so I've always got to know what the latest high-performance technologies are and make choices that won't risk our data to unproven systems."
BlueArc is a leading provider of high-performance unified network storage systems to enterprise markets, as well as data-intensive markets such as electronic discovery, entertainment, federal government, higher education, Internet services, oil and gas, and life sciences. BlueArc's products support both network attached storage (NAS) and storage area network (SAN) services on a converged network storage platform. BlueArc enables companies to expand the ways they explore, discover, research, create, process and innovate in data-intensive environments. The company's products replace complex, performance-limited products with high-performance, scalable and easy-to-use systems capable of handling the most data-intensive applications and environments. Further, the company believes that its energy-efficient design and its products' ability to consolidate legacy storage infrastructures dramatically increase storage utilization rates and reduce its customers' total cost of ownership. Information about BlueArc solutions and services can be found at http://www.bluearc.com.
Source: BlueArc Corp.