June 20, 2011
SUNNYVALE, Calif. and HAMBURG, Germany, June 20, 2011 -- International Supercomputing Conference -- Panasas, Inc., the leader in high performance parallel storage for technical computing applications and big data workloads, today announced the Panasas ActiveStor 11 parallel storage system appliance. Powered by the PanFS™ operating system, ActiveStor 11 seamlessly scales to 6PB of capacity and 115GB/s of throughput from a single global namespace. Its advanced blade architecture blends performance, capacity, and cost-efficiency in a system optimized for data-intensive applications where time-to-results is a critical concern.
"Our customers know that Panasas represents the ultimate in performance, capacity and usability for computationally intensive application environments," said Faye Pairman, president and chief executive officer of Panasas. "ActiveStor 11 is an attractive solution to deliver a new level of cost effectiveness for a variety of markets, whether deployed as part of a dedicated research cluster or a multi-tenant private cloud platform."
The Panasas parallel scale-out storage architecture eliminates bottlenecks seen in traditional NAS systems, enabling HPC cluster nodes to directly access a single, scalable file system. Administrators can easily add new storage to the global namespace from a single point of management in fewer than 10 minutes without disrupting workflows. ActiveStor 11 features user quotas, snapshots, and per-user chargeback reporting so administrators can easily monitor and manage storage resources within their private cloud.
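The per-user chargeback reporting described above can be illustrated with a generic sketch. This is not Panasas or PanFS code; the function name, inputs, and rate are hypothetical, and the example only shows the general idea of aggregating per-owner storage usage into a billing summary, as an administrator of any multi-tenant storage system might:

```python
from collections import defaultdict

def chargeback_report(files, rate_per_gb):
    """Aggregate per-owner storage usage into (bytes_used, charge) pairs.

    files: iterable of (owner, size_in_bytes) tuples, e.g. from a
           filesystem scan or accounting log (hypothetical input).
    rate_per_gb: monthly cost charged per gigabyte stored.
    """
    usage = defaultdict(int)
    for owner, size in files:
        usage[owner] += size

    gib = 1024 ** 3  # bytes per GiB
    return {
        user: (total, round(total / gib * rate_per_gb, 2))
        for user, total in usage.items()
    }

# Example: two tenants sharing one namespace at $0.10/GiB-month
report = chargeback_report(
    [("alice", 5 * 1024**3), ("bob", 2 * 1024**3), ("alice", 1 * 1024**3)],
    rate_per_gb=0.10,
)
print(report["alice"])  # (6442450944, 0.6)
print(report["bob"])    # (2147483648, 0.2)
```

In a real deployment the usage figures would come from the storage system's quota or accounting interfaces rather than a raw file scan; the aggregation logic is the same.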
"As private clouds become more pervasive, it is clear that increasing numbers of HPC users will require highly scalable parallel storage systems that are dependable, easy to manage, and deliver the high throughput required for a wide range of technical computing applications," said Earl Joseph, IDC program vice president for high performance computing. "The Panasas ActiveStor 11 appliance is well positioned to capitalize on this important high performance computing trend."
Panasas is taking orders for ActiveStor 11 and expects to start shipping in August 2011. A 60TB configuration of the existing ActiveStor 12 appliance is also expected to be available in the same timeframe. Accompanying the new product introductions is an across-the-board price reduction on all ActiveStor models. For more information, visit www.panasas.com.
Panasas, Inc., the leader in high-performance parallel storage for technical computing applications and big data workloads, enables customers to rapidly solve complex computing problems, speed innovation and accelerate new product introduction. All Panasas storage products leverage the patented PanFS™ storage operating system to deliver superior performance, data protection, scalability and manageability. Panasas systems are optimized for demanding storage environments in the bioscience, energy, finance, government, manufacturing, and university markets. For more information, visit www.panasas.com.
SOURCE Panasas, Inc.