January 09, 2013
SUNNYVALE, Calif., and LONDON, Jan. 9 – Panasas, Inc., the leader in high-performance parallel storage for technical computing applications and big data workloads, today announced that the UK's University of Nottingham has upgraded its high-performance computing (HPC) center with Panasas ActiveStor 12 storage in a 240-terabyte deployment. The new cluster is used by numerous departments across the university, including computer science, pharmacy and engineering.
"We are delighted that the University of Nottingham chose Panasas to satisfy its HPC storage requirements," said Barbara Murphy , chief marketing officer at Panasas. "ActiveStor gives the university unmatched performance, scalability and reliability without complex and time-consuming system management. We look forward to continuing to work with the university, as well as our many other academic customers in the region."
The University of Nottingham, ranked in the UK's top 10 in the Shanghai Jiao Tong (SJTU) World University Rankings and within the top 100 in the QS World University Rankings, first upgraded to Panasas in 2007 when it purchased an ActiveStor 7 solution to overcome performance problems associated with its previous storage system.
"We saw a big improvement in performance with the acquisition of Panasas ActiveStor," said Chris Booth , senior systems developer. "Also, our previous storage system went down about once a month. ActiveStor has never gone down – ever."
Researchers in the Physical and Theoretical Chemistry Department, whose work includes the simulation of proteins to understand diseases and enable the development of drugs to help fight or prevent them, are among the most demanding users of the HPC center. Their simulation of the motion of proteins is a complex task that can involve trillions of time-steps to map each movement of every protein, requiring a high-performance compute cluster and parallel storage.
"Our simulations are computationally challenging, but with the new high performance computer systems and ActiveStor parallel storage we're starting to make some progress," said Professor Jonathan Hirst , head of the Physical and Theoretical Chemistry Department. "Reliability and unflagging storage performance are indispensable for our research."
With limited IT staff, the university made ease of use a primary consideration. "ActiveStor is fantastically easy to configure and manage," said Dr. Booth. "We don't have a dedicated storage administrator, so it's essential that our systems don't take a lot of time and effort to manage."
The University of Nottingham, described by The Sunday Times University Guide 2011 as 'the embodiment of the modern international university', has 40,000 students at award-winning campuses in the United Kingdom, China and Malaysia. It is ranked in the UK's Top 10 and the World's Top 75 universities by the Shanghai Jiao Tong (SJTU) and the QS World University Rankings. It was named 'the world's greenest university' in the UI GreenMetric World University Ranking 2011. More than 90 per cent of research at The University of Nottingham is of international quality, according to the most recent Research Assessment Exercise. The University's vision is to be recognised around the world for its signature contributions, especially in global food security, energy & sustainability, and health. The University won a Queen's Anniversary Prize for Higher and Further Education in 2011, for its research into global food security.
Panasas, Inc., the leader in high-performance parallel storage for technical computing applications and big data workloads, enables customers to rapidly solve complex computing problems, speed innovation and accelerate new product introduction. All Panasas storage products leverage the patented PanFS storage operating system to deliver superior performance, data protection, scalability and manageability. Panasas systems are optimized for demanding storage environments in the bioscience, energy, finance, government, manufacturing, and university markets.