March 01, 2012
Replacing Rocks and other open source management toolkits enables scientists to better focus on research
SAN JOSE, California, March 1 -- Bright Computing today announced that Massey University's department of Theoretical Chemistry and Physics is now using Bright Cluster Manager to manage its powerful HPC system. The department switched from Rocks and other open source toolkits to Bright as it expands the cluster to meet immediate needs and plans its future hardware evolution. The change brought immediate gains in operational efficiency and reduced system management workloads. Bright also vastly reduced the time and effort required to install and test new hardware as the system evolves, maximizing available compute time.
Massey University's department of Theoretical Chemistry and Physics comprises 20 scientific staff conducting more than 30 diverse research projects at the far edge of science. Their workhorse for these projects is an on-campus cluster with 624 CPU cores, 2 TB of RAM, and 100 TB of disk.
The range of research at Massey is impressive. For example, one team is investigating whether the speed of light and other fundamental constants are really constant over space and time, and if not, how they change. The project analyzes the spectra of distant quasars to determine whether the speed of light has changed over the billions of years since the light was emitted. Highly accurate calculations of how atomic and molecular spectra depend on the speed of light are essential for this research. The researchers are also working to discover molecular systems with enhanced sensitivity to changes in the fundamental constants, which could allow such experiments to be performed on Earth.
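The inversion behind this kind of analysis can be sketched in a few lines. In the widely used many-multiplet approach, a transition frequency depends on the fine-structure constant roughly as ω = ω₀ + q·x with x = (α_z/α₀)² − 1, so a small measured line shift translates into Δα/α ≈ Δω/(2q). The function and all the numbers below are illustrative assumptions, not the Massey group's actual code or data:

```python
def alpha_variation(omega_obs, omega_lab, q):
    """Estimate the fractional change in alpha from a measured line shift.

    omega_obs: rest-frame frequency inferred from the quasar spectrum (cm^-1)
    omega_lab: laboratory frequency of the same transition (cm^-1)
    q:         sensitivity coefficient of the transition (cm^-1)

    Uses the small-shift approximation delta_alpha/alpha ~ (omega_obs - omega_lab) / (2q).
    """
    if q == 0:
        raise ValueError("transition is insensitive to alpha (q = 0)")
    return (omega_obs - omega_lab) / (2.0 * q)

# Hypothetical numbers for illustration only: a line with a large
# sensitivity coefficient and a tiny apparent frequency shift.
shift = alpha_variation(omega_obs=38458.9871 + 0.002,
                        omega_lab=38458.9871,
                        q=1330.0)
print(f"delta_alpha/alpha ~ {shift:.2e}")
```

Transitions with large |q| are the "enhanced sensitivity" systems the paragraph mentions: the bigger the coefficient, the smaller the constant variation a given spectrograph precision can resolve.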
Scientists at Massey are also important contributors to the study of super-heavy elements: atoms that don't exist in nature, but are artificially created in particle accelerators. The actual experiments are only possible in a few locations on Earth -- and cost millions of dollars to perform. Massey scientists use HPC to predict the atomic and chemical properties of these elements. They also help experimentalists determine the feasibility of experiments, plan the research, and interpret the results. These calculations are extremely compute-intensive, requiring very efficient HPC systems.
A third example of Massey's research is a project on the frontier of nano-scale physics. "Nano-devices are small enough that chemistry and quantum theory play important roles, but large enough that accurate quantum theoretical simulation is a staggering task," said Dr. James Avery. "This necessitates inventing new computational methods and performing massive calculations to understand these materials, and to predict the strange ways in which they behave."
The diverse nature of research conducted at Massey creates a wide array of unique system requirements for their cluster. In the past, setting up the cluster for these jobs consumed seemingly endless hours of system administrator time, driving down overall system throughput while burdening talented researchers with tedious tasks. Now, Bright's image-based provisioning enables the scientists to reconfigure their cluster for the specific demands of each project in minutes, vastly increasing productivity and freeing staff for other priorities.
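The core idea of image-based provisioning is simple: each software configuration lives in a "golden image," and reconfiguring a node means repointing it at a different image and re-syncing, rather than hand-editing the node in place. The sketch below illustrates that idea with plain directory copies; it is a generic toy, not Bright Cluster Manager's actual implementation or API:

```python
# Toy illustration of image-based provisioning: a node's filesystem is
# always a copy of a chosen golden image, so switching configurations
# is a single replace-and-sync operation.
import shutil
from pathlib import Path

def provision(node_root: Path, image_root: Path) -> None:
    """Replace a node's filesystem tree with a copy of the chosen image."""
    if node_root.exists():
        shutil.rmtree(node_root)            # discard the old configuration
    shutil.copytree(image_root, node_root)  # install the new one wholesale

# Demo with throwaway directories standing in for an image and a node.
base = Path("demo-cluster")
chem_image = base / "images" / "chemistry"
(chem_image / "etc").mkdir(parents=True, exist_ok=True)
(chem_image / "etc" / "motd").write_text("chemistry stack v2\n")

node = base / "nodes" / "node001"
provision(node, chem_image)
print((node / "etc" / "motd").read_text().strip())  # -> chemistry stack v2
```

Because every node is derived from an image rather than configured by hand, a project-specific setup becomes a reusable artifact: building a second node for the same project, or rolling the whole cluster back, is just another sync.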
The team at Massey realized other benefits from using Bright.
"As we looked to evolve our system to meet our expanding needs, we were driven to find a better way to run our cluster. We are scientists; we want to spend our time on science. Not on provisioning, monitoring and management," said Dr. Michael Wormit. "We were using Rocks and other open source toolkits, wasting far too much time on customization and keeping the various tool versions synchronized. We were reluctant to make changes to the system -- it created too much overhead."
"What appealed to us about Bright Cluster Manager is that it is fully integrated and easy to use," said Dr. James Avery. "It's leaps and bounds better than anything we have worked with in the past." Using Bright has enabled the department's three part-time system administrators to shift their workloads to priorities that were previously put on hold due to day-to-day system administration demands. Now they are able to work on initiatives to improve their cluster's performance, such as setting up InfiniBand to reduce latency.
"Installing a cluster with Bright is much easier and faster," added Dr. Michael Wormit. "It's especially useful that we can install new compute nodes to our cluster in minutes, or quickly re-purpose hardware. Bright minimizes the effort of tasks that previously took a lot of work, in a straightforward, intuitive manner. It's also helpful to us that Bright has a complete development environment with everything we need."
About Massey University
For more than 80 years, Massey University has helped shape the lives and communities of people in New Zealand and around the world. Its forward-thinking spirit, research-led teaching, and cutting-edge discoveries make Massey New Zealand's defining university. Massey is known for groundbreaking research, the applied nature of its diverse teaching and research programs, its contribution to industry, its innovation and its tradition of academic excellence. http://www.massey.ac.nz/massey/home.cfm
About Bright Computing
Bright Computing specializes in management software for clusters, grids and clouds, including compute, storage, Hadoop and database systems. Bright's fundamental approach and intuitive interface makes cluster management easy, while providing powerful and complete management capabilities for increasing productivity. Bright Cluster Manager is the solution of choice for many research institutes, universities, and companies across the world, and is used to manage several Top500 installations. Bright Computing has its headquarters in San Jose, California. http://www.brightcomputing.com
Source: Bright Computing