September 19, 2012
SAN JOSE, Calif., Sept. 18 — Bright Computing announced today that Boise State University researchers selected Bright Cluster Manager for their collaboration research cluster R1, a powerful computer system that drives interdisciplinary computational research. Boise State University research projects span chemistry, biology, physics and pharmacology, with particular emphasis on advancing novel methods of molecular targeted therapeutics for cancer research. Bright Cluster Manager is being used for provisioning, job scheduling, monitoring and cluster management, and was selected over open source toolkits previously used at the university.
Boise State University’s cluster is powered by AMD Opteron CPUs and NVIDIA Tesla GPUs on Supermicro motherboards. One capability of R1 that Ken Blair, HPC Systems Engineer at Boise State, is exploring is its ability to display very high-resolution images across parallel display panels, making its complex simulations come alive.
“In the past, I’ve tasked graduate students with cluster management using open source toolkits,” said Blair. “This approach was low cost but time consuming on my part, and somewhat risky. At Boise State, we don’t have the bandwidth to write scripts for node installation, synchronization or ongoing cluster maintenance, all extremely time-intensive tasks. Bright makes tackling these tasks fast and easy, and lets us automate a lot of important but tedious procedures.”
“Bright cuts my own workload by 50%,” Blair added, “and pays for itself over and over in terms of headcount savings.”
The cluster at Boise State has an added layer of complexity: the system is located behind a federal firewall at the Idaho National Laboratory, a leading nuclear research and development facility for the U.S. Department of Energy. Direct, day-to-day access to the cluster is impossible, which can kill productivity when nodes malfunction or crash unexpectedly, so Boise State University researchers need a way to quickly troubleshoot and resolve issues remotely.
“At one point, several of our nodes were rebooting for no apparent reason,” said Blair. “Bright’s support team advised me how to use secure shell (SSH) to create a tunnel to the web interface of my IPMI controller so that I could access the console. Within a very short period of time, our cluster was up and running again, and a situation that could have presented a major issue was averted. Bright’s responsive team is unparalleled in terms of their around-the-clock accessibility and problem-solving abilities.”
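Blair’s remote-troubleshooting technique can be sketched as a standard SSH local port forward. The hostnames, usernames, and addresses below are hypothetical placeholders for illustration, not details from the release; they assume the head node is reachable by SSH and that the IPMI controller’s web interface listens on HTTPS on the management network:

```shell
# Forward local port 8443 through the reachable head node to the
# (hypothetical) IPMI controller at 192.168.10.50 on the management
# network. -N: no remote command, just forwarding; -L: local forward.
ssh -N -L 8443:192.168.10.50:443 admin@headnode.example.edu

# While the tunnel is up, browsing to https://localhost:8443 on the
# local machine reaches the IPMI web interface and its remote console.
```

Because the tunnel rides on the existing SSH connection, the IPMI interface never needs to be exposed through the firewall directly.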
About Boise State University
Boise State University is committed to fostering an environment where exceptional research and creative activity thrive. The university has well developed and productive research programs in such diverse areas as sensor development, bio-molecular research, novel materials, health and public policy, geochemistry and geophysics, raptor studies, high-tech economic development, nano-electronics and integrated systems, and school improvement in math and sciences.
Boise State faculty conduct externally funded studies in Idaho and around the globe. Their research contributes to addressing some of the major health, environmental, technological and social issues of the day. http://www.boisestate.edu/
About Bright Computing
Bright Computing specializes in management software for clusters, grids and clouds, including compute, storage, Hadoop and database clusters. Bright’s fundamental approach and intuitive interface make cluster management easy, while providing powerful and complete management capabilities for increasing productivity. Bright Cluster Manager is the solution of choice for many research institutes, universities, and companies across the world, and manages several Top500 installations. Bright Computing has its headquarters in San Jose, California. http://www.brightcomputing.com
Source: Bright Computing