January 22, 2010
SAN JOSE, Calif., Oct. 19 -- Bright Computing, a specialist in cluster management software and services for high-performance computing (HPC), is pleased to announce that the University of Houston's Texas Learning and Computation Center (TLC2) has chosen Bright Cluster Manager as the preferred software for managing and using its HPC clusters. Bright Cluster Manager was installed this summer on a 160-node cluster at TLC2 in less than a day, and the cluster has been in full operation ever since.
"I am very impressed with the efficiency achieved with Bright Cluster Manager. Our cluster with 1280 cores was up and running within a few hours, ready for integration into our HPC environment. Now it is continuing to save our system administrators valuable time," said Professor Lennart Johnsson, director of the TLC2 and the Advanced Computing Research Laboratory at the University of Houston.
The new cluster, named Xanadu, consists of 160 Dell PowerEdge servers with AMD Opteron Barcelona quad-core CPUs and multiple Ethernet networks. Dr. Ioannis Konstantinidis, Research Development Officer at TLC2, comments on the benefits offered by Bright Cluster Manager: "We have tried many different cluster management solutions in the past. We found Bright Cluster Manager to have a well-integrated, centralized management interface that is powerful and flexible enough to accommodate our specific configuration requirements. For example, we like to set up our clusters with specific network configurations. Unlike previously used solutions, with Bright Cluster Manager this was very easy to configure through the intuitive GUI or the powerful command line shell."
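For readers unfamiliar with the command line shell mentioned above, Bright Cluster Manager's cmsh tool exposes cluster objects (networks, devices, categories) in a hierarchical shell. The following transcript is a minimal illustrative sketch only: the cluster name, network name, and address values are hypothetical, and exact prompts and property names can vary between Bright Cluster Manager versions.

```text
% cmsh
[xanadu]% network
[xanadu->network]% add internalnet2          # hypothetical second internal network
[xanadu->network*[internalnet2*]]% set baseaddress 10.150.0.0
[xanadu->network*[internalnet2*]]% set netmaskbits 16
[xanadu->network*[internalnet2*]]% commit    # changes take effect only after commit
```

The same network object can equally be created and edited through the graphical user interface; cmsh is convenient for scripting repeatable configurations.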
Erik Engquist, TLC2 systems administrator, adds: "The centralized status and health information database simplifies troubleshooting and reduces service disturbances."
Bright Cluster Manager is Linux-based cluster management software specifically designed to make HPC clusters of any size easy to install, use and manage. Its intuitive graphical user interface offers consistent access to all management and monitoring functionality for cluster administrators. Its HPC user environment provides a comprehensive range of HPC software development tools for cluster users.
About Bright Computing
Bright Computing is a specialist in cluster management software and services for high-performance computing (HPC). Its flagship product -- Bright Cluster Manager -- makes clusters of any size easy to install, use and manage, and is the cluster management solution of choice for many universities, research institutes and companies across the world. Bright Computing has its head office in San Jose, Calif.
About University of Houston, Texas Learning and Computation Center, and the Advanced Computing Research Laboratory
The University of Houston (UH), Texas' premier metropolitan research and teaching institution, is home to more than 40 research centers and institutes and sponsors more than 300 partnerships with corporate, civic and governmental entities. With more than 35,000 students, the University of Houston is the most diverse research university in the country.
The Texas Learning and Computation Center (TLC2) fosters and supports interdisciplinary research, education, and training in computational sciences and engineering. TLC2 has state-of-the-art computation, visualization and educational facilities for environmental studies, biological, biomedical, and energy research, undergraduate and graduate education, and teacher training.
The Advanced Computing Research Laboratory (ACRL) carries out research on innovative ways to harness computational resources for scientific and engineering applications and participates in several national and international research efforts in high-performance computing, storage, and networking.
Source: Bright Computing, Inc.