May 10, 2010
GLEN BURNIE, Md., May 10 -- Padova Technologies, a leading provider of storage solutions, high performance computing (HPC) systems, specialized servers, and rugged and deployable solutions, announced today that it has partnered with San Jose, Calif.-based Bright Computing to offer Bright Cluster Manager, a software product that makes HPC Linux clusters of any size easy to use, manage and scale.
"Padova is thrilled to offer the HPC community an innovative solution for simplified cluster management," said Keith Fischbach, president of Padova Technologies. "Bright Cluster Manager addresses the common technical challenges of installing and running clusters -- ultimately enabling organizations to increase productivity. What's more, this easy-to-manage cluster software will equip businesses and research institutions with the power and flexibility to build and run their own HPC solutions at considerable cost savings."
Bright Cluster Manager is a Linux-based cluster management software solution specifically designed to address the complexity of running clusters. Its intuitive graphical user interface offers consistent access to all management and monitoring functionality for the cluster administrators. Its HPC user environment provides a comprehensive range of HPC software development tools for the cluster users.
"We are pleased to welcome Padova Technologies as a new authorized reseller of Bright Cluster Manager," said Dr. Matthijs van Leeuwen, chief executive officer of Bright Computing. "Padova has a long-standing presence in the US HPC market and combines technical expertise in HPC server technology with a substantial network of satisfied customers."
With more than 35 years of experience, Padova has engineered solutions for some of the nation's most influential research institutions, military organizations, global defense and technology corporations, and other government entities. Its HPC design work includes traditional cluster models as well as large shared-memory SMP systems that can be deployed in datacenter, workgroup, or deskside-friendly environments.
About Padova Technologies
Since 2000, Padova Technologies has been designing, integrating and supporting high performance computing, servers and storage products. Privately held and located in Glen Burnie, Md., Padova has designed products and customized technology solutions for some of the nation's most influential research institutions, military organizations, global defense and technology corporations and other government entities. For more information on Padova, its solutions and products, visit www.padovatech.com.
About Bright Computing
Bright Computing is a specialist in cluster management software and services for high-performance computing (HPC). Its flagship product -- Bright Cluster Manager -- makes clusters of any size easy to install, use and manage, and is the cluster management solution of choice for many universities, research institutes and companies across the world. Bright Computing has its head office in San Jose, Calif.
Source: Padova Technologies