April 26, 2010
When I look back at the way infrastructures have evolved, it becomes painfully obvious that organizations running high performance computing (HPC) infrastructures have sacrificed the easy, all-inclusive management platforms of mainframe and symmetric multiprocessor (SMP) environments for the best-of-breed components of highly scalable, cost-effective grid and cluster computing models. Naturally, this has produced a world where open source and commercial components are assembled "by hand," with each certified individually and production support required from multiple, unrelated vendors. Not surprisingly, these challenges have paved the way for cluster and grid management products that fully integrate components into a packaged HPC management platform. Rather than building your own management platform, it might be easier to look at an "all-in-one" solution that covers the entire application development lifecycle -- providing a cheaper, more responsive HPC environment.
In my experience, one of the first problems that pops up around independent software components is in application development. Developers have to understand the minute details of the grid and cluster environment where their applications are deployed. For example, developing an application using the message passing interface (MPI) typically requires a lot of specialized code to make it work well on the target environment. With an easier, more complete approach, developers no longer have to worry about the mechanics of the production cluster -- the platform handles that by integrating with hardware platforms, monitoring, and provisioning tools -- and they can instead focus on the application's business logic.
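To make that concrete, here is a minimal MPI "hello world" in C -- a sketch of the kind of boilerplate a developer writes before any business logic runs, and a hint at the environment coupling described above. It is a generic illustration, not code from any particular management platform.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);               /* bring up the MPI runtime  */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank       */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                       /* tear down the runtime     */
        return 0;
    }

Even this toy program is compiled with a site-specific wrapper (typically mpicc) and launched with mpirun or mpiexec using a process count chosen for the target cluster; scheduler integration, interconnect tuning, and process placement all vary from site to site, which is precisely the burden an integrated platform aims to lift.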
Once an application is ready to move into production, it's the IT department that is forced to work with several independent -- and often manual -- tools and processes to get the application up and running. And once it is in production, I've noticed that significant time and effort are wasted on non-intuitive monitoring and alerting systems, ineffective troubleshooting tools, and scattered vendor support. But with a fully integrated platform for managing HPC applications, IT gains a single management interface to deal with these issues effectively. This ensures application problems are detected early and resolved quickly, which results in better service to the application teams.
My advice to organizations that want their HPC infrastructure to perform at its best is to consider several aspects and capabilities of an end-to-end HPC management platform during the evaluation process, particularly since success often hinges on partnerships and integration capabilities.
Here are some of the questions an organization should consider when evaluating an HPC management platform:
Just asking a few pointed questions should help determine which type of HPC management platform is right for a particular scenario, be it a build-your-own system with multiple manual components to manage and monitor, or an "all-in-one" platform that improves the cost-to-performance ratio.
Posted by William Lu - April 26, 2010 @ 4:03 PM, Pacific Daylight Time
William Lu is director of HPC product marketing at Platform Computing. He has 20 years of experience in HPC development, consulting and marketing, and holds a PhD in high energy physics.