October 01, 2010
Appro's John Lee has a nice piece over at Enterprise Systems discussing how the falling costs of high-performance computing have expanded the market and lowered the barriers to entry.
Because HPC is no longer exclusive to the traditional scientific community, supercomputing has become a way for businesses to stay on the cutting edge of their markets. New product-development benefits (such as modeling and simulation) and increasingly affordable HPC over the past 10 years make being out in front an achievable goal for organizations of all sizes.
Lee notes how users have benefited from the advent of multicore computing and advances in parallel computing. The commoditization of the server space and competition between rival vendors (Intel versus AMD) have further contributed to the democratization of HPC.
The cutting edge of computing is always a moving target. What was once available only to the most demanding users eventually becomes accessible to the mainstream business user and, eventually, to the average consumer. For example, users with a high-end desktop platform can now run certain simulation and modeling applications that previously required an expensive cluster.
The falling costs of hardware and applications, along with related technological advances, have made supercomputing possible for a new wave of enterprise users for whom the technology was previously out of reach. Many members of this so-called "missing middle" have been waiting for such a chance to access this technology for its tremendous competitive advantages.
Another way this missing middle can gain entry onto the HPC playing field, though not mentioned in the Enterprise Systems article, is via the cloud. The missing middle is teeming with small-to-mid-sized businesses eager to take advantage of high-end technologies but lacking the kind of funding needed to attract the attention of big-name HPC vendors. Cloud computing has changed that dynamic. With cloud platforms like Amazon's EC2 and Penguin Computing's Penguin on Demand (aka POD), users in essence pool their buying power to access shared HPC resources. With a cheaper buy-in than on-premise machines, less overhead, and the flexibility to shrink or expand usage on demand, cloud is a natural choice for many in the missing middle. For some, it may open a door to HPC where none previously existed.
Whether a user chooses an on-premise cluster, a hosted platform, or some combination thereof to run their applications will depend on how their particular needs align with the perceived benefits and costs of each system. As always, the buyer's objective is the highest performance at the lowest price, and it's the vendor's job to maintain an attractive cost-to-benefit proposition.
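That cost-to-benefit calculation can be sketched as simple break-even arithmetic: an on-premise cluster is a large fixed cost amortized over its lifetime, while cloud capacity is pay-as-you-go. The figures below are purely illustrative assumptions (not vendor pricing), meant only to show the shape of the comparison.

```python
# Hypothetical break-even sketch: at what annual usage does an on-premise
# cluster become cheaper than renting equivalent cloud HPC capacity?
# All figures are illustrative assumptions, not real vendor pricing.

CLOUD_RATE = 0.10          # assumed $ per core-hour for on-demand cloud HPC
CLUSTER_CAPEX = 250_000.0  # assumed purchase price of a small cluster
CLUSTER_OPEX = 50_000.0    # assumed yearly power/cooling/admin overhead
LIFETIME_YEARS = 4         # assumed useful life of the hardware

def on_premise_cost_per_year() -> float:
    """Amortized yearly cost of owning and running the cluster."""
    return CLUSTER_CAPEX / LIFETIME_YEARS + CLUSTER_OPEX

def cloud_cost_per_year(core_hours: float) -> float:
    """Pay-as-you-go yearly cost for the same workload in the cloud."""
    return CLOUD_RATE * core_hours

def break_even_core_hours() -> float:
    """Usage level at which both options cost the same per year."""
    return on_premise_cost_per_year() / CLOUD_RATE

if __name__ == "__main__":
    be = break_even_core_hours()
    print(f"Break-even: {be:,.0f} core-hours/year")
    for usage in (200_000.0, 2_000_000.0):
        cheaper = ("cloud"
                   if cloud_cost_per_year(usage) < on_premise_cost_per_year()
                   else "on-premise")
        print(f"{usage:>12,.0f} core-hours/year -> {cheaper} is cheaper")
```

Under these assumed numbers, light or bursty users sit well below the break-even point and favor cloud, while sustained heavy users favor owning the hardware, which matches the article's point that the right choice depends on each buyer's particular needs.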
Full story at Enterprise Systems