December 16, 2005
As more businesses look to high-performance computing to solve problems for their increasingly sophisticated applications, demand for enterprise HPC is burgeoning. Industries such as automotive, aerospace, pharmaceutical, oil & gas and financial services now require computationally intensive processing to compete effectively in their markets. These industries are beginning to see HPC as the core of the value chain and are searching for ways to put it into production. At the same time, the rapid rise of clustered and grid computing architectures for HPC applications has created a cost-effective model for business organizations to meet their growing computing needs.
Sun Microsystems is leveraging its expertise in both HPC and mission-critical enterprise computing to address this large and expanding market. Sun has developed a variety of products and strategies designed with the enterprise HPC user in mind. Sun's solutions reflect the current trend in the enterprise to use commodity hardware and software in a clustered or grid computing environment. In addition, Sun is focusing on the requirements inherent in enterprise IT, including business-oriented acquisition, high reliability, ease of deployment and ease of use.
Sun has four principal offerings for enterprise HPC users, most of which have been introduced within the past year: the Sun Solution Center for HPC, the Sun Fire x64 server family, the Sun Grid Rack System, and the Sun Grid Compute Utility. Each targets a different level of solution for the enterprise, and each is described in some detail below.
Sun Solution Center for HPC
Opened this past November, the Sun Solution Center in Hillsboro, Oregon is designed to make HPC practical and attainable for a wide array of customers and partners. The facility offers customers access to Sun scientists and algorithm experts who specialize in developing and deploying large-scale HPC solutions, and also provides them access to HPC infrastructure at the facility. In this environment, customers have the option of deploying and running their applications on a variety of operating systems, including Solaris, Linux or Windows. The facility and its HPC experts can help customers build and deploy large-scale HPC clusters and data centers as they experiment, benchmark, test and optimize scalable grid-based applications.
As the need for computational power grows, scientists, researchers and engineers need to run simulations that require thousands of times more compute power than current systems deliver. Today, one of the most cost-effective ways to meet this need is with clusters of systems. Currently, the Sun Solution Center for HPC includes 664 systems with 2144 processor cores, all connected with a high-speed InfiniBand network, delivering over 10 teraflops of compute capacity. Because large-scale clusters can be extremely challenging to deploy, operate effectively, power and cool, the Solution Center plans to provide support for customers deploying large-scale compute cluster environments and to offer them the opportunity to test their applications and achieve optimal performance.
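As a rough sanity check, the quoted core count and the "over 10 teraflops" figure line up with per-core peak rates typical of dual-core Opterons of that era. The sketch below works through the arithmetic; the clock rate and flops-per-cycle values are assumptions, not figures published by Sun.

```python
# Rough capacity estimate for the Sun Solution Center cluster described above.
# The clock rate and flops-per-cycle values are assumptions typical of
# dual-core AMD Opterons of that era, not figures published by Sun.

systems = 664
cores = 2144
assumed_clock_ghz = 2.4        # assumed per-core clock rate
assumed_flops_per_cycle = 2    # assumed double-precision flops per cycle

peak_gflops_per_core = assumed_clock_ghz * assumed_flops_per_cycle
peak_tflops = cores * peak_gflops_per_core / 1000.0

print(f"Average cores per system: {cores / systems:.1f}")
print(f"Estimated peak capacity:  {peak_tflops:.1f} teraflops")
# ~10.3 teraflops, in line with the "over 10 teraflops" figure quoted above
```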
Customers such as Aachen University and Clemson University plan to use the new facility for testing upcoming HPC projects. "With the new Sun Solution Center for HPC we can test and tune our grid-based applications, leveraging the highest-performance x64 servers in the market," said Jim Leylek, director and professor of mechanical engineering, Advanced Computational Research Laboratory, Clemson University. "Through its high-performance, low-cost and easy-to-deploy offerings, Sun is making HPC production ready and accessible."
Sun Fire x64 Servers
The Sun Fire "Galaxy" x64 servers, launched in September of 2005, represent Sun's flagship hardware platform for enterprise HPC. Powered by dual-core 64-bit AMD Opteron processors, the new servers consume about one-third the power, are one-and-a-half times the performance, and cost half as much as comparably configured 4-way Xeon-based servers from Dell. Although, the servers come equipped with the open-source Solaris 10 OS, Windows and Linux are also supported.
The top-of-the-line Galaxy servers, the Sun Fire X4100 and Sun Fire X4200, are the first x64 servers based on designs from the team of one of Sun's founders, Andy Bechtolsheim. "The new Sun Fire X4100 and Sun Fire X4200 servers are designed to deliver the highest CPU performance in an enterprise-class 1U and 2U chassis, with complete remote management capabilities," said Andy Bechtolsheim, chief architect and senior vice president, Network Systems Group, Sun Microsystems. "These systems deliver a combination of performance, features and value to customers that is not available from any other server supplier today."
Sun Grid Rack System
The Sun Grid Rack Systems allow users to custom-configure Sun Fire x64 servers in a rack. A system may be configured with a choice of Sun Fire X2100, Sun Fire X4100 or Sun Fire X4200 servers, the Sun Secure Application Switch - N1000 Series, a choice of operating systems and the Sun N1 System Manager. Systems may be configured online with the Sun Grid Rack System Configuration Tool, which interactively guides customers through choosing components. The Sun Grid Rack System is targeted at applications like electronic design automation, mechanical computer-aided engineering, petroleum reservoir simulation and seismic processing, life sciences research, and any other application that requires high performance and scalability.
As with the stand-alone x64 servers, Solaris 10 is the default OS, but Red Hat Enterprise Linux or SUSE Linux may also be pre-loaded, and Microsoft Windows is also supported. According to company officials, a Sun Grid Rack System containing 32 Sun Fire X4100 servers, the Solaris 10 OS and the Sun Java System Application Server is 50 percent less expensive than comparable offerings from IBM or HP equipped with Intel Xeon processors.
"What we can do is build and test the configuration in the factory and send it to the customer ready-to-go," explained Bjorn Andersson, Director of HPC and Grid Computing, Sun Microsystems. "So once it arrives, it's a matter of hours to get the system up and running, rather than days or weeks."
Sun Grid Compute Utility
The Sun Grid Compute Utility is designed to help customers derive benefits from grid-based computing infrastructure on a utility basis by giving them choice and control over how they purchase and leverage IT. The Sun Grid compute utility charges a flat rate of $1 per CPU-hour. This pay-per-use model allows users to purchase computing power as they need it, without owning and managing the assets, enabling organizations to address budget issues by moving capital expenditures to operational expenditures.
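As a concrete illustration of that pricing model, the sketch below works out the cost of a hypothetical seasonal batch job at the $1 per CPU-hour rate; the job size and run frequency are invented for the example, not taken from Sun.

```python
# Minimal sketch of the pay-per-use economics described above. The flat rate
# comes from the article; the job size, duration and run frequency are
# hypothetical examples, not figures from Sun.

RATE_PER_CPU_HOUR = 1.00   # Sun Grid flat rate, in USD

def utility_cost(cpus: int, hours: float) -> float:
    """Cost of a job run on the Sun Grid Compute Utility."""
    return cpus * hours * RATE_PER_CPU_HOUR

# Hypothetical quarterly risk-analysis batch job: 500 CPUs for 48 hours,
# run four times a year.
per_run = utility_cost(cpus=500, hours=48)
annual_cost = per_run * 4

print(f"Cost per run:  ${per_run:,.0f}")       # $24,000
print(f"Cost per year: ${annual_cost:,.0f}")   # $96,000
# The equivalent owned cluster would sit idle between runs, which is the
# capital-to-operational shift the utility model is meant to address.
```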
Many enterprises are faced with resource utilization issues - they need vast amounts of compute power, but only at certain times. The rest of the time, resources are underutilized. This is true in many industries, including financial services and energy, where large batch jobs, such as risk/portfolio analysis and seismic processing, are run seasonally or on a project basis.
As Stuart Wells, executive vice president, utility computing, Sun Microsystems, explained: "Sun Grid allows organizations to accommodate the peaks and crests in their business cycles. Customers can leverage their infrastructures for sustained capacity, while using Sun Grid to dial up or down their usage. Businesses are looking for alternatives to increase data center capacity and reduce idle cycles. Sun Grid helps provide such an alternative."