November 08, 2010
There seems to be no end to the roll-out of GPGPU-accelerated server offerings this year. The latest comes from server maker AMAX, which has introduced what it says is the densest GPU computing system on the market. The ServMax AS-5160G is a 16-GPU, 4-CPU system that delivers more than 8 teraflops of high performance computing in a mere 5U of rack space.
Headquartered in Fremont, California, AMAX is a 30-year-old company that builds server and storage solutions for the enterprise and high performance computing markets. Its rise in the HPC space coincided with the commercialization of general-purpose GPU computing in 2007. The company got onboard the GPGPU bandwagon early, joining NVIDIA as an initial launch partner when the first generation Tesla processors were introduced. Since then, AMAX has ridden the ascent of the technology into high performance computing. Today, about half of the company's revenue is derived from HPC customers, and about half of that now comes from GPU-equipped gear.
The ServMax AS-5160G is the newest in a series of GPGPU-based workstations, servers and clusters from AMAX. The majority of these are based on NVIDIA gear, although AMAX does offer one model equipped with AMD's ATI FireStream 9370. The company's initial 1U dual-GPU server led to a 4U 4-GPU box, and then to a 4U 8-GPU box. But according to Matt Thauberger, AMAX's corporate technical advisor, HPC customers can't seem to get enough of those clever little graphics chips and are demanding them in the densest footprint possible.
For that crowd, the 5U AS-5160G should look pretty attractive. The design marries a 16-GPU 3U chassis with two dual-socket x86 1U servers. The 3U graphics box is AMAX's ServMax AS-3160G building block, equipped with 16 NVIDIA Fermi Tesla modules. As with most of the big GPGPU boxes on the market, the graphics modules are hot-pluggable and can easily be slid in and out for serviceability. On the host side are the two 1U servers, which can come with either AMD Opteron or Intel Xeon CPUs. The CPUs are there mainly to drive the 16 GPUs, where most of the computational muscle lies (16.48 teraflops of single-precision floating point, or half that in double precision). Scaled up to a 42U rack, an AS-5160G cluster will deliver over 64 double-precision teraflops.
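The figures above hang together on some quick back-of-the-envelope arithmetic (the teraflop and rack-unit numbers are from the article; the calculation itself is just a sanity check):

```python
# Back-of-the-envelope check of the AS-5160G performance figures.
GPUS_PER_SYSTEM = 16
SYSTEM_SP_TFLOPS = 16.48                   # quoted single-precision total
SYSTEM_DP_TFLOPS = SYSTEM_SP_TFLOPS / 2    # Fermi DP rate is half of SP

# The per-GPU share works out to roughly 1.03 TF single precision,
# consistent with a Fermi-class Tesla module.
sp_per_gpu = SYSTEM_SP_TFLOPS / GPUS_PER_SYSTEM

# Eight 5U systems fill 40U of a 42U rack, leaving 2U for networking.
SYSTEMS_PER_RACK = 42 // 5
rack_dp_tflops = SYSTEMS_PER_RACK * SYSTEM_DP_TFLOPS

print(round(sp_per_gpu, 2))      # 1.03
print(round(rack_dp_tflops, 1))  # 65.9 -- "over 64 teraflops"
```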
The company is touting this newest GPGPU offering as the densest on the market. That depends on how you slice it, though. Appro recently introduced its Tetra 1U server, which is equipped with two x86 CPUs and four Fermi GPUs. String four of those together and you get 16 GPUs plus 8 CPUs in a 4U space, which would edge out the ServMax AS-5160G in both density and sheer computational performance.
Other competition includes Dell, which can build a 7U 16-GPU solution from its 3U PowerEdge C410x GPU expansion chassis and two 2U PowerEdge C6100 servers. HP is also in the running with its new dual-socket 3-GPU ProLiant SL390s G7; coupling four together yields a nifty little 4U 12-GPU cluster.
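Lining up the configurations mentioned above makes the density picture easier to see. The GPU counts and rack-unit sizes below come from the article; GPUs-per-U is simply my own tally, not a vendor metric:

```python
# GPUs per rack unit for the configurations discussed above.
# (gpu_count, rack_units) per configuration, as described in the article.
configs = {
    "AMAX AS-5160G":           (16, 5),
    "Appro Tetra x4":          (16, 4),
    "Dell C410x + two C6100s": (16, 7),
    "HP SL390s G7 x4":         (12, 4),
}

# Print densest-first.
for name, (gpus, units) in sorted(configs.items(),
                                  key=lambda kv: -kv[1][0] / kv[1][1]):
    print(f"{name:26s} {gpus / units:.2f} GPUs/U")
```

By this simple measure the Appro configuration (4.00 GPUs/U) leads the AMAX box (3.20), with HP (3.00) and Dell (2.29) trailing; what the raw ratio ignores, of course, is the CPU count needed to drive those GPUs.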
The AMAX system would be the most energy efficient of any of these, though, since it uses only four CPUs to drive its 16 graphics processors. For applications that can throw most of the computation onto the GPU, you're likely to need only a single CPU core to drive each GPU. So given that even low-end server chips nowadays are quad-core, four CPUs seems like plenty. When the 16-core AMD "Interlagos" Opteron is released next year, some enterprising server maker may decide to build a 3U 16-GPU chassis with a single CPU to rule them all.
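The one-host-core-per-GPU pattern described above is commonly implemented as one driver process per device. A minimal, hypothetical sketch follows; NUM_GPUS and drive_gpu are placeholders rather than anything AMAX ships, and the device pinning assumes NVIDIA's standard CUDA_VISIBLE_DEVICES behavior:

```python
# One host process per GPU: each worker restricts itself to a single
# device before any GPU library initializes, then feeds that device.
import multiprocessing as mp
import os

NUM_GPUS = 16  # one AS-5160G's worth of devices (placeholder value)

def drive_gpu(device_id):
    # CUDA_VISIBLE_DEVICES makes only this one device visible to the
    # process; the heavy lifting then runs on the GPU, so a single
    # host core per device is typically sufficient.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(device_id)
    return device_id  # stand-in for real offloaded work

if __name__ == "__main__":
    with mp.Pool(processes=NUM_GPUS) as pool:
        finished = pool.map(drive_gpu, range(NUM_GPUS))
    print(finished == list(range(NUM_GPUS)))  # True
```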
Of course, relatively few HPC codes these days are so GPU-hungry, so customers usually have to hedge their bets and keep more CPUs around. But for applications such as seismic codes, image rendering, financial modeling and many vector-heavy science codes, graphics processor usage can often be maxed out. It is this application set that AMAX is primarily targeting with the GPU-rich AS-5160G.
Although the company is not offering pricing information on the new product, it believes its cost runs about 10 percent below the competition for similar equipment, according to customer feedback it has received. In addition, since AMAX doesn't impose an MOQ (minimum order quantity) on its gear, as some of the tier 1 vendors are wont to do, even a single 5U system can be installed for customers with modest needs.
Not that AMAX particularly specializes in small HPC setups. According to Thauberger, most GPGPU systems they've installed run between 60 and 200 servers. Customers range across the usual HPC segments, but are especially concentrated in oil and gas exploration companies, universities, the US Department of Defense, research labs, and financial services firms. The company currently claims over 40 customers for its Tesla-equipped gear and expects that number to increase.
They also expect their footprint to expand geographically. Today most of their business is in the US, but the company is seeing demand in China ramping up. AMAX has two branch offices in China (Suzhou and Shanghai) and one in Taiwan to serve that market. "China is one of the larger markets for this technology due to their oil & gas firms and research organizations," says Thauberger.
The AS-5160G is available immediately and AMAX will be demonstrating the new system, along with its other HPC gear, at SC10 next week in New Orleans. As mentioned before, pricing has not been made public, but I'm sure AMAX will be happy to offer quotes to potential customers.