November 08, 2010
There seems to be no end to the roll-out of GPGPU-accelerated server offerings this year. The latest comes from server maker AMAX, which has introduced what it says is the densest GPU computing system on the market. The ServMax AS-5160G is a 16-GPU, 4-CPU system that delivers more than 8 teraflops of double-precision performance in a mere 5U of rack space.
Headquartered in Fremont, California, AMAX is a 30-year-old company that builds server and storage solutions for the enterprise and high performance computing markets. Its rise in the HPC space coincided with the commercialization of general-purpose GPU computing in 2007. The company got onboard the GPGPU bandwagon early, joining NVIDIA as an initial launch partner when the first generation Tesla processors were introduced. Since then, AMAX has ridden the ascent of the technology into high performance computing. Today, about half of the company's revenue is derived from HPC customers, and about half of that now comes from GPU-equipped gear.
The ServMax AS-5160G is the newest in a series of GPGPU-based workstations, servers and clusters from AMAX. The majority of these are based on NVIDIA gear, although AMAX does offer one model equipped with AMD's ATI FireStream 9370. The company's initial 1U dual-GPU server led to a 4U 4-GPU box, and then to a 4U 8-GPU box. But according to Matt Thauberger, AMAX's corporate technical advisor, HPC customers can't seem to get enough of those clever little graphics chips and are demanding them in the densest footprint possible.
For that crowd, the 5U AS-5160G should look pretty attractive. The design marries a 16-GPU 3U chassis with two dual-socket x86 1U servers. The 3U graphics box is AMAX's ServMax AS-3160G building block, equipped with 16 NVIDIA Fermi Tesla modules. As is the case with most of the big GPGPU boxes on the market, the graphics modules are hot-pluggable and can be easily slid in and out for the sake of serviceability. On the host side are the two 1U servers, which can come with either AMD Opteron or Intel Xeon CPUs. The CPUs are there mainly to drive the 16 GPUs, where most of the computational muscle lies (16.48 teraflops of single-precision floating point, or half of that in double precision). Scaled up to a 42U rack, an AS-5160G cluster will deliver over 64 double-precision teraflops.
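The arithmetic behind those figures is easy to check. A minimal sketch (the per-GPU ratings below are inferred from the system totals quoted above, consistent with Fermi-generation Tesla modules, and are an assumption rather than official specs):

```python
# Back-of-envelope check of the AS-5160G performance figures quoted above.
GPUS_PER_SYSTEM = 16
SP_TFLOPS_PER_GPU = 16.48 / 16            # ~1.03 TF, inferred from the system total
DP_TFLOPS_PER_GPU = SP_TFLOPS_PER_GPU / 2  # double precision is half of single

sp_total = GPUS_PER_SYSTEM * SP_TFLOPS_PER_GPU  # 16.48 TF single precision
dp_total = GPUS_PER_SYSTEM * DP_TFLOPS_PER_GPU  # 8.24 TF double precision

# A 42U rack fits eight 5U systems (40U), leaving 2U for switches and such.
SYSTEMS_PER_RACK = 42 // 5
rack_dp = SYSTEMS_PER_RACK * dp_total           # ~65.9 TF, i.e. "over 64 teraflops"

print(f"{sp_total:.2f} TF SP / {dp_total:.2f} TF DP per system")
print(f"{rack_dp:.1f} TF DP per 42U rack ({SYSTEMS_PER_RACK} systems)")
```

The rack-level number works out to about 65.9 double-precision teraflops, matching the "over 64 teraflops" claim.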
The company is touting this newest GPGPU offering as the densest on the market. That depends on how you slice it, though. Appro recently introduced its Tetra 1U server, which is equipped with two x86 CPUs and four Fermi GPUs. If you string four of these together, you get 16 GPUs plus 8 CPUs in a 4U space. That would edge out the ServMax AS-5160G for sheer computational performance.
Other competition includes Dell, which can build a 7U 16-GPU solution from its 3U PowerEdge C410x GPU expansion chassis and two 2U PowerEdge C6100 servers. HP is also in the running with its new dual-socket 3-GPU ProLiant SL390s G7; coupling four together will yield a nifty little 4U 12-GPU cluster.
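Comparing these density claims head to head is a matter of dividing GPU count by rack units. An illustrative sketch, using only the configurations and figures cited in this article (not vendor datasheets):

```python
# GPUs per rack unit for the configurations mentioned in the article.
# (GPU count, rack units) pairs come from the article itself.
configs = {
    "AMAX ServMax AS-5160G (5U)":   (16, 5),
    "4x Appro Tetra (4U)":          (16, 4),
    "Dell C410x + 2x C6100 (7U)":   (16, 7),
    "4x HP SL390s G7 (4U)":         (12, 4),
}

density = {name: gpus / units for name, (gpus, units) in configs.items()}
for name in sorted(density, key=density.get, reverse=True):
    print(f"{name}: {density[name]:.2f} GPUs per U")
```

By this measure the Appro configuration comes out densest at 4 GPUs per U, with AMAX next at 3.2, which is why the "densest on the market" claim depends on how you slice it.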
The AMAX system would be the most energy efficient of any of these, though, since it uses only four CPUs to drive its 16 graphics processors. For applications that can throw most of the computation to the GPU, you're likely to need only a single CPU core to drive each GPU. So given that even low-end server chips nowadays are quad-core, four CPUs seems like plenty. When the 16-core AMD "Interlagos" Opteron is released next year, some enterprising server maker may decide to build a 3U 16-GPU chassis with a single CPU to rule them all.
Of course, relatively few HPC codes these days are so GPU-hungry, so customers usually have to hedge their bets and keep more CPUs around. But for applications such as seismic codes, image rendering, financial modeling and many vector-heavy science codes, graphics processor usage can often be maxed out. It is this application set that AMAX is primarily targeting with the GPU-rich AS-5160G.
Although the company is not offering pricing information on the new product, they believe their cost runs about 10 percent below the competition for similar equipment, based on customer feedback they've received. In addition, since AMAX doesn't institute a minimum order quantity (MOQ) for its gear, as some of the tier 1 vendors are wont to do, even a single 5U system can be installed for customers with modest needs.
Not that AMAX particularly specializes in small HPC setups. According to Thauberger, most GPGPU systems they've installed run between 60 and 200 servers. Customers range across the usual HPC segments, but are especially concentrated among oil and gas exploration companies, universities, the US Department of Defense, research labs, and financial services firms. The company currently claims over 40 customers for its Tesla-equipped gear and expects that number to increase.
They also expect their footprint to expand geographically. Today most of their business is in the US, but the company is seeing demand in China ramping up. AMAX has two branch offices in China (Suzhou and Shanghai) and one in Taiwan to serve that market. "China is one of the larger markets for this technology due to their oil & gas firms and research organizations," says Thauberger.
The AS-5160G is available immediately and AMAX will be demonstrating the new system, along with its other HPC gear, at SC10 next week in New Orleans. As mentioned before, pricing has not been made public, but I'm sure AMAX will be happy to offer quotes to potential customers.