May 04, 2010
The new "Fermi" Tesla 20-series products from NVIDIA are about to hit the streets and HPC vendors are lining up to get the latest GPU goodies into their machines. This week, HPC cluster maker Appro has launched two Fermi-based systems: an updated GPU-accelerated GreenBlade offering and a brand new 1U server that puts 2 CPUs and 4 GPUs in the same box.
Launched is maybe too strong a word. According to John Lee, Appro's vice president of advanced technology solutions, the new products won't be shipping until late May or early June, when the Fermi chips finally start rolling out of the TSMC fabs in volume. But Appro is already taking orders for the new systems and is expecting NVIDIA's third-generation CUDA hardware to light a fire under the GPU acceleration business.
As an HPC specialist, Appro has been following NVIDIA's GPU computing ascendance with much interest. Fermi is the first graphics processor to bring ECC memory, hardware support for C++, and more than half a teraflop of double precision to the GPU computing realm. With the vector-like processor about to debut, what was once a two-CPU rivalry between Intel and AMD is now a much more interesting three-way race. "I think it's a pretty huge milestone for high performance computing," says Lee.
Both new Appro offerings will make use of the M2050 Tesla modules from NVIDIA, which are integrated onto the system motherboards rather than attached as standalone cards that plug into a PCIe slot. As it turns out, the M-series devices are the only ones NVIDIA is going to certify for datacenter deployment. According to Lee, the GPU maker is not supporting C-series cards in rackmount form factors; those are intended only for workstations and deskside systems. The M2050 comes with 3 GB of GDDR5 memory and delivers about 515 double precision gigaflops per GPU, or just over a teraflop if your app can get by with single precision floating point.
Appro's Fermi option on the GreenBlade is based on a one-to-one pairing of CPUs and GPUs. The 5U enclosure consists of 5 dual-CPU blades hooked up to 5 dual-GPU expansion blades using a PCIe link. The CPUs may be either late model AMD Opterons or Intel Xeons, but most of the FLOPS are provided by the GPUs. A fully configured enclosure delivers more than 5 raw teraflops of double precision goodness.
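As a back-of-the-envelope check (a sketch based only on the per-GPU figure quoted above, not vendor-published math), the enclosure's aggregate double precision rate works out as follows:

```python
# Rough aggregate-FLOPS estimate for a fully configured GreenBlade
# enclosure: 5 dual-GPU expansion blades, each GPU an M2050.
GPUS_PER_BLADE = 2
BLADES_PER_ENCLOSURE = 5
DP_GFLOPS_PER_M2050 = 515  # double precision, per the article

total_gpus = GPUS_PER_BLADE * BLADES_PER_ENCLOSURE
total_dp_tflops = total_gpus * DP_GFLOPS_PER_M2050 / 1000.0
print(total_dp_tflops)  # 5.15, matching "more than 5 raw teraflops"
```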
The Fermi-revved GreenBlade is aimed at small to mid-sized cluster deployments of GPUs for users who need a balance of CPU and GPU resources, or who may otherwise be constrained from denser GPU configurations by a lack of available power. One advantage of the CPU-GPU blade separation is the ability to upgrade components individually. Given that CPUs and GPUs are on different refresh cycles -- and generally the cadence for the GPU refresh is somewhat faster -- it should be possible to snap in new blades whenever Intel, AMD, or NVIDIA releases the next generation of its silicon.
Appro's second product is a new 1U server that holds four M2050 Tesla GPUs plus two CPUs (either Xeon 5600 or Opteron 6100 processors). Called the Tetra -- 4 GPUs, get it? -- the server is, Appro claims, the densest CPU-GPU combo in the industry. Each 1U enclosure delivers two double precision teraflops, plus change. For local storage, there's support for up to six 3 TB SATA drives.
As you can imagine, it takes plenty of juice to run the Tetra. The server comes with a 1,400 watt power supply and a whopping 12 cooling fans.
According to Lee, the Tetra is aimed at two customer sets: 1) customers who might otherwise opt for NVIDIA's quad-GPU S-series servers and 2) those looking to deploy GPUs at scale and wanting to maximize floating point density in the datacenter.
NVIDIA's own 1U Tesla boxes -- the previous generation S1060 and the upcoming Fermi-based S2050 and S2070 -- offer 4 GPUs per server, but have to be connected to a host CPU box via a PCI Express cable. By integrating CPUs and GPUs in the same 1U enclosure, Appro believes Tetra can usurp a chunk of this market.
The other Tetra market is for really big systems where codes scale particularly well on the GPU -- oil and gas apps and all sorts of science codes that have an insatiable appetite for matrix math. "With this particular product, you can theoretically fit about 80 teraflops of double precision performance into a single rack," says Lee. "We're very close to getting to that magical 100 teraflops per rack."
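Lee's 80-teraflop figure is easy to sanity-check. The sketch below is illustrative only: it assumes a standard 42U rack with a few slots reserved for switches and management gear, numbers the article does not specify.

```python
# Hypothetical rack-density estimate for Tetra nodes.
# Assumptions (not from the article): 42U rack, 3U reserved for
# networking and management hardware.
RACK_U = 42
RESERVED_U = 3
DP_TFLOPS_PER_TETRA = 4 * 515 / 1000.0  # four M2050s per 1U node

nodes = RACK_U - RESERVED_U
rack_tflops = nodes * DP_TFLOPS_PER_TETRA
print(round(rack_tflops, 1))  # 80.3, in line with Lee's ~80 TF figure
```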
Although Appro is not releasing specific pricing on the Tetra, Lee says he believes the new platform will be a very cost-effective solution for users looking to maximize double precision FLOPS/dollar. He estimates an entry level Tetra server would cost approximately $11-12K, while a more richly configured system could run $15-16K.
The most important configuration choices for both new systems are CPU type and memory capacity. Those selections will mostly be a function of how much of your code is (or can be) ported to the GPU, since unported apps will be confined to running on the CPU host hardware.
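The intuition behind that configuration trade-off can be sketched with Amdahl's law: the fraction of an application that stays on the CPU bounds the overall speedup, however fast the GPU portion runs. The numbers below are illustrative assumptions, not benchmarks.

```python
# Amdahl-style sketch (illustrative only): how the fraction of an app
# ported to the GPU bounds overall speedup.
def overall_speedup(ported_fraction, gpu_speedup):
    """Whole-app speedup when only part of it runs on the GPU."""
    return 1.0 / ((1.0 - ported_fraction) + ported_fraction / gpu_speedup)

# Assume (hypothetically) the GPU runs the ported portion 10x faster.
for f in (0.5, 0.9, 0.99):
    print(f, round(overall_speedup(f, 10.0), 2))
# 0.5 -> 1.82x, 0.9 -> 5.26x, 0.99 -> 9.17x: porting more code
# matters far more than raw GPU FLOPS.
```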
Appro has conveniently provided intelligent power control for these systems so that when the GPU parts are idle, they can be shut off. Since each M2050 module draws 225 watts, the energy savings will add up fast when these systems are in CPU-only mode. Of course, once you've gone to the expense of buying all these Fermis, there's going to be a lot of incentive to migrate as many of your production codes to the GPU as possible, especially considering that performance per watt numbers can be an order of magnitude better on the GPU than on its CPU counterpart.
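To put the idle shut-off in concrete terms, here is a rough estimate of the energy saved on a single Tetra node. The idle-hours figure is an assumption for illustration; only the 225 W per module and four GPUs per node come from the article.

```python
# Rough per-node energy-savings estimate for shutting off idle GPUs.
WATTS_PER_M2050 = 225    # per the article
GPUS_PER_TETRA = 4       # per the article
IDLE_HOURS_PER_DAY = 8   # assumption: GPUs idle a third of the day

kwh_saved_per_day = WATTS_PER_M2050 * GPUS_PER_TETRA * IDLE_HOURS_PER_DAY / 1000.0
print(kwh_saved_per_day)  # 7.2 kWh saved per node, per day
```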
Lee says the low-hanging fruit for GPU acceleration is the energy sector and big government labs, with biotech firms and financial institutions a close second. One of the first installations for Appro's Fermi gear will be at the Virginia Polytechnic Institute and State University. That system is scheduled for deployment in July. The company also has an order from an oil and gas company, which will remain anonymous.
Although Appro is one of the first cluster vendors out of the gate with new Fermi offerings (HPC ODM vendor AMAX previewed its Tesla 20-series offerings last month), Supermicro also announced its new Fermi gear this week. Expect more HPC system vendors large and small to roll out their latest Tesla-accelerated machinery over the coming weeks.