October 17, 2011
The Vikram Sarabhai Space Centre (VSSC) is serving as a proving ground for GPUs and heterogeneous computing in India. According to an article examining the center's move from a CPU-only system to a hybrid CPU/GPU environment, the switch brought significant gains in power consumption, floor space, and performance.
According to Vishal Dhupar, who manages Nvidia’s South Asian presence, VSSC needed “equipment in a single room delivering 220 teraflops,” but reaching that 200-plus-teraflop range with CPUs alone to run PARAS, its homegrown x86-tailored CFD application, would have required 5,000 CPUs. Dhupar says that Nvidia “offered them the same architecture, [ability to] use the same room and offer a quantum jump in performance with a hybrid architecture of CPUs and GPUs.” By adding 400 GPUs to the existing 400 CPUs, VSSC reached its 220-teraflop goal.
By comparison, another Indian supercomputing center, Tata CRL, built a 170-teraflop system with 3,600 CPUs at a cost of $30 million. VSSC achieved 220 teraflops with an investment of $3-3.5 million.
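Taken at face value, those figures imply a striking price/performance gap. A quick back-of-the-envelope sketch using the article's numbers (taking the midpoint of the quoted $3-3.5 million for VSSC) makes it concrete:

```python
# Price/performance comparison from the figures quoted in the article.
# Costs in US dollars, performance in teraflops; VSSC cost is the
# midpoint of the quoted $3-3.5 million range.
systems = {
    "Tata CRL (CPU-only)": {"cost": 30e6, "teraflops": 170},
    "VSSC (CPU+GPU hybrid)": {"cost": 3.25e6, "teraflops": 220},
}

for name, s in systems.items():
    dollars_per_tf = s["cost"] / s["teraflops"]
    print(f"{name}: ${dollars_per_tf:,.0f} per teraflop")
```

On these numbers, the CPU-only system works out to roughly $176,000 per teraflop versus roughly $15,000 per teraflop for the hybrid, an order-of-magnitude difference, though the two systems were of course built at different times and for different workloads.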
Dhupar says that “Only the code that was more parallelized had to be tweaked and this gave them a 40x performance boost on one account and a 60x boost on the other.”
As a further point of comparison, Prashant L. Rao writes that there is “a substantial energy efficiency advantage from using GPUs”: VSSC draws 150 kW to deliver 220 teraflops, while Tata CRL uses 2.5 MW for 170 teraflops.
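Expressed as flops per watt — and assuming the quoted consumption figures are sustained power draws of 150 kW and 2.5 MW, which the context suggests — the efficiency gap works out roughly as follows:

```python
# Flops-per-watt comparison, assuming the quoted figures are sustained
# power draws: 150 kW for VSSC and 2.5 MW for Tata CRL.
vssc_gflops_per_watt = (220e12 / 150e3) / 1e9   # 220 teraflops / 150 kW
tata_gflops_per_watt = (170e12 / 2.5e6) / 1e9   # 170 teraflops / 2.5 MW

print(f"VSSC: {vssc_gflops_per_watt:.2f} gigaflops/watt")
print(f"Tata CRL: {tata_gflops_per_watt:.3f} gigaflops/watt")
print(f"Ratio: {vssc_gflops_per_watt / tata_gflops_per_watt:.0f}x")
```

That is about 1.47 gigaflops per watt for the hybrid system against roughly 0.068 for the CPU-only one — a factor of about 22 on these figures.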
Rao also pointed to other differences between CPU-only and heterogeneous systems, noting, “Cost being a perennial problem, Nvidia hopes to convince scientists that they should move their data centers onto GPUs. At the same time, it wants to boost the acceptance of CUDA. They have been looking at Message Passing Interface (MPI) for parallel computing. MPI is a subset of the CUDA framework. So, there’s no relearning. The framework has SDKs, debuggers, libraries, compilers etc. Whether you use Fortran, C or C++, it’s all supported.”
Dhupar summed up the focus on GPUs in the rapidly growing Indian market (IDC estimates the HPC market in India is worth $200 million and growing at a 10 percent annual rate), pointing to the price, performance and efficiency changes that hybrid computing could bring. He claims, “With 2 teraflops available for $10,000, it changes the equation. We want every scientist or researcher to have this.”
This statement makes no bones about the fact that Nvidia is setting its sights on the Indian academic sector. The company hopes to provide these researchers with 2-8 teraflops on personal supercomputers and make it simple to mesh these together to form clusters or grid computing environments.
Full story at Express Computer