June 12, 2012
RENNES, France, June 12 -- OpenACC is an initiative from CAPS, Cray, NVIDIA and PGI to provide a new, open parallel programming standard. Based on a common set of directives for the C and Fortran languages, OpenACC lets programmers easily take advantage of the processing power of heterogeneous many-core architectures. Its multi-platform, multi-vendor model preserves investment in legacy applications by offering an easy migration path to accelerated computing.
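To give a sense of what the directive-based model looks like, here is a minimal sketch of an OpenACC-annotated C loop. It is our own illustration rather than an example from the release, and the function and variable names are assumptions; an OpenACC-aware compiler offloads the loop to an accelerator, while any other compiler simply ignores the pragma.

    /* Minimal OpenACC sketch (illustrative; names are assumptions). */
    #include <stdio.h>

    static void saxpy(int n, float a, const float *x, float *y)
    {
        /* An OpenACC compiler offloads this loop to the accelerator;
         * without OpenACC support the pragma is ignored and the loop
         * runs on the host CPU. */
        #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        enum { N = 4 };
        float x[N] = {1, 2, 3, 4};
        float y[N] = {1, 1, 1, 1};

        saxpy(N, 2.0f, x, y);        /* y[i] = 2*x[i] + y[i] */
        for (int i = 0; i < N; ++i)
            printf("%g ", y[i]);     /* prints: 3 5 7 9 */
        printf("\n");
        return 0;
    }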
HMPP Workbench is a compiler developed by CAPS entreprise and is available as a core tool on the TSUBAME 2.0 cluster, the fifth-largest supercomputer in the world. Cluster users can leverage the OpenACC support in HMPP 3.1 to gain best-of-breed, cross-platform multicore acceleration while coding to a common standard. The support of JCC-Gimmick, CAPS' partner in Japan, has been decisive in delivering HMPP Workbench to TSUBAME 2.0 users.
Thanks to the Tokyo Institute of Technology's contribution to the HMPP Competence Centers, a beta test program was set up around OpenACC to explore its use in new applications. “We are very pleased to have the Tokyo Institute of Technology on board the HMPP Competence Center. Its highly recognized expertise will help us study the efficient use of OpenACC in HPC applications,” says François Bodin, CAPS CTO.
“Having launched HMPP in the Japanese market three years ago, we are now very excited that the HMPP version supporting OpenACC is going to be deployed for experimentation on TSUBAME 2.0, a reference for GPGPU in Japan. We believe that this will, among other things, create new success stories in a thriving ecosystem,” says Yukiharu Watanabe, General Manager of JCC-Gimmick.
“The number of TSUBAME GPU users is growing constantly; however, they are advanced users writing their codes in CUDA or OpenCL. OpenACC will strongly encourage users with conventional codes to use the GPUs on TSUBAME,” says Takayuki Aoki, professor at the Global Scientific Information and Computing Center, Tokyo Institute of Technology.
About CAPS entreprise
CAPS entreprise is a leading provider of solutions for deploying applications on many-core systems. Its source-to-source HMPP™ compiler is based on C, C++ and Fortran directives and supports the OpenACC® and OpenHMPP standards. The compiler incorporates powerful NVIDIA® CUDA™ and OpenCL™ parallel code generators. With more than 10 years of scientific research experience and expertise, CAPS has many success stories in porting, optimizing and parallelizing codes in areas such as oil and gas, meteorology, biology, image processing and finance.
About JCC-Gimmick Ltd.
JCC-Gimmick Ltd. provides consulting expertise in GPU computing using the HMPP tools created by CAPS entreprise. JCC-Gimmick can help apply GPGPU tools effectively to applications and optimize code performance.
JCC-Gimmick's Website: www.jcc-gimmick.com
About the Global Scientific Information and Computing Center (GSIC), Tokyo Institute of Technology
GSIC has operated the TSUBAME 2.0 supercomputer, which is equipped with 4,224 NVIDIA Tesla M2050 GPUs, since November 2010. Its peak performance is 2.4 PFLOPS, and it is ranked as the fifth-fastest supercomputer on the Top500 list.
Source: CAPS entreprise