January 19, 2010
RENNES, France, Jan. 19 -- CAPS entreprise is accelerating its international expansion with a new distribution agreement with JCC-Gimmick covering the promotion and sale of its flagship product, HMPP (Heterogeneous Multicore Parallel Programming), in Japan.
GPU computing was one of the key stories of 2009. The use of hybrid systems, i.e., computer systems mixing CPUs and GPUs to accelerate applications, has spread from research to industry over the past two years. This year, numerous testimonials from research centers as well as commercial and government organizations have demonstrated the speedups such systems can deliver. Most of the leading system providers now include hybrid systems, from deskside supercomputers to very large installations, in their product portfolios.
CAPS entreprise provides end users of such systems with a breakthrough technology for porting legacy applications onto hybrid systems in record time. An efficient and portable hybrid application is generated automatically from the application's source code, annotated with directives or pragmas. CAPS entreprise's customers thus benefit from a pragmatic and elegant way to port their legacy applications.
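To illustrate the directive-based approach described above, the minimal C sketch below annotates an ordinary loop so that an HMPP-style toolchain could generate a GPU version of it. The directive spellings (codelet, callsite, target=CUDA, args[...].io) are recalled from HMPP 2.x usage and should be read as illustrative assumptions rather than verbatim CAPS documentation.

```c
#include <stdio.h>

/* Declare the function as a codelet that HMPP may compile for a CUDA
 * target; the y array is both read and written (io=inout).
 * Directive syntax is an assumption for illustration, based on HMPP 2.x. */
#pragma hmpp saxpy codelet, target=CUDA, args[y].io=inout
void saxpy(int n, float a, float x[n], float y[n])
{
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];   /* plain C loop, untouched by the port */
}

int main(void)
{
    enum { N = 1024 };
    float x[N], y[N];
    for (int i = 0; i < N; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    /* The annotated call site lets the HMPP runtime dispatch the codelet
     * to the accelerator; without the HMPP workbench the pragmas are
     * ignored and the code runs as ordinary C on the CPU. */
    #pragma hmpp saxpy callsite
    saxpy(N, 3.0f, x, y);

    printf("y[0] = %f\n", y[0]);
    return 0;
}
```

The point of the sketch is the workflow the press release describes: the application source stays standard C, and the hybrid (CPU plus GPU) build is obtained by adding annotations rather than rewriting the code.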
JCC-Gimmick is a growing player in supercomputing, providing expertise and training. The company is extending its offering with the distribution of HMPP and related services. Under this agreement, JCC-Gimmick's mission is to handle HMPP promotion, commercialization, deployment and support in Japan.
"In the development of CAPS entreprise in APAC, Japan is an important step. This agreement with JCC-Gimmick as a partner to promote HMPP on the Japanese market illustrates CAPS' international fast deployment strategy. I am glad to have JCC-Gimmick as a partner to promote our products," says Benoît Raoult, CAPS manager for APAC sales and partnerships. "This partnership creates value for the customers of HMPP in Japan, bringing an innovative and leading edge technology to end users, supported by the expertise in parallel programming of a growing reputation specialist like JCCGimmick."
"This new agreement gives us the opportunity to add to our HPC market offerings a major player in manycore programming," declares Y. Watanabe san, JCC-Gimmick executive general manager, "CAPS Software solutions fully complement the tools and services JCC-Gimmick provides. They offer new technology opportunities for our customers and complement our expertise."
About CAPS entreprise
CAPS entreprise gives software developers easy access to manycore systems. Its flagship product HMPP (Heterogeneous Multicore Parallel Programming) allows a single version of a given application to be developed, ported, maintained and deployed on several manycore systems, such as those integrating CPUs and GPUs. It unleashes the power of manycore architectures by providing an elegant and pragmatic solution for porting legacy software. Web site: www.caps-entreprise.com
JCC-Gimmick Ltd. provides consulting expertise in GPU computing using the HMPP tools created by CAPS entreprise. JCC-Gimmick can assist in applying GPGPU tools effectively to applications and in optimizing code performance. JCC-Gimmick is fully supported with fundamental technology from the Aoki Lab of the Graduate School of Information Sciences, Tohoku University. The Aoki Lab has developed a novel image matching technique using Phase-Only Correlation (POC) -- a technique for high-accuracy registration of 1D, 2D and 3D signals using the phase information of the discrete Fourier transform -- and has applied POC to a wide range of applications such as smart image sensors, super-resolution video signal processing, 3D machine vision, automotive image processing, biometric authentication and medical image analysis. JCC-Gimmick's Web site: www.jcc-gimmick.com. Aoki Lab of Tohoku University Web site: www.aoki.ecei.tohoku.ac.jp/index.html.
Source: CAPS entreprise