December 08, 2008
New industry standard unleashes the vast computing power of modern processors
SINGAPORE, Dec. 9 -- The Khronos Group today announced the ratification and public release of the OpenCL 1.0 specification, the first open, royalty-free standard for cross-platform, parallel programming of modern processors found in personal computers, servers and handheld/embedded devices. OpenCL (Open Computing Language) greatly improves speed and responsiveness for a wide spectrum of applications in numerous market categories from gaming and entertainment to scientific and medical software. Proposed six months ago as a draft specification by Apple, OpenCL has been developed and ratified by industry-leading companies including 3DLABS, Activision Blizzard, AMD, Apple, ARM, Barco, Broadcom, Codeplay, Electronic Arts, Ericsson, Freescale, HI, IBM, Intel Corporation, Imagination Technologies, Kestrel Institute, Motorola, Movidia, Nokia, NVIDIA, QNX, RapidMind, Samsung, Seaweed, TAKUMI, Texas Instruments and Umeå University. The OpenCL 1.0 specification and more details are available at http://www.khronos.org/opencl/.
"The opportunity to effectively unlock the capabilities of new generations of programmable compute and graphics processors drove the unprecedented level of cooperation to refine the initial proposal from Apple into the ratified OpenCL 1.0 specification," said Neil Trevett, chair of the OpenCL working group, president of the Khronos Group and vice president at NVIDIA. "As an open, cross-platform standard, OpenCL is a fundamental technology for next generation software development that will play a central role in the Khronos API ecosystem and we look forward to seeing implementations within the next year."
"We are excited about the industry-wide support for OpenCL," said Bertrand Serlet, Apple's senior vice president of Software Engineering. "Apple developed OpenCL so that any application in Snow Leopard, the next major version of Mac OS X, can harness an amazing amount of computing power previously available only to graphics applications."
OpenCL enables software developers to take full advantage of a diverse mix of multi-core CPUs, Graphics Processing Units (GPUs), Cell-type architectures and other parallel processors such as Digital Signal Processors (DSPs). OpenCL consists of an API for coordinating parallel computation and a programming language for specifying those computations. Specifically, the OpenCL standard:
- Supports both data- and task-based parallel programming models;
- Utilizes a subset of ISO C99 with extensions for parallelism;
- Defines consistent numerical requirements based on IEEE 754;
- Defines a configuration profile for handheld and embedded devices;
- Efficiently interoperates with OpenGL, OpenGL ES and other graphics APIs.
Quotes from Working Group Members
Rick Bergman, senior vice president and general manager of the Graphics Products Group at AMD, said: "AMD believes that broad adoption of industry standards by hardware and software vendors is essential to successfully harnessing the power of stream computing in a wide array of mainstream applications. AMD has consistently supported an open, industry standards approach to stream computing, and is an aggressive proponent of the OpenCL standard. Now that OpenCL 1.0 is ratified, AMD plans to evolve its ATI Stream Software Development Kit to comply with the new specification to give developers, businesses and consumers maximum choice and flexibility in leveraging the computational capabilities of our graphics processors."
Andrew Richards, chief executive of Codeplay Software Limited, stated: "Codeplay is proud to have contributed to the definition and specification of the OpenCL 1.0 standard. OpenCL 1.0 will play a vital part in opening up the power of manycore processors and GPUs to developers in many application sectors. This standard will help Codeplay to continue to innovate in the production of programming tools for developers targeting the new heterogeneous processor architectures, whilst maintaining interoperability with other elements in the development tool-chain. Codeplay plans to implement conformance with OpenCL 1.0 for its award-winning Sieve C++ Manycore Programming Platform during 2009."
Elliot Garbus, Intel vice president and general manager of the Visual Computing Software Division, said: "Over the years Intel has worked closely with the industry to innovate through open standards and is a long-standing member of the Khronos board of promoters. With the introduction of OpenCL, we see new opportunities for developers to innovate through a task- and data-parallel programming environment that can benefit from the performance and flexibility of current and future Intel products."
Tony King-Smith, vice president of marketing at Imagination Technologies, said: "Imagination is delighted to have been involved in the authoring of OpenCL, which we see as a significant development for the future of GP-GPU based computing for multimedia."
Tony Tamasi, senior vice president of technical marketing at NVIDIA, stated: "OpenCL adds fuel to the most exciting parallel computational revolution of our generation: GPU Computing. It also provides another powerful way to harness the enormous processing capabilities of our CUDA-based GPUs on multiple platforms."
Michael McCool, founder and chief scientist at RapidMind, said: "As a provider of a high-level parallel programming platform, RapidMind is excited about the availability of a new standard for targeting compute devices through a single API. The low-level access to a variety of devices provided by OpenCL will allow our platform to expand to new devices more quickly than ever before."
OpenCL Briefing at SIGGRAPH ASIA
Representatives from Khronos and the OpenCL Working Group will be presenting an overview of the OpenCL specification at the Khronos Developer University at SIGGRAPH Asia in Singapore on Dec. 10, 2008. More details of this free event are available at http://www.khronos.org/news/events/detail/siggraph_asia_2008.
About The Khronos Group
The Khronos Group is an industry consortium creating open standards to enable the authoring and acceleration of parallel computing, graphics and dynamic media on a wide variety of platforms and devices. Khronos standards include OpenGL, OpenGL ES, OpenMAX, OpenVG, OpenKODE, and COLLADA. All Khronos members are able to contribute to the development of Khronos specifications, are empowered to vote at various stages before public deployment, and are able to accelerate the delivery of their cutting-edge media platforms and applications through early access to specification drafts and conformance tests. More information is available at www.khronos.org.
Source: The Khronos Group