August 21, 2008
On Wednesday, Intel introduced Parallel Studio, a suite of four tools to help application developers write parallel programs on multicore and manycore platforms. The suite is aimed at the millions of C and C++ programmers struggling to incorporate parallelism into their applications, and it does so not just by providing tools, but also by baking the expertise needed to use them effectively right into the toolset.
Intel is highly motivated to help programmers get the most out of its processors. The multicore transition that Intel is now shepherding programmers through is, as we've known in HPC for at least two decades, a fundamental change in the way developers think about programming. Engineering (and re-engineering) software for good performance on more than one processor is difficult work. Estimates of the number of software developers in the world vary widely, but most fall between 1 million and 12 million. Even if the real number is toward the low end of that range, that's still a whole lot of parallel programming education.
In response to this challenge, Intel has invested in a wide range of ventures over the past year. It has funded two Universal Parallel Computing Research Centers (UPCRC) at Berkeley and UIUC to, as Marc Snir put it, "make 'parallel programming' synonymous with 'programming.' " The company is also providing a raft of advanced training material to professors at universities all over the world to help them integrate the concepts of parallelism into fundamental computer science curricula, and Intel estimates that 40,000 undergraduate students will see that material this year. Recently, Intel announced that it had teamed up with HP and Yahoo! to create a cloud computing research test bed "designed to encourage research on the software, datacenter management and hardware issues associated with cloud computing at a larger scale than ever before."
The company also has had long-standing investments in compilers and programming tools for its chips, with products such as Cluster OpenMP, VTune Performance Analyzer, Trace Collector, MPI library, Math Kernel Library, and Fortran compilers. Recently, they've been adding parallel support to this portfolio, with tools like Threaded Building Blocks and a new language developed specifically to support multicore computing.
This week the company has taken what promises to be a dramatic leap forward. Intel Parallel Studio goes beyond just integrating Intel's compilers and debuggers into Microsoft's Visual Studio Integrated Development Environment (IDE). Parallel Studio's four components are aimed at helping developers throughout the entire lifecycle of an application: from planning where to put parallelism to ensuring that an application is behaving as expected. Open beta tests will start later this year, with the product likely to ship sometime in the second half of 2009.
When completed, Parallel Studio will consist of four tools: Parallel Advisor, Parallel Composer, Parallel Inspector, and Parallel Amplifier. The components are aimed at the major stages of the development lifecycle for an application: planning, coding, debugging and tuning. Advisor is an interesting piece of technology, and with it Intel has decided to directly address the expertise gap faced by the many programmers already in the workforce who have little or no parallel programming experience.
Intel's James Reinders, director of Intel Software Development Products, was careful to emphasize that Advisor is not an auto-parallelization tool or magic bullet. He explained Advisor's role to me in terms of the workflow that programmers often employ when adding parallelism to an existing code. Users often profile their application first to look for hotspots, and then run experiments in the most promising of those hotspots to determine whether parallelism might pay off in that section of code. Unfortunately, these experiments often either don't work out, or turn into development projects in their own right as developers run up against unforeseen problems, like globally shared structures that have to be modified to permit safe parallel access.
Advisor takes advantage of static and dynamic code analysis capabilities Intel has developed for its compilers and other projects to give the developer a heads-up on these problems, and offers advice on ways to resolve them. In this way Advisor can help the developer identify high-payoff areas to inject parallelism, and provide recommendations on a sound implementation. As Reinders points out, this is a totally new class of tool, and Intel wants to make sure they get it right. Although it stands at the beginning of the parallel application development lifecycle, its beta will be released last -- sometime in the first half of 2009 -- to give Intel time to mature the offering.
Parallel Composer is where code gets written in Parallel Studio, and it builds directly upon Intel's existing code development tools. "Composer is the most mature part of Parallel Studio, which is vital," says Reinders. "At some level if we goof with some of the other tools we've just provided bad advice. But Composer is where users are creating real code, and there is no room for error there." Composer will include support for Microsoft's Concurrency Runtime when Microsoft releases the final version, expected next year. The Concurrency Runtime (CR) is meant to decrease the difficulties programmers encounter in properly adding threads and support for asynchronous events to applications. In addition to adding a simple object model to allow programmers to easily express complex thread coordination patterns, the CR maintains its own thread pool, eliminating the performance overhead normally associated with marshaling threads dynamically.
Parallel Inspector is the focal point for debug activities in Parallel Studio. It is based on Intel Thread Checker, and Reinders describes it as a "proactive bug finder." Inspector's unique benefit is that it will try to find problems that haven't yet manifested as bugs. For example, Inspector will search the application for data races, deadlocks, and other usage errors that often don't appear for long periods after release, or only show up in unpredictable ways.
The final component, Parallel Amplifier, builds upon a technology proof of concept that Intel posted at WhatIf.Intel.com some time ago. Intel's VTune is powerful but hard to use, and Reinders says that after posting the prototype Performance Tuning Utility, it quickly became the most popular download at the site, even for developers within Intel. Amplifier builds on this tool, so that users will again have the advantage of using technology that has already been field tested in the real world. It is designed for non-experts, and incorporates visualization to help users understand what's going on.
Perhaps the best part is that users of Parallel Studio don't have to subscribe to an all-or-nothing proposition. The tools work independently, and each works with the other compilers and tools from Microsoft, Intel, and others, so developers are free to move at their own pace, adopting specific Studio tools as needs dictate.
There is a lot of promise in these tools, and my first thoughts were about getting the technology out farther than "just" the Windows community. Parts of the suite, particularly from the Composer tool, will migrate into Intel's other tools, which are supported on other platforms (Linux and OS X). But Reinders pointed out that the other three tools are really IDE-centric, and that they need an environment in which to structure the advice they provide to developers. According to him, Intel has invested significant effort in tightly integrating Parallel Studio into Visual Studio because they didn't want developers to sense that they were doing something different from "regular" programming as they worked on parallel code. Reinders is optimistic that the technology may make its way into IDEs on other platforms, like Eclipse and XCode, but says that Intel doesn't have any plans to do so at this time.