With few application programmers well-versed in parallel programming, and with dual- and quad-core processors spreading to all corners of the computing ecosystem, the demand for ready-to-use parallelized software is only going to get larger. That’s why numerical libraries from a variety of vendors (e.g., Intel, NAG and Visual Numerics) now come with built-in parallelization.
The MathWorks is following the same path by integrating the company’s Parallel Computing Toolbox with two MATLAB optimization tool sets: the Optimization Toolbox and the Genetic Algorithm and Direct Search Toolbox. Both are used to solve optimization problems in typical MATLAB applications, such as engine design simulation or financial risk analysis.
The Parallel Computing Toolbox, originally launched as the Distributed Computing Toolbox in 2004, meets the application programmer halfway to the parallel Promised Land. It extends MATLAB with new constructs such as the parallel for-loop (PARFOR), which distributes loop iterations across multiple cores, multiple processors, or even a cluster. When executed on a single-core machine, PARFOR behaves like an ordinary sequential for-loop, so the resulting code remains portable across a wide range of hardware setups, which not only allows you to run on different platforms, but also lets you share your software with family and friends.
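To give a flavor of the construct, here is a minimal sketch of a PARFOR loop (assuming the Parallel Computing Toolbox is installed; `expensive_computation` is a stand-in for the user's own function, and `matlabpool` was the toolbox's worker-pool command in this era):

```matlab
% Start a pool of local workers to run loop iterations in parallel.
matlabpool open 4

n = 1000;
results = zeros(1, n);
parfor i = 1:n
    % Each iteration is independent, so MATLAB can farm them out
    % to the pool of workers; without a pool (or on a single-core
    % machine) the loop simply runs sequentially.
    results(i) = expensive_computation(i);   % illustrative placeholder
end

matlabpool close
```

The same source file runs unmodified whether a pool of workers is available or not, which is what makes the code portable across hardware setups.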
The hard part is figuring out where to apply the parallel loops in the first place. By incorporating PARFOR-enabled code into the optimization solvers of the toolboxes themselves, the MathWorks engineers have done much of the heavy lifting in advance. Customers who use the optimization solvers will automatically get the parallelized versions when they pick up the next release. To get the speedup, the user just has to define the parallel resources they want to apply at execution time.
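In practice, "defining the parallel resources at execution time" might look something like the following sketch with an Optimization Toolbox solver such as `fmincon` (the `'UseParallel'` option follows the toolbox's documented interface of this period; the objective function and bounds are purely illustrative):

```matlab
% Start local workers, then tell the solver it may use them.
matlabpool open
opts = optimset('UseParallel', 'always');

% Illustrative two-variable constrained minimization; myObjective
% is a placeholder for the user's own objective function.
x0 = [0.5 0.5];
lb = [0 0];
ub = [1 1];
[x, fval] = fmincon(@myObjective, x0, [], [], [], [], lb, ub, [], opts);

matlabpool close
```

Switching `'UseParallel'` back to `'never'` is what it means to explicitly turn off the built-in toolbox parallelization for a session.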
Users can explicitly switch off the built-in toolbox parallelization for a given session if they believe they can outdo the MATLAB programmers by parallelizing their own code. Theoretically, one could even mix parallelized user code with parallelized toolbox solvers, but according to Loren Dean, the director of engineering for MATLAB Products, that can be tricky.
The real goal here is to make code acceleration as transparent as possible without forcing users to sprinkle a lot of PARFORs throughout their programs. “Most of our users haven’t done parallel programming yet,” Dean told me. “This is a new area for them. So being able to fully leverage their multicore system or being able to leverage their cluster, without having to change their code, that’s the real value for them.”