November 15, 2007
Propelled by its flagship MATLAB product, The MathWorks is one of the most important suppliers of software tools for the technical computing community. The MATLAB (MATrix LABoratory) language has become the preeminent interactive programming environment for scientists and engineers. Over a million customers are using it to develop technical computing applications around the world.
Recently, we got the opportunity to speak with The MathWorks co-founder and chief scientist Cleve Moler about the evolution of MATLAB and about how the language grew to include parallel programming support. The history of the MATLAB product spans over two decades.
In the early 1980s, Jack Little defined the business plan for the company that would become The MathWorks, while Moler was the brains behind MATLAB. Moler and Little, along with Steve Bangert, developed the initial commercial product, MATLAB 1.0, which launched in 1984, the same year The MathWorks was founded. Moler modestly refers to Little as the "heart and soul of the company," but it is MATLAB that has become the company's icon.
"I wrote MATLAB years ago, so I wouldn't have to go down to the computer center after dinner to pick up my output," said Moler. "It was important that MATLAB be interactive. That was more important to me than it was for it to be fast. I was worried about my time, not the computer's time."
But in the world of high performance technical computing, fast execution is also a big priority. When you start running applications that take weeks to complete on your PC, interactivity doesn't mean much. These users want to tap parallel computing and still have an interactive workflow for even their largest data sets.
Although MATLAB started out as a single-threaded, shared memory programming language for the PC, even in those early days people were experimenting with it as a platform for parallel computing. In the 1980s, a few engineers began using the INMOS transputer as a math accelerator. Moler said a South African company wrote a set of library functions for the transputer, and these became add-ons to MATLAB. In the late 80s, Moler himself played around with MATLAB on the Intel iPSC hypercube, an early parallel machine. But in these cases, MATLAB served only as the front end; the parallel computations ran on the attached machine, and the language environment itself did not run natively on these architectures.
For the next decade, that front-end arrangement remained a recurring theme and stood in the way of extending MATLAB into parallel computing. In 1995, Moler wrote "Why There Isn't a Parallel MATLAB," in which he described the three major obstacles at the time: the memory model, granularity, and the business situation.
The conflict between MATLAB's global memory model and the distributed model of most parallel systems meant that the large data matrices had to be sent back and forth between the host and the parallel computer. "It took far longer to distribute the data than it did to do the computation," wrote Moler at the time. "Any matrix that would fit into memory on the host was too small to make effective use of the parallel computer itself."
And on the shared memory machines of that era, it would have been difficult to implement the kind of multithreaded parallelism that would have made the design changes in the product worthwhile.
The other major problem was that early parallel computers were not built to be user-friendly. In the 1980s and 1990s, mainstream computing still relied on Moore's Law to drive performance increases in single-core, single-processor systems. The multiprocessor systems of the day were mainframes and supercomputers, and as Moler noted, the people who owned these systems didn't buy any software; they developed it themselves. In most cases, these machines offered no interactivity, which negated one of MATLAB's main attractions.
That was more or less the case up until several years ago, when cluster computers entered the mainstream. These clusters could be purchased by much smaller organizations -- engineering firms, chemistry labs, financial services departments and other groups that could benefit from the collective computational power. In anticipation of this, Moler said, he began working on a parallel version of MATLAB about five years ago.
"We could see that there was a potential business there," said Moler. "And MATLAB had evolved to the point where it wasn't just a matrix laboratory anymore. It was doing a lot of other things, so there was a possibility of doing more coarse-grained parallelism, which would be appropriate for distributed memory."
This was before processors moved to multicore architectures. The first clusters were being built with single-core processors, but the simple multiplication of computing resources was still a revolution. MATLAB's initial implementation of parallel computing defined a way for developers to explicitly specify distributed objects and computations. Two constructs were added: PARFOR (parallel for loops) and distributed arrays (large matrices distributed across separate memories). This is the basis for the Distributed Computing Toolbox, which was launched in 2004. The implementation has MATLAB running on multiple nodes of a cluster, with MPI under the hood.
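As a rough sketch of how these two constructs appear in user code (the sizes are illustrative, and the syntax follows later releases of the toolbox, assuming a pool of workers is already open):

    % PARFOR: iterations must be independent of one another; the toolbox
    % farms them out to MATLAB worker processes on the cluster.
    results = zeros(1, 100);
    parfor i = 1:100
        results(i) = max(abs(eig(rand(50))));   % independent per-iteration work
    end

    % Distributed arrays: one large matrix spread across the memories of
    % the cluster nodes, manipulated with ordinary MATLAB syntax.
    A = distributed.rand(10000);      % 10000-by-10000, pieces live on the workers
    b = distributed.rand(10000, 1);
    x = A \ b;                        % the linear solve runs in parallel, over MPI

The appeal of the design is that both constructs read like ordinary MATLAB; the data distribution and the message passing stay out of the user's source code.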
The popularity of commodity clusters and the appeal of interactive computing are also what attracted Alan Edelman, chief science officer at Interactive Supercomputing (ISC), to use MATLAB as the basis for his company's parallel computing platform, Star-P. But ISC did what The MathWorks didn't want to do -- provide a server-client model for high performance computing, with MATLAB only on the client side. ISC has since expanded to support the R and Python environments as well. But in all cases, the back-end acceleration on the HPC servers is performed by ISC software.
Moler, who has known Edelman since Edelman was a grad student, said people are often confused by the distinction between the parallel computing products offered by the two companies. He says ISC didn't change MATLAB; they just used it as a front end. In The MathWorks' implementation of distributed computing, MATLAB runs natively on the cluster nodes. "That's harder, and that's why it's taken us longer," Moler pointed out. "[ISC] is not a parallel MATLAB, they're a conventional MATLAB attached to a parallel computer."
One of the big challenges for MATLAB is how to live peacefully with the cluster's job scheduler. Most schedulers batch jobs under some sort of priority scheme, and under this model user interactivity is a hit-or-miss proposition. Some systems offer dedicated time if you limit your request for resources -- say, 10 minutes and 8 processors. Otherwise, your job ends up in a batch queue, and depending upon how many other users there are, the results may or may not arrive quickly. A general solution may require more intimate integration between MATLAB and the job schedulers.
When the Distributed Computing Toolbox was envisioned, Moler was focused on cluster computing. But by the time the distributed product was launched in 2004, the dual-core Opteron was only a year away. In the past few years, multicore processors have become the de facto processor architecture across almost every type of computer. But taking advantage of multiple cores requires a different approach, since multicore allows for shared memory multithreading rather than just distributed memory multiprocessing. Expectations are different as well: developers would like multicore to be supported transparently, at a level below the application software layer.
One way MATLAB does this is with implicit multithreading for matrix computation. But, according to Moler, the matrices have to be rather large before the multicore speedup is really effective. The product also uses multithreaded libraries that exploit multicore, such as ATLAS BLAS, and the Intel and AMD math libraries for matrix computations. But at this point, the matrix math components are just a portion of the entire product, Moler said.
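A simple way to see the effect Moler describes (the size below is illustrative): a dense matrix multiply is handed to the multithreaded BLAS and spreads across cores without any change to user code, but only large operands keep the cores busy long enough for the speedup to show.

    n = 4000;              % small n shows little or no multicore benefit
    A = rand(n);
    B = rand(n);
    tic; C = A * B; toc    % the multiply calls a multithreaded BLAS under the hood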
"The question is how to do something more complicated that just matrices; and that's difficult," he admitted. Internally the engineers discussed adding multithreading constructs into the product. Some prototypes were built. But eventually they decided it was too complicated and didn't really work well with the MATLAB model. Ultimately, they just weren't convinced that explicit multithreading was the right way to go.
With accelerators like GPUs, the Cell processor, FPGAs and ClearSpeed boards coming on the scene, MATLAB users are starting to take advantage of the new parallel computing hardware. Thus far, the solutions have all involved library calls that can be invoked from MATLAB code. ClearSpeed and NVIDIA have demonstrated coprocessor acceleration from MATLAB code, and a number of people have experimented with mapping MATLAB code onto FPGAs. The limitation is that the code executed on the coprocessor needs to be substantial enough to make the overhead of sending the calculations off the host processor worthwhile.
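The arithmetic behind that limitation is straightforward: shipping an n-by-n matrix to the board moves O(n^2) data, while a kernel like matrix multiplication performs O(n^3) work, so only large problems amortize the transfer cost. A hypothetical offload call (the wrapper name below is assumed for illustration, not any vendor's actual API) makes the pattern concrete:

    A = rand(2000);
    B = rand(2000);
    C = accel_mtimes(A, B);   % hypothetical MEX wrapper: copies A and B to the
                              % coprocessor, multiplies there, copies the result back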
From Moler's point of view, native MATLAB support for exotic parallel processors will have to wait until a standard programming model and commodity hardware makes this practical. Conceivably this could happen within the next few years with the upcoming Intel Larrabee and AMD Fusion processors. These architectures promise to integrate CPU and GPU architectures and make general-purpose parallel computing available within a client. If one or both of these architectures become widely adopted, it is likely to encourage a native MATLAB implementation.
While Moler didn't reveal any specific plans for these processors, there are a number of other developments on the drawing board. One project is teaching the just-in-time (JIT) compiler to be multicore-aware; currently it generates only single-threaded code. Another area they're looking at is enhancing the way PARFOR loops and distributed arrays work. After a few years of experience with this model, there may be ways to make these constructs work even better. Moler also said they're looking at ways to parallelize the graphics capabilities of the product.
Like many software platform vendors, The MathWorks has come to the conclusion that parallel computing is their future, and the HPC community is a big part of that. That certainly wasn't always the case. Moler said the company didn't even come to the Supercomputing Conference until two years ago, at SC05 in Seattle, the year after the company introduced its Distributed Computing Toolbox. Of the 20 people from the company who attended that event, only Moler had been to the conference before. Last year at SC06 in Tampa, they had more people and a bigger booth, and this year at SC07 in Reno, they had their largest presence ever. With the technical computing community firmly entrenched in HPC, The MathWorks is likely to be a regular fixture at SC for some time to come.