Company Offers a New Way to Parallelize Applications

By Michael Feldman

September 30, 2010

Startups always begin with big ambitions, and Massively Parallel Technologies (MPT) is no exception. This week, the company unveiled “Blue Cheetah,” which is described as a “total application ecosystem” that aims to revolutionize the traditional software development and distribution model, especially for highly parallel codes. The idea is to turbo-charge the ROI model by automating the application development process, bringing software to market faster, and making it widely reusable.

MPT is not exactly a startup, though. The Colorado-based company has been around since 2000, and is on its third CEO. The first two, company founder Scott Smith and HPC luminary John Gustafson, are now on the board of directors. Former Linux Networx exec Bobbi Hazard is the current CEO and will oversee the reboot of MPT as it rolls out the Blue Cheetah suite of tools. Kevin Howard, the CTO, who has been with the company since its inception, has driven the technical innovation behind the products, especially in regard to the parallelization techniques.

During the company’s early years, MPT offered BLAST-based bioinformatics products and services, based on the company’s “HOWARD” technology. Using some of that early work, as well as additional development performed under a DARPA HPCS program contract, the company developed a new parallel communication technology that Hazard claims is “a huge improvement over MPI and PVM.” According to her, more than half of their 400-plus patent filings are based on this area of the technology.

But Blue Cheetah extends far beyond its novel communication scheme. It encompasses the whole software lifecycle, from design, testing, and development to deployment, licensing, and revenue distribution. “Usually you get one set of products to do one thing and another set of products to do another,” says Hazard. “I don’t know of anywhere else where you can get them all together.”

Not all of this is available today, though. What MPT announced this week is a beta version of Blue Cheetah’s software development platform, called Cub. With it, developers can design and develop applications, as well as share code with others in a collaborative fashion. Cub also automatically generates application documentation based on the design. The output from this tool is an executable that can run in a uni-processor environment. If parallelization is desired, that is performed separately further down the toolchain.

The key to Blue Cheetah software development is teasing out an application’s fundamental process elements, which MPT calls kernels, from the control functions. Basically, the idea is to separate the math from the program control logic. During design, the system translates the developer’s kernel specifications directly into executable code. This is not revolutionary in itself; pseudocode-to-code translation has been done with varying degrees of success for decades. In this case, though, there is no intermediary programming language like C or Fortran to deal with. The design itself represents the program source.
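
To make that split concrete, here is a minimal sketch, in Python, of what a kernel in this sense might look like. The name and signature are invented for illustration; MPT has not published its actual kernel specification format.

    # Hypothetical kernel: pure arithmetic, with no branching, looping, or I/O.
    # Illustrative only -- not Blue Cheetah's actual kernel format.
    def axpy_kernel(a, x_i, y_i):
        # One element of the BLAS-style operation a*x + y.
        return a * x_i + y_i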

The control functions encompass the if-then-else and looping constructs that wrap around the kernel invocations. Conveniently, Cub automatically generates all the control functionality itself, again based on the original design. This functional decomposition not only frees the developer from maintaining any of the control software, it also decouples the kernel algorithms from the underlying hardware and from whatever parallelization scheme is applied later.
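
Continuing the sketch above, a generated control function wrapping that kernel might look something like the following. This is only an illustration of the concept; the code Cub actually emits has not been published.

    # Hypothetical generated control function: the branching and looping
    # live here, wrapped around invocations of the pure kernel above.
    def run_axpy(a, x, y):
        if len(x) != len(y):
            raise ValueError("vectors must be the same length")
        result = []
        for x_i, y_i in zip(x, y):   # control logic: iteration over the dataset
            result.append(axpy_kernel(a, x_i, y_i))
        return result

Because the kernel itself carries no control flow, retargeting the program to different hardware only means regenerating this wrapper, never rewriting the math.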

Another important side effect of this decomposition is that the algorithms are easier to share among applications. Code reuse is a core element of MPT’s software monetization scheme, and during the design phase, the system points the developer to existing kernels that may apply to his or her application. Matches are based on keyword searches, input/output parameters, dataset similarities, and so on. Anything from an individual FFT algorithm to a complete application library can be shared across applications, taking with it the licensing agreement associated with the original code. The choice of reusing existing kernels versus designing new ones is up to the developer, though.
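
MPT has not described how its matching is implemented, but a toy sketch conveys the idea of matching on keywords and input/output parameters. The registry layout and field names below are assumptions, not MPT’s actual schema.

    # Toy kernel registry; the schema and entries are invented for illustration.
    registry = [
        {"name": "fft_1d", "keywords": {"fft", "fourier", "signal"},
         "inputs": ("complex[]",), "outputs": ("complex[]",)},
        {"name": "axpy", "keywords": {"blas", "vector", "scale"},
         "inputs": ("float", "float[]", "float[]"), "outputs": ("float[]",)},
    ]

    def find_kernels(keywords, inputs, outputs):
        # Suggest existing kernels whose metadata overlaps the query.
        return [k["name"] for k in registry
                if k["keywords"] & keywords
                and k["inputs"] == inputs and k["outputs"] == outputs]

    # find_kernels({"fft"}, ("complex[]",), ("complex[]",))  ->  ["fft_1d"]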

Once the application design is complete, its licensing is set up. The developer determines the fee structure and sets up the payment scheme for their own code. Blue Cheetah offers both a pay-per-use model and a more traditional licensing model. The intention is to offer applications on-demand via MPT’s own 256-node “cloud” cluster, but the company will also license Blue Cheetah to customers who want to take the whole system in-house.

If kernels are reused in multiple applications, the original developers will get paid for each instance of use. It’s essentially the opposite of the open source licensing model. MPT is hoping to attract both commercial and academic developers, especially those frustrated by the “free software” business model. “A whole ecosystem will be formulated over time, and get larger as more kernels and algorithms become available,” says Hazard.
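
As a back-of-the-envelope illustration of per-use payment, consider the toy accounting below. All of the rates and usage figures are invented; the article’s sources did not disclose MPT’s actual fee structure or revenue split.

    # Toy per-use royalty accounting; every number here is invented.
    usage = {"fft_1d": 1200, "axpy": 45000}            # invocations this period
    rate_per_use = {"fft_1d": 0.002, "axpy": 0.0001}   # USD, set by each author

    payouts = {name: runs * rate_per_use[name] for name, runs in usage.items()}
    print(payouts)   # {'fft_1d': 2.4, 'axpy': 4.5}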

Blue Cheetah includes special capabilities to help developers parallelize their applications across the multicore/multiprocessor/cluster/grid/cloud computing landscape. That certainly covers high performance computing, but all software applications that require a large-scale computing infrastructure (e.g., cloud computing, business analytics, math-intensive applications, etc.) are fair game. Hazard says the company has seen early interest from organizations developing nanotech, biotech and multi-player gaming applications.

Parallelization will be handled by an upcoming Blue Cheetah product called Coalition, which is scheduled for release in the January 2011 timeframe. The tool will take the code developed under Cub and automatically restructure it to maximize parallelism, whether for multicore platforms or clusters. How it actually accomplishes this is not clear, although an auto-parallelization feature that bypasses MPI and promises better performance should pique the interest of HPC developers.
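
Since MPT has not said how Coalition works, no example can show its method. What can be illustrated is why the kernel/control split makes auto-parallelization tractable at all: a pure kernel can be dispatched across workers mechanically. The sketch below uses Python’s standard multiprocessing pool purely as a generic stand-in.

    # Generic illustration (NOT Coalition's method): because axpy_kernel is
    # pure math with no shared state, its invocations can be farmed out to
    # a worker pool without changing the kernel itself.
    from multiprocessing import Pool

    def run_axpy_parallel(a, x, y, workers=4):
        args = [(a, x_i, y_i) for x_i, y_i in zip(x, y)]
        with Pool(workers) as pool:           # run under a __main__ guard
            return pool.starmap(axpy_kernel, args)

In principle the same dispatch could target MPI ranks or cluster nodes instead of local processes; only the generated control layer changes, never the kernels.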

Further down the road, a separate product called Savannah will be available to put a Blue Cheetah app into firmware. This is targeted at users who want maximum performance or are running the types of embedded applications that require the code to execute locally.

Another future Blue Cheetah tool, called Spots, will consume existing source code and perform process-control decomposition so that it can be fed into the Cub platform. Once in the system, the application can go through the rest of the Blue Cheetah toolchain, including auto-parallelization. How this code transformation occurs, and what types of source code are deemed consumable, has not been spelled out, but Hazard implied that even legacy MPI codes could be restructured by Spots. The tool will also check incoming code for malware and plagiarism.

Redefining the software development ecosystem is certainly a lot for one small company to take on. MPT has no venture capital money behind it. But the company has attracted a large number of angel investors to fund the Blue Cheetah development.

They’ve also managed to catch the attention of Gene Amdahl, the computing pioneer whose eponymous law describes how a program’s serial portion limits the speedup attainable through parallelization. He is on MPT’s board of advisors and appears to be thoroughly impressed by the Blue Cheetah products. In a video on MPT’s website, he talks about the importance of parallel computing and the opportunity afforded by the company’s technology. “It will revolutionize the world of computing,” he says.
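
As a reminder of the stakes: Amdahl’s law states that if a fraction p of a program’s work can be parallelized across N processors, the best achievable overall speedup is

    S(N) = 1 / ((1 - p) + p / N)

so the serial fraction (1 - p) caps the speedup at 1/(1 - p) no matter how many processors are thrown at the problem. Shrinking that serial fraction is the constraint any auto-parallelization scheme, Blue Cheetah included, has to contend with.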
