August 11, 2011
Indiana-based MNB Technologies is a small company with big aspirations. The soon-to-be-public corporation is developing an expert-system-based development suite designed to greatly simplify the programming of HPC accelerators, in particular FPGAs and GPUs. To that end, the company recently announced the beta availability of its flagship product, hprcARCHITECT.
In essence, hprcARCHITECT replaces the grunt work performed by technical programmers to glue the low-level FPGA and/or GPU code to the higher-level application code. The tool offers a visual interface for application developers to design programs independent of hardware concerns. It then takes the high-level design and applies it against a software repository of kernels, low-level routines, algorithms, and code fragments to build the application.
According to Nick Granny, MNB's chief technology officer, the rationale for the tool is that application development involves distinct workflows, each of which needs to be approached differently. The first workflow is the development of the application architecture itself, which requires intimate knowledge of the engineering or science behind that application and a lot of creativity. That has to be performed by a real live person, in this case a domain expert.
The second workflow has to do with creating the low-level algorithms, like FFTs and Smith-Waterman routines, which require hardware expertise to extract the optimal performance. That's creative too, Granny says, but the algorithms only need to be developed once. After they're written, they can be shared across many applications via a software library or repository.
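To make the "write once, share many times" idea concrete, here is a minimal CPU reference sketch of Smith-Waterman local-alignment scoring, one of the routines named above. This is an illustrative baseline only; the hardware-tuned FPGA or GPU versions the article describes would be far more elaborate.

```python
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
    """Return the best local-alignment score between sequences a and b.

    Plain dynamic-programming version: each cell holds the best score of
    any local alignment ending at that (i, j) pair, floored at zero.
    """
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

Once a routine like this has been validated, the algorithmic work is done; what remains per-platform is the hardware-specific tuning, which is exactly the expertise the repository model is meant to amortize.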
The final workflow is bundling the software pieces together into the application. Granny says they came to the realization that given a pre-existing repository, the bundling workflow could be automated with an intelligent programming design tool. "All of a sudden we had this a-ha moment," he says.
The impetus behind hprcARCHITECT came about a few years ago after the US Air Force solicited a proposal for FPGA algorithms to be used for reconfigurable computing. Granny says they responded not by offering a library, but by throwing out the conventional HPC development process and offering an expert-systems-based framework in its place. MNB got the work and delivered its first prototype to the Air Force at the end of February.
In a nutshell, the methodology of hprcARCHITECT is to capture the knowledge of the application architecture in plain English (or French, German, or whatever). This is achieved through a graphical interface consisting of a virtual whiteboard and sticky notes in which the designer creates a high-level description of the application. This includes the processes and algorithms to be used as well as the rules, facts, and assertions that define their use. With that in hand, the designer then specifies the computational hardware (specific GPUs and/or FPGAs) and the target system (circuit boards, interconnects, nodes and so on) on which the application will run.
The expert system then maps that description to the available algorithms contained in a repository and glues the application together. The repository is more than just a library of algorithms though. It also consists of a software store (known as the Marketplace), where contributors can submit software components -- either open source versions or proprietary ones sold for profit -- which can subsequently be accessed by other users. The repository is also where MNB tools, like hprcARCHITECT, can be purchased. Transactions are done via Google Checkout.
Getting a critical mass of useful algorithms for GPUs and FPGAs is key to MNB's success. In cases where algorithms specified by the application design are not available in the repository, the application designers will be forced to implement these components themselves or contract out for their development.
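Since MNB has not published hprcARCHITECT's internals, the following is purely a hypothetical sketch of the matching step described above: an expert system resolving a designer's (task, hardware) assertions against repository entries, and flagging the gaps that would have to be implemented or contracted out. All names in it are invented for illustration.

```python
# Invented toy repository -- entry names and fields are assumptions,
# not MNB's actual schema.
REPOSITORY = [
    {"name": "fft_radix2_fpga", "task": "fft", "hardware": "fpga"},
    {"name": "fft_gpu_kernel", "task": "fft", "hardware": "gpu"},
    {"name": "smith_waterman_systolic", "task": "smith-waterman", "hardware": "fpga"},
]

def resolve(assertions):
    """Map each (task, hardware) assertion to a repository kernel name,
    or mark it unresolved when no matching component exists."""
    plan = []
    for task, hardware in assertions:
        match = next(
            (k for k in REPOSITORY
             if k["task"] == task and k["hardware"] == hardware),
            None,
        )
        plan.append((task, match["name"] if match else "UNRESOLVED"))
    return plan

# An FFT on GPU resolves; Smith-Waterman on GPU is a gap in this toy repository.
print(resolve([("fft", "gpu"), ("smith-waterman", "gpu")]))
```

A real rule engine would of course weigh far more facts (board, interconnect, performance constraints), but the shape of the problem -- declarative assertions matched against a component catalog -- is the same.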
In general, repository users will pay a token fee for open source code or a buy license for those algorithms contributed in the for-profit model. The cost is determined by the individual contributor, with MNB taking a small commission. Private repositories, developed for use within a specific organization, can also be set up, but don't include the Marketplace feature.
The initial MNB public repository is the result of the Air Force work, but the company is hoping a little cottage industry will develop where developers will submit their work -- free or otherwise -- to expand the breadth of algorithms available. Active contributors get access to the repository for free. If they quit being active, then they'll need to start paying.
The idea of a software repository is certainly not new. A number of HPC vendors offer GPU and FPGA libraries for sale. There are also public libraries available, like netlib.org, a DOE-funded repository of open source routines for science and engineering. High-level development frameworks are available, as well, for both GPUs and FPGAs. What MNB brings to the table is the combination of these components into a single integrated environment.
Although their first customer was in the federal government, the company is aiming the product primarily at HPC users outside the big national labs and R&D centers, in particular, at commercial HPC users who are buying small or modest-sized systems accelerated with GPUs and FPGAs. Typically these will be sub-$50K machines sitting beside someone's desk, but with enough computational horsepower to do some serious number crunching. Pharmaceutical firms using HPC for drug discovery and banks doing portfolio risk analysis are two types of organizations making good use of this new breed of accelerated machines.
In general, these types of organizations don't have the technical computing talent to deal with exotic hardware like GPUs and FPGAs. The learning curve of programming in Verilog or even CUDA is enough to scare many small organizations away from HPC accelerators. MNB is hoping its turnkey development suite will look attractive to such customers.
To get the product off the ground, MNB is using about $1.5 million in combined funding from the Air Force, Navy, and the State of Indiana's 21st Century Research & Technology Fund. The main effort now is being directed at building up the repository. While there are plenty of open source GPU libraries to tap, robust FPGA routines are much harder to come by. "In the open source world of FPGAs, you pretty much get what you pay for," says Granny.
Currently, he is in discussion with a number of FPGA firms that are interested in getting their libraries supported by MNB. Software components implemented for conventional HPC, i.e., CPU-based, are possible too, given the hardware-independent nature of the framework. "If somebody thinks there is a market for it, I'll put it in the repository," says Granny.
Impulse Accelerated Technologies, an FPGA tool provider, is evaluating the MNB suite for integration with its Impulse-C code generator, a model MNB hopes to generalize with other software tool makers. In general though, the company expects to offer hprcARCHITECT, the repository, and their associated toolset via direct sales, but mostly through VARs.
Granny says the technology is currently being evaluated at "one of the largest privately-funded R&D centers in the country." He expects the product to be generally available within the next few months.