Startup Aims to Bring Parallel Applications to the Masses

By Nicole Hemsoth

November 16, 2011

There are a number of young companies at SC11 this week, debuting novel technologies. One of them, Advanced Cluster Systems (ACS), recently launched its first software product, with the rather bold name of Supercomputing Engine Technology (SET). It promises one of the Holy Grails of HPC: to turn sequential applications into parallel ones.

HPCwire got the opportunity to ask ACS founder and CEO Zvi Tannenbaum and ACS CTO Dean Dauger about the product and the underlying technology.

HPCwire: How did Advanced Cluster Systems start out?

Zvi Tannenbaum: ACS was established in November 2004. At that time, ACS’s mission was to expedite scientific processes for a particular finance company using Wolfram Research’s gridMathematica. While working on solutions, I realized how hard this was to do (parallel Mathematica is grid-based, not supercomputing-based) and how expensive the software licenses were, especially considering that the technology did not take advantage of all available resources (i.e., Mathematica utilized only half the available cores). I followed up with Wolfram, who explained that their software was not built to take advantage of multicore processors.
 
In late 2006, I contacted Dr. Dean Dauger, a world-renowned supercomputing expert who developed the easy-to-use, patented Pooch clustering software, and explained my idea of attaching Mathematica kernels to his Pooch clustering solution to give those kernels supercomputing-like all-to-all connectivity. Dr. Dauger finished the project in one month, practically turning Mathematica into a supercomputing application without changing a single line of its source code (Mathematica is proprietary software).

Once we finished the project, Dr. Dauger and I realized that we came upon something much bigger than just parallelizing Mathematica. We took a modular sequential code like Mathematica and provided it with supercomputer-like parallelism. We then continued our development and created SET.
 
ACS recently acquired the patented Pooch clustering technology from Dr. Dauger, who has joined ACS as an owner. ACS today holds the rights to three patents: two covering easy-to-use clustering (the Pooch technologies) and the third being the recently granted SET patent. ACS has other patents still pending, so more are on the way!
 
HPCwire: Dean, can you tell us what the Supercomputing Engine Technology does and how it works?

Dean Dauger: SET applies the parallel computing paradigm of distributed-memory MPI, proven over the last twenty years to achieve efficient parallelism from multicore machines to clusters, clouds, and supercomputers. However, it has three defining differences from MPI.

The first is that it provides a support architecture and framework that covers common parallel computing patterns. Beyond simple message passing, SET “owns” the data to be manipulated across the parallel computer, so it can organize and rearrange that data as needed for the pattern at hand. SET supports parallel data structures, such as partitioning with guard cells and element management, and parallel execution patterns, such as divide-and-conquer array generation, that are common to many parallel codes, including MPI codes.

Because that support has never made it into MPI itself, every MPI programmer has had to rewrite the same parallel data structures and execution patterns again and again. SET makes parallel code writing easier by providing that code once, so users no longer have to write and debug that part themselves.
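As a concrete illustration (our sketch, not ACS’s code or API), here is the kind of guard-cell exchange an MPI programmer would otherwise write by hand for a one-dimensional partition; under SET, this bookkeeping lives inside the engine rather than in each application.

/* Illustrative only: a 1-D guard-cell exchange written directly in MPI.
 * This is the sort of boilerplate SET is meant to supply once. */
#include <mpi.h>

/* Each rank owns n interior cells u[1..n] plus one guard cell at each end
 * (u[0] and u[n+1]) that mirrors the neighboring rank's edge value. */
static void exchange_guard_cells(double *u, int n, MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    MPI_Request req[4];
    /* Receive the neighbors' edge cells into our guard slots... */
    MPI_Irecv(&u[0],     1, MPI_DOUBLE, left,  0, comm, &req[0]);
    MPI_Irecv(&u[n + 1], 1, MPI_DOUBLE, right, 1, comm, &req[1]);
    /* ...and send our own edge cells to those neighbors. */
    MPI_Isend(&u[1],     1, MPI_DOUBLE, left,  1, comm, &req[2]);
    MPI_Isend(&u[n],     1, MPI_DOUBLE, right, 0, comm, &req[3]);
    MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double u[10 + 2];                     /* 10 interior cells plus 2 guards */
    for (int i = 0; i < 12; i++) u[i] = (double)rank;

    exchange_guard_cells(u, 10, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}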

The second defining difference from MPI is that the application is organized into a “Front End” and a “Back End” with distinct purposes. The Front End is the “captain” of the application, directing the entire application and making global decisions, much like the main function or main loop of a code. The Back End does the grunt work, the raw and low-level calculations.

SET is the bridge between the Front End and the Back End, and that division allows SET to organize the work performed by the many Back End codes as appropriate for parallelism. In particular, SET runs many Back End codes simultaneously, so the writer of the Back End code, which by definition simply does its own work on its own chunk of data, does not have to think about parallelism.

The third is that the result is a parallel computing approach that is much easier for application developers to use. As much as possible, the details of parallel computing are handled by SET, whether that means data management or execution management across the cluster. The application-specific pieces are in the Front End, which defines the high-level execution of the parallel application, and the Back End, where the low-level calculations are actually performed.
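A minimal sketch of that division, assuming a trivial iterative computation, might look like the following; the function names are hypothetical, and the sequential run_back_ends() driver merely stands in for SET’s bridge, which would run the Back Ends simultaneously across a cluster.

/* Hedged sketch of the Front End / Back End division; not ACS's API. */
#include <math.h>
#include <stdio.h>

#define N      1024
#define CHUNKS 4

/* Back End: relaxes one chunk of the array and reports its local residual.
 * It never sees the whole problem and never thinks about parallelism. */
static double back_end_relax(double *u, int start, int len)
{
    double local_residual = 0.0;
    for (int i = start; i < start + len; i++) {
        double next = 0.5 * u[i];              /* trivial "calculation" */
        local_residual += fabs(next - u[i]);
        u[i] = next;
    }
    return local_residual;
}

/* Stand-in bridge: runs every Back End and combines their local results.
 * Under SET the chunks would execute simultaneously across the machine. */
static double run_back_ends(double *u)
{
    double total = 0.0;
    for (int c = 0; c < CHUNKS; c++)
        total += back_end_relax(u, c * (N / CHUNKS), N / CHUNKS);
    return total;
}

/* Front End: the "captain" -- the main loop and the global decision. */
int main(void)
{
    double u[N];
    for (int i = 0; i < N; i++) u[i] = 1.0;

    double residual;
    do {
        residual = run_back_ends(u);          /* Back Ends do the grunt work */
    } while (residual > 1e-6);                /* the Front End's global call */

    printf("converged, residual = %g\n", residual);
    return 0;
}

The Back End never sees the whole array or the stopping criterion, and the Front End never touches individual elements; that separation is what lets SET slot in as the bridge between them.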

I personally enjoy MPI, but I’ve encountered many with parallel computing needs that see MPI as too much like assembly language. We designed SET with the scope necessary to cover parallel computing details while enabling the application writer to think sequentially as much as possible.
 
HPCwire: What types of dependencies does SET have on the underlying platform — OS, software stack, hardware, and so on?

Dauger: Fundamentally, SET needs a parallel system with some equivalent of MPI_Irecv, MPI_Isend, and MPI_Test, plus the usual metrics of the system (rank and size). This makes it possible to port SET to shared-memory as well as standard MPI systems.
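That dependency list is small enough to capture in a single interface. The sketch below is our illustration of what such a minimal transport layer might look like, not ACS’s actual interface: an MPI build would fill the function pointers with thin wrappers over MPI_Isend, MPI_Irecv, and MPI_Test, while a shared-memory build could implement them with in-memory queues instead.

/* Hypothetical sketch of a minimal transport interface for a SET-like
 * engine: non-blocking send and receive, a completion test, and the
 * machine's rank and size. Not ACS's actual interface. */
#include <stddef.h>

typedef struct set_request set_request;   /* opaque handle, like MPI_Request */

typedef struct {
    int rank;    /* this process's index, as from MPI_Comm_rank */
    int size;    /* total process count, as from MPI_Comm_size  */

    /* Non-blocking point-to-point operations, mirroring
     * MPI_Isend / MPI_Irecv / MPI_Test. */
    int (*isend)(const void *buf, size_t bytes, int dest, int tag,
                 set_request **req);
    int (*irecv)(void *buf, size_t bytes, int src, int tag,
                 set_request **req);
    int (*test)(set_request *req, int *done);
} set_transport;

Anything that can supply those pieces, whether a cluster interconnect or a set of cores sharing memory, could in principle back such a layer.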

At present, the implementation of SET runs on all the major Unix-compatible platforms. We’ve run it on OS X and 64-bit Linux clusters as well as larger systems like SGI. As ACS’s resources allow, we will expand SET to other operating systems.
 
HPCwire: How would a sequential program need to be modified so that it could tap into the SET technology?

Dauger: The application would be organized into a “Front End” and “Back End”: The Front End is the “captain” of the application, directing the entire application and making global decisions, like the main loop I mentioned before. The Front End is also where the user-interface, if any, resides.

The Back End does the grunt work, the raw and low-level calculations. Any modern, modular code should be relatively easy to factor this way, since it is an excellent and well-accepted approach for reusing code between projects.
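As a before-and-after illustration with hypothetical names (not ACS’s code), factoring a simple sequential routine into a Back End that operates on an arbitrary chunk looks like this; the Front End, not shown, owns the whole array, decides how it is split, and hands the chunks to the bridge.

#include <stddef.h>

/* Before: a sequential routine that walks the entire array itself. */
void scale_all(double *a, size_t n, double factor)
{
    for (size_t i = 0; i < n; i++)
        a[i] *= factor;
}

/* After: the same arithmetic as a Back End routine over one chunk. It is
 * oblivious to ranks, messages, and every other chunk; the driver decides
 * how many of these run at once and on which pieces of the data. */
void back_end_scale(double *chunk, size_t len, double factor)
{
    for (size_t i = 0; i < len; i++)
        chunk[i] *= factor;
}

For operations whose chunks interact at their edges, the guard-cell pattern sketched earlier is exactly the sort of detail SET would manage on the Back End’s behalf.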

HPCwire: How long would this typically take?

Dauger: Factoring the application into the Front End and Back End should be straightforward for a modular or otherwise modern, well-organized application. After that, one adds “glue code” between SET and the Front End and between SET and the Back End, which typically consists of wrapper calls or minor replacements. Then there’s testing and optimization. Most projects using conventional approaches allot a year to accomplish this. With SET it can take under a month.

HPCwire: How well does the technology scale in the multicore, multiprocessor, and multi-server dimensions?

Dauger: Since the underlying paradigm is that of distributed-memory MPI, it scales almost as well as distributed-memory MPI on all parallel computing implementations. Where SET might do poorly is also where other parallel approaches do poorly, such as when communication time far exceeds computation time. The purpose of SET is to make it much easier for the software writer to quickly produce an application that can achieve scale.

HPCwire: Compared to a hand-coded MPI application, how well does SET perform?

Dauger: The SET approach has produced codes that scale almost as well as traditional hand-coded MPI applications. In some cases the results are indistinguishable from what is accomplished via MPI.
 
HPCwire: Has the SET technology been applied to any real-world codes?

Dauger: The first major proof of concept is SET’s application to Wolfram Research’s Mathematica. Mathematica is a very large application, millions of lines of code. The usual approaches would have taken a year, probably longer. Applying SET to Mathematica took only one man-month, and yet it scales far better than any other solution using Mathematica. That is now a product named Supercomputing Engine for Mathematica. Notably, because Mathematica is well modularized, we didn’t even need to look at Mathematica’s source code.
 
HPCwire: Zvi, what’s the company’s business model?

Tannenbaum: The primary sales strategy for Advanced Cluster Systems is to execute a reseller channel program, creating independent contractor relationships with value-added resellers (VARs) and solution providers. This will enable ACS to deploy a sales team with varied industry expertise, existing relationships with prospective customers, and worldwide sales coverage, without the fixed expense of hiring a direct sales force beyond a director of sales/reseller channel manager.

HPCwire: What’s the next step for Advanced Cluster Systems?

Tannenbaum: ACS is a very small company, and the next step is to cultivate its technology while remaining focused on our plans and avoiding the pitfalls that can stop a tech company’s growth in its tracks. In addition to continuing development and executing our plans for the reseller channel program, we are working with major hardware companies to gain exposure for SET. We are currently working with an American VAR to distribute an upcoming SET-based enhancement to a scientific software product, and we are talking with cloud service providers.

We have also established our presence in Europe with Daresbury Labs and a British solution provider on site. We realize that a good reseller channel program must have a good marketing program to support it, and we are pursuing that course as well. We are also revising our business plan to prepare for external funding so we can achieve our goals and continue controlled, consistent growth.
