September 22, 2010

Embedded Clouds: A Look Back at HPEC 2010

Craig Lund

High-performance computing (HPC) isn’t restricted to computer rooms. It is also found “embedded” within expensive gadgets. For example, it is at your local hospital inside CAT and MR scanners. It is inspecting new semiconductors. It is inside defense radar and signals-intelligence platforms. In fact, the market for embedded HPC is thought to be about the same size as the market for supercomputers.

Will cloud computing impact these embedded applications? Evidence exists that clouds are indeed having an impact on applications that involve sensors or local data. This is documented in VDC Research’s survey of commercial, industrial, and defense applications titled “Scalable Edge Nodes: Cloud Services for Embedded Applications”. VDC’s results were previewed at last week’s High Performance Embedded Computing (HPEC) workshop. VDC predicted the emergence of a tiered cloud in which some computing is located near the data, rather than all data moving to distant servers.

The theme of last week’s HPEC workshop was: “custom clouds and GPU chips: their impact on DoD applications”.

As the workshop progressed, multiple definitions of the word “cloud” emerged. Several speakers described “cloud” as enabling pattern matching within large databases. At the opposite extreme, speakers called big microprocessors “clouds”. Finally, the closing panel jumped off the technical tracks and defined “cloud” as a new business model.

One cloud application not discussed at HPEC was “utility computing”, this community’s name for services like Amazon’s EC2 that share servers among users. Sharing is difficult when classified data is involved.

Speakers using the pattern-matching definition noted that clouds are what Google and Facebook use to mine their own websites for patterns that attract advertisers. Of course, the government’s interest is different: searching through things like email and telephone intercepts and then automatically pointing analysts at potential terrorists. Multiple HPEC presentations used this very example while describing the underlying middleware and database. On the hardware side, HPEC’s “data intensive” platform talks ranged from an introduction to a new supercomputer at SDSC to a description of a custom 3D-Graph microprocessor from Lincoln Labs.
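This style of cloud pattern matching is typically built on MapReduce-like middleware. As a loose illustration only — the corpus, keywords, and function names below are invented for this sketch and are not from any system presented at HPEC — counting watched keywords across a store of messages looks like this:

```python
from collections import Counter

# Toy corpus standing in for a large message store; in a real cloud
# deployment each shard would live on a different server.
messages = [
    "meeting moved to friday",
    "wire the funds before friday",
    "call me about the shipment",
]

KEYWORDS = {"wire", "funds", "shipment"}

def map_phase(message):
    # Map step: emit a (keyword, 1) pair for each watched word found.
    return [(word, 1) for word in message.split() if word in KEYWORDS]

def reduce_phase(pairs):
    # Reduce step: sum the emitted counts per keyword.
    totals = Counter()
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

pairs = [pair for msg in messages for pair in map_phase(msg)]
print(reduce_phase(pairs))  # {'wire': 1, 'funds': 1, 'shipment': 1}
```

The point of the split is that the map step runs independently on each shard of data, so the work scales out across a cluster before the small per-keyword totals are gathered for the reduce step.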

Yes, marketers are now labeling single chips as clouds. Intel may have started this trend back in December when it announced a 48-core research chip as a “single chip cloud computer”. Now it seems every chip with many cores or many threads calls itself a cloud. At HPEC we saw that spin from Tilera and others.

The workshop ended with a panel of experts answering audience questions relating to the conference theme. This year, Raytheon’s Niraj Srivastava responded to the very first question by describing clouds as a new kind of outsourcing. He used DISA’s RACE procurement, won by HP, as an example. The panel’s conversation never returned to the technical domain. Instead, panel members explored service-level agreements and similar concepts driving the enterprise world.

The complete HPEC theme included a focus on GPU computing, as this community is open to deploying accelerators. Many HPEC papers described science projects using GPU chips. The general impression was that GPUs speed up some algorithms but overall performance suffers from the overhead of copying data both into and out of the GPU. Multiple speakers also complained that fast GPU code is not portable. Finally, compute-enabled GPU chips consume lots of power and thus become difficult “point heat sources” within embedded systems. Nevertheless, speakers mentioned that at least two embedded vendors, Mercury Computer and GE Intelligent Platforms, have added GPU acceleration into their air-cooled product lines.
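The copy-overhead complaint is easy to quantify with a back-of-envelope model. The numbers below (a host-to-device link near 5 GB/s, a 10x kernel speedup) are illustrative assumptions, not figures quoted at HPEC:

```python
def gpu_worthwhile(data_bytes, cpu_seconds, kernel_speedup, link_gb_per_s=5.0):
    """Compare CPU-only time against GPU time including host<->device copies."""
    # Data crosses the link twice: once into the GPU, once back out.
    transfer = 2 * data_bytes / (link_gb_per_s * 1e9)
    gpu_total = transfer + cpu_seconds / kernel_speedup
    return gpu_total < cpu_seconds, gpu_total

# A long-running kernel amortizes the copies over lots of compute...
wins, t = gpu_worthwhile(1e9, cpu_seconds=10.0, kernel_speedup=10.0)
print(wins, t)

# ...but a short kernel on the same data is dominated by the transfers.
wins, t = gpu_worthwhile(1e9, cpu_seconds=0.3, kernel_speedup=10.0)
print(wins, t)
```

In this model, moving 1 GB each way costs a fixed 0.4 seconds, so the accelerator only pays off when the compute saved exceeds that copy tax — which is exactly the complaint the speakers raised about short, data-heavy kernels.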

HPEC stands for High Performance Embedded Computing. The HPEC workshop is in its 14th year and is always held at MIT Lincoln Laboratory near Boston. The two-day event attracts roughly 200 people and is probably best known as the venue where DARPA traditionally announces new high-performance computing programs. However, no programs were announced this year, although the keynote speaker was outgoing DARPA TCTO office director Dr. Peter Lee. Instead, Lee described the recent merger of TCTO with IPTO into a new office named I2O. He hinted that researchers should expect new programs in about six months, probably focused on exploitation and cyber security. Lee is departing DARPA to become the Managing Director of Microsoft Research Redmond.

Do the VDC survey results feel correct after listening to the entire conference? No speaker talked about a tiered cloud. No speaker debated the wisdom of moving data far away for analysis. These topics were left open, probably considered implementation details. The focus at HPEC was higher level: what new capabilities can cloud technology bring that are not currently available to analysts and “warfighters”? Who will make research dollars available to make it happen? It appears those dollars are flowing, and future HPEC conferences will continue to explore the intersection of clouds with embedded applications.

Slides used by workshop speakers will eventually appear on the HPEC workshop’s website; slides from past workshops are already there. The next HPEC is September 20-22, 2011.

About the Author

Craig Lund has a long record of successfully driving high-performance computing into profitable niche markets. That happened at Mercury Computer, where he was CTO and helped pioneer adoption within medical imaging, inspection, and defense. Before Mercury, Craig led successful HPC business thrusts into large-scale decision support and real-time control.

Craig is currently consulting for a collection of defense primes, HPC vendors, and semiconductor firms. He also writes for HPC in the Cloud. You can reach him at clund ATSIGN localk DOTcom.
