KNUPATH Hermosa-based Commercial Boards Expected in Q1 2017

By John Russell

December 15, 2016

Last June, tech start-up KnuEdge emerged from stealth mode to begin spreading the word about its new processor and fabric technology, which has been roughly a decade in the making. It’s nice to have patient capital, a rare commodity for startups these days. The company contends its KNUPATH Hermosa processor, with 256 DSP cores, and its Lambda fabric will bring performance, scalability, energy, and programmability advantages over CPUs, GPUs, and FPGAs to a wide swath of machine learning applications. The first commercial boards – code-named Mavericks – are expected around March 2017.

Founded in the 2005 timeframe by Daniel Goldin, the longtime NASA administrator, KnuEdge has raised roughly $100 million, no doubt stemming from investor confidence in Goldin’s extensive history of technology creation and delivery. Goldin and company believe their investors’ patience is about to start paying off. KnuEdge has two business units: KNUPATH, focused on hardware accelerators based on Hermosa and Lambda technology, and KnuVerse, focused on voice and face recognition systems. The latter, said Steve Cumings, CMO of KnuEdge, has customers in the government sector. Company revenues are somewhat north of $20 million so far.

Broadly, KnuEdge’s view is that a highly scalable processor in a single socket is handicapped in addressing growing machine learning and large-scale computing challenges. In contrast, the company’s Lambda Fabric enables a large number of “KNUPATH Hermosa processors to be interconnected in low latency, high throughput mesh for massively parallel processing which is well suited for application needs that will drive the compute engines of the future.”

This isn’t exactly a new idea. The Hermosa chip and Lambda technology will enter the market amid a gush of machine learning technologies, all striving to advance data-driven science and enterprise data analytics. Indeed, the emergence of heterogeneous computing architectures relying on a variety of accelerator engines is a key feature of today’s computing landscape. Given Goldin’s remarkable achievements at NASA, it should be interesting to watch KnuEdge’s progress.

Early developer boards with two Hermosa chips have been available for some time. Volume sales of individual chips are planned to begin in January, followed by the Mavericks offering, a PCIe board with four Hermosa chips, toward the end of the quarter.

Presented as a “neural computing” approach, the KNUPATH architecture actually attempts to mimic nervous system communication more than brain-inspired spiky neuron ‘inference logic’ (discussed further below).

Patrick Patla, senior vice president and general manager of KNUPATH and a former AMD executive, said, “What’s unique about Hermosa’s 256 DSP cores is that they are hooked together at a central part of the processor with a router that has 16 ports. Using the Lambda fabric, it’s possible, at least theoretically, to scale to 500,000 Hermosa processors.

“We are a data flow machine. So you push data through the system and can have the calculation and different algorithms change on the fly. We are different than a GPU accelerator in that they use a SIMD architecture. We use multiple programs, multiple data, so on our 256 cores we could have 256 separate algorithms running. You would push data through those algorithms and then you have hits on the data at different hit rates based on the algorithms and you can tune and resend algorithms to those DSPs through packets,” explained Patla.

“Basically the packets that we send through the Lambda network are what allows the programming of the DSP, so packets deliver the program, the algorithm, and then bring the payload, and push the data through it. Not only are you getting all the data and the operating instructions with each packet, but each core also knows the next destination for that information so it’s extremely efficient.” One result is very low latency at various systems levels (see diagram below).
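KnuEdge hasn’t published the Lambda packet format, but the model Patla describes maps naturally onto a self-routing packet that bundles an algorithm identifier, a data payload, and a next-hop address. Here is a minimal C++ sketch of that idea; every name in it (LambdaPacket, algorithm_id, next_core) is hypothetical, not the actual SDK:

```cpp
#include <algorithm>
#include <cstdint>
#include <functional>
#include <iostream>
#include <map>
#include <vector>

// Hypothetical sketch of the packet model Patla describes: each packet
// carries the algorithm to run, the data to run it on, and the next
// destination, so a receiving core needs no external scheduler.
struct LambdaPacket {
    uint32_t algorithm_id;        // which program the target core should run
    std::vector<float> payload;   // the data pushed through that program
    uint16_t next_core;           // where the result travels next
};

// Stand-in for a tDSP core's program table.
using Algorithm = std::function<float(const std::vector<float>&)>;

int main() {
    std::map<uint32_t, Algorithm> programs = {
        {0, [](const std::vector<float>& v) {            // e.g. a sum kernel
                float s = 0.0f; for (float x : v) s += x; return s; }},
        {1, [](const std::vector<float>& v) {            // e.g. a max kernel
                float m = v[0]; for (float x : v) m = std::max(m, x); return m; }},
    };

    // One packet delivers the program choice, the payload, and the route.
    LambdaPacket p{0, {1.0f, 2.0f, 3.0f}, 7};
    float result = programs[p.algorithm_id](p.payload);
    std::cout << "core result " << result
              << " forwarded to core " << p.next_core << "\n";
}
```

The point of the structure is exactly Patla’s claim: the packet itself says what to run, on what data, and where the result goes next.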

Patla also contrasted Hermosa’s ease of use with emerging brain-inspired neuromorphic chips such as IBM’s TrueNorth, which uses “spiking neuron” architecture.

“Spiky algorithms are notoriously difficult to program. Commonly they are trained on other networks first and then moved onto the neuromorphic chip so the actual software side of that is different,” he said.

As noted earlier the Hermosa-Lambda architecture emulates neuronal connectivity more than brain processing. “If you look at the different neuron-based approaches, our inspiration really gives you lots of little engines – that’s the background of the DSP cores, what we affectionately sometimes call tDSPs or tiny DSPs,” said Patla. Reliance on familiar DSP architecture eases programming.

“Our tools sit on a C/C++ library set on top of LLVM (compiler). And everybody is familiar with OpenCL as well as OpenMPI, which is very comfortable in our architecture,” said Patla. The Hermosa/Lambda architecture also supports NUMA (non-uniform memory access), and each processor has 72MB of memory directly on it. “Much of the advantage is the dataflow, but also all the advantages of common programming techniques for anybody that has worked on OpenMPI. Many of the other [neuromorphic] architectures require a different set of tools.”
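The KNUPATH SDK itself isn’t publicly documented, but Patla’s point about MPI familiarity is easy to picture with plain MPI: rank 0 streams data to worker ranks, and each worker applies its own routine – the multiple-program, multiple-data style he describes. A generic sketch, using standard MPI calls only and nothing KNUPATH-specific:

```cpp
// Generic MPI sketch (not the KNUPATH SDK): rank 0 pushes data to the
// workers, and each worker rank runs its own "algorithm" on what arrives,
// mirroring the MPMD dataflow style described above.
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        // "Push data through the system": send a value to every worker.
        for (int dst = 1; dst < size; ++dst) {
            double x = 10.0 * dst;
            MPI_Send(&x, 1, MPI_DOUBLE, dst, /*tag=*/0, MPI_COMM_WORLD);
        }
    } else {
        double x;
        MPI_Recv(&x, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        // Different ranks run different routines on the same data stream.
        double y = (rank % 2 == 0) ? x * x : x + 1.0;
        std::printf("rank %d computed %f\n", rank, y);
    }
    MPI_Finalize();
}
```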

Hermosa Development Board

KnuEdge has had a software developer kit out for “quite some time” and it is already in the hands of many developers, according to Patla.

It all sounds great. In April KnuEdge will hold a Hermosa developers’ conference at UCSD, as well as a “heterogeneous neural network conference” in partnership with UCSD, for the development of next-generation algorithms that can take advantage of new architectures such as Hermosa. Patla said performance benchmarks for the chip will be forthcoming with the release of the commercial product; the developer conference would seem a natural venue, but he wouldn’t specify a date beyond the first half of the year.

“Right now, as you would imagine, we are in the labs with our SDKs and final verification of those commercial systems as we are tuning and bringing all of our code to the processors. In the future we’ll show configurations of 4, 8, 12, and 16 Hermosas together to show the scalability of the Lambda fabric. When Steve talked about mimicking the nervous system, it really is about our connectivity and the fact that when you add more Hermosas to the network, we continue to scale, because with every socket you are adding more memory as well. Each processor has 72MB of on-chip memory, which is sufficient for the programming of our kinds of algorithms and the workload we are trying to tackle.”
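Taking the per-chip figures in the article at face value (256 cores, 72MB of on-chip memory, 35 watts) and assuming resources simply aggregate linearly across the fabric, the configurations Patla mentions pencil out as follows:

```cpp
#include <cstdio>

// Back-of-envelope scaling for the configurations Patla mentions,
// assuming linear aggregation across the Lambda fabric (fabric
// overhead ignored). Per-chip figures are from the article.
int main() {
    const int configs[] = {4, 8, 12, 16};
    for (int n : configs) {
        std::printf("%2d Hermosas: %5d cores, %4d MB on-chip, %3d W\n",
                    n, n * 256, n * 72, n * 35);
    }
}
```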

Currently the chip is being fabbed by GLOBALFOUNDRIES on a 32nm process. “It’s a well behaved chip where these 256 cores and fabric and everything lives in a 35-watt part,” said Patla.

The KNUPATH folks believe Hermosa has the potential to serve a wide variety of machine learning applications in heterogeneous computing environments, as well as an opportunity to replace existing approaches to those applications.

“We have a demo on the website that compares us to the most current NVIDIA card and we have a 2.5x performance advantage. It is very interesting that a video card isn’t very good at video compression, which we are good at because of the parallelism of communication we handle across the memory. So that’s one of the spaces we’ll be aiming at. And of course it will also find its way into many of the single-board computer spaces, because at 35 watts, with the ability to do signal processing and such fine-grained computing, we actually expect it to replace many FPGAs in a lot of environments.”

Patla argues Hermosa/Lambda’s flexibility is a major benefit and door opener – one could divvy the chip up and have a multipurpose SoC instead of dedicating it to just one task. He used a video analysis application as an example of flexibility and reprogrammability.

“You can reprogram a core by just delivering a new packet. For example, if you were doing video analysis and were searching within videos, you could be looking for ball caps. You could have all the different algorithms looking at ball caps, and you could all of a sudden reprogram and divide the chip, with 25 percent of the chip looking for red ball caps and 25 percent looking for blue caps. You could flip to four different algorithms in nanoseconds. Then when you have high hit rates and you realize which one you are really looking for, you could say, OK, now all we care about is green ball caps, and that algorithm would propagate across all the cores and you’d be able to take your throughput up. It’s very fast, very flexible,” he said.
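In code terms, the ball-cap example amounts to rewriting a core-to-algorithm map by delivering new program packets. A hypothetical sketch – the reprogram helper and the enum are illustrative, not the real API:

```cpp
#include <array>
#include <cstdio>

// Hypothetical illustration of Patla's ball-cap example: each of the
// 256 cores holds an algorithm id, and "reprogramming" is just
// delivering packets that overwrite a range of that map.
enum Algo { RED_CAPS, BLUE_CAPS, GREEN_CAPS, OTHER };

std::array<Algo, 256> cores;

// Stand-in for sending program packets to a contiguous block of cores.
void reprogram(int first, int last, Algo a) {
    for (int c = first; c < last; ++c) cores[c] = a;
}

int main() {
    // Split the chip: 25% on red caps, 25% on blue, the rest elsewhere.
    reprogram(0, 64, RED_CAPS);
    reprogram(64, 128, BLUE_CAPS);
    reprogram(128, 256, OTHER);

    // High hit rate on green? Propagate one algorithm to every core.
    reprogram(0, 256, GREEN_CAPS);
    std::printf("all %zu cores now running GREEN_CAPS\n", cores.size());
}
```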

At SC16, the KNUPATH team was busily evangelizing. Patla said they talked to a number of cloud providers as well as national labs that expressed interest to the point that he is expecting some new workloads to emerge.

There’s still much to do. Patla ticked off desirable milestones for 2017 – getting out of the lab, showcasing a couple of commercial customers and workloads, integrating the many machine learning frameworks, making sure Hermosa-based systems get into the cloud somewhere for development and production purposes, to name but a few.
