Get a Grip: Intel Neuromorphic Chip Used to Give Robotics Arm a Sense of Touch

By John Russell

July 15, 2020

Moving neuromorphic technology from the laboratory into practice has proven slow-going. This week, National University of Singapore (NUS) researchers moved the needle forward by demonstrating an event-driven, visual-tactile perception system that uses Intel’s Loihi chip to control a robotic arm combining tactile sensing and vision. Notably, they also ran the exercise on a GPU system and reported that the Loihi-based system performed slightly better and at much lower power.

NUS researchers presented their results today at the virtual Robotics: Science and Systems conference being held this week. The combination of tactile sensing (grip) with vision (location) is expected to significantly enhance robotic arm precision and delicacy of grip when handling objects. The use of neuromorphic technology also promises progress toward reducing the power consumption of robots, a central goal for neuromorphic computing.

This novel robotic system developed by NUS researchers comprises an artificial brain system that mimics biological neural networks, which can be run on a power-efficient neuromorphic processor such as Intel’s Loihi chip, and is integrated with artificial skin and vision sensors. Credit: National University of Singapore

“We’re excited by these results. They show that a neuromorphic system is a promising piece of the puzzle for combining multiple sensors to improve robot perception. It’s a step toward building power-efficient and trustworthy robots that can respond quickly and appropriately in unexpected situations,” said Harold Soh, a NUS professor and an author of the paper describing the work (Event-Driven Visual-Tactile Sensing and Learning for Robots).

Intel has long been at the forefront of efforts to commercialize neuromorphic technology, and its Loihi chip and Pohoiki systems are among the most developed platforms. Neuromorphic systems mimic natural systems such as the brain in that they use spiking neural networks (SNNs) to process information instead of the artificial neural networks (ANNs) more commonly used in machine/deep learning.
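
For readers unfamiliar with the distinction, the sketch below shows a single time step of a generic leaky integrate-and-fire (LIF) layer, the textbook spiking-neuron model: neurons integrate incoming spikes, fire only when a threshold is crossed, and then reset. It is an illustrative stand-in only, not Loihi’s actual neuron model or the NUS network.

```python
import numpy as np

def lif_step(v, input_spikes, weights, tau=20.0, threshold=1.0):
    """One discrete time step of a generic leaky integrate-and-fire (LIF) layer.
    Illustrative only -- not Loihi's neuron model or the NUS researchers' network."""
    v = v * np.exp(-1.0 / tau) + weights @ input_spikes   # leak, then integrate input spikes
    out_spikes = (v >= threshold).astype(float)           # fire where the threshold is crossed
    v = np.where(out_spikes > 0, 0.0, v)                  # reset neurons that fired
    return v, out_spikes

# Toy usage: 39 spiking inputs (e.g., taxels) driving 10 LIF neurons for 5 time steps
rng = np.random.default_rng(0)
weights = rng.normal(scale=0.3, size=(10, 39))
v = np.zeros(10)
for _ in range(5):
    in_spikes = (rng.random(39) < 0.1).astype(float)      # sparse, event-like input
    v, out = lif_step(v, in_spikes, weights)
```

Because computation happens only where spikes occur, such networks map naturally onto event-driven hardware, which is the source of the latency and power advantages discussed below.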

Mike Davies, director of Intel’s Neuromorphic Computing Lab, said, “This research from National University of Singapore provides a compelling glimpse to the future of robotics where information is both sensed and processed in an event-driven manner combining multiple modalities. The work adds to a growing body of results showing that neuromorphic computing can deliver significant gains in latency and power consumption once the entire system is re-engineered in an event-based paradigm spanning sensors, data formats, algorithms, and hardware architecture.” Intel also posted an account of the work.

This excerpt from the NUS paper nicely describes the challenge and contribution:

“Many everyday tasks require multiple sensory modalities to perform successfully. For example, consider fetching a carton of soymilk from the fridge: humans use vision to locate the carton and can infer from a simple grasp how much liquid the carton contains. They can then use their sense of sight and touch to lift the object without letting it slip. These actions (and inferences) are performed robustly using a power-efficient neural substrate—compared to the multi-modal deep neural networks used in current artificial systems, human brains require far less energy.

“In this work, we take crucial steps towards efficient visual-tactile perception for robotic systems. We gain inspiration from biological systems, which are asynchronous and event-driven. In contrast to resource-hungry deep learning methods, event-driven perception forms an alternative approach that promises power-efficiency and low-latency—features that are ideal for real-time mobile robots. However, event-driven systems remain under-developed relative to standard synchronous perception methods.”

Multi-modal sensing has long been recognized as an important ingredient for advancing robotics. However, limitations in training and deploying spiking neural networks have impeded the use of neuromorphic chips in real-time sensing functions.

“Event-based sensors have been successfully used in conjunction with deep learning techniques. The binary events are first converted into real-valued tensors, which are processed downstream by deep ANNs (artificial neural networks). This approach generally yields good models (e.g., for motion segmentation, optical flow estimation, and car steering prediction), but at high compute cost,” write the researchers.

“Neuromorphic learning, specifically Spiking Neural Networks (SNNs), provide a competing approach for learning with event data. Similar to event-based sensors, SNNs work directly with discrete spikes and hence, possess similar characteristics, i.e., low latency, high temporal resolution and low power consumption. Historically, SNNs have been hampered by the lack of a good training procedure. Gradient-based methods such as backpropagation were not available because spikes are non-differentiable. Recent developments in effective SNN training, and the nascent availability of neuromorphic hardware (e.g., IBM TrueNorth and Intel Loihi) have renewed interest in neuromorphic learning for various applications, including robotics. SNNs do not yet consistently outperform their deep ANN cousins on pseudo-event image datasets, and the research community is actively exploring better training methods for real event-data.”
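
The non-differentiability the authors mention is typically worked around with a surrogate gradient: the hard spike threshold is kept in the forward pass, while the backward pass substitutes a smooth approximation so backpropagation can proceed. Below is a minimal sketch of that idea in PyTorch; it is a generic illustration, not the paper’s exact training procedure, and the class and function names are made up.

```python
import torch

class SpikeSurrogate(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate gradient in the backward pass.
    Generic illustration of surrogate-gradient SNN training, not the paper's implementation."""

    @staticmethod
    def forward(ctx, membrane_potential, threshold):
        ctx.save_for_backward(membrane_potential)
        ctx.threshold = threshold
        return (membrane_potential >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Fast-sigmoid-style surrogate: d(spike)/dv ~ 1 / (1 + |v - threshold|)^2
        surrogate = 1.0 / (1.0 + torch.abs(v - ctx.threshold)) ** 2
        return grad_output * surrogate, None   # no gradient for the threshold

spike_fn = SpikeSurrogate.apply

# Toy usage: gradients flow through the (otherwise non-differentiable) spike.
v = torch.randn(8, 39, requires_grad=True)    # e.g., membrane potentials driven by 39 taxels
spikes = spike_fn(v, 1.0)
spikes.sum().backward()
print(v.grad.shape)                            # torch.Size([8, 39])
```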

Another obstacle was simply developing adequate tactile sensing devices. “Although there are numerous applications for tactile sensors (e.g., minimal invasive surgery and smart prosthetics), tactile sensing technology lags behind vision. In particular, current tactile sensors remain difficult to scale and integrate with robot platforms. The reasons are twofold: first, many tactile sensors are interfaced via time-divisional multiple access (TDMA), where individual taxels are periodically and sequentially sampled. The serial readout nature of TDMA inherently leads to an increase of readout latency as the number of taxels in the sensor is increased. Second, high spatial localization accuracy is typically achieved by adding more taxels in the sensor; this invariably leads to more wiring, which complicates integration of the skin onto robot end-effectors and surfaces,” according to the paper.
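
The scaling problem the authors describe is easy to see with rough numbers (illustrative only; the per-taxel sampling time below is an assumption, not a figure from the paper): a serially polled array must cycle through every taxel before it can revisit a given one, so worst-case readout latency grows linearly with taxel count, whereas an event-driven sensor reports a spike as soon as any taxel crosses its threshold.

```python
# Illustrative only: worst-case revisit latency of a serially polled (TDMA) tactile array.
# The 50-microsecond per-taxel sampling time is an assumed, made-up figure.
def tdma_scan_latency_ms(num_taxels: int, per_taxel_sample_us: float = 50.0) -> float:
    return num_taxels * per_taxel_sample_us / 1000.0

for n in (39, 100, 1000):
    print(f"{n:5d} taxels -> ~{tdma_scan_latency_ms(n):.1f} ms per full scan")
# An event-driven readout instead transmits a spike when a taxel's signal crosses a
# threshold, so its latency does not grow with the number of taxels.
```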

The researchers developed their own novel “neuro-inspired” tactile sensor (NeuTouch): “The structure of NeuTouch is akin to a human fingertip: it comprises “skin” and “bone”, and has a physical dimension of 37×21×13 mm. This design facilitates integration with anthropomorphic end-effectors (for prosthetics or humanoid robots) and standard multi-finger grippers; in our experiments, we use NeuTouch with a Robotiq 2F-140 gripper. We focused on a fingertip design in this paper, but alternative structures can be developed to suit different applications,” wrote the researchers.

NeuTouch’s tactile sensing is achieved via a layer of electrodes with 39 taxels and a graphene-based piezoresistive thin film. The taxels are elliptically-shaped to resemble the human fingertip’s fast-adapting (FA) mechano-receptors, and are radially-arranged with density varied from high to low, from the center to the periphery of the sensor.

“During typical grasps, NeuTouch (with its convex surface) tends to make initial contact with objects at its central region where the taxel density is the highest. Correspondingly, rich tactile data can be captured in the earlier phase of tactile sensing, which may help accelerate inference (e.g., for early classification). The graphene-based pressure transducer forms an effective tactile sensor, due to its high Young’s modulus, which helps to reduce the transducer’s hysteresis and response time,” report the researchers.

The primary goal, say the researchers, was to determine whether their multi-modal system was effective at detecting differences in objects that were difficult to isolate using a single sensor, and whether the weighted spike-count loss resulted in better early classification performance. “Note that our objective was not to derive the best possible classifier; indeed, we did not include proprioceptive data which would likely have improved results, nor conduct an exhaustive (and computationally expensive) search for the best architecture. Rather, we sought to understand the potential benefits of using both visual and tactile spiking data in a reasonable setup.”
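
The “weighted spike-count loss” mentioned above rewards the correct class’s output neuron for spiking more, and earlier, than the others. The sketch below is a rough, hedged reconstruction of that idea in PyTorch; the function name, the linear time-weighting, and the cross-entropy formulation are illustrative assumptions, not the authors’ exact loss.

```python
import torch

def weighted_spike_count_loss(out_spikes, target, time_weights=None):
    """Illustrative sketch of a weighted spike-count objective.

    out_spikes:   (batch, classes, time) spike trains from the output layer
    target:       (batch,) integer class labels
    time_weights: optional (time,) weights; larger weights on early bins push the
                  network toward confident early classification.
    Not the authors' exact formulation; a generic stand-in for the idea.
    """
    batch, classes, T = out_spikes.shape
    if time_weights is None:
        # Emphasize early time bins (assumption made for illustration).
        time_weights = torch.linspace(1.0, 0.5, T)
    weighted_counts = (out_spikes * time_weights).sum(dim=-1)   # (batch, classes)
    # Treat the weighted counts as logits for a standard cross-entropy objective.
    return torch.nn.functional.cross_entropy(weighted_counts, target)

# Toy usage: 4 samples, 4 container classes, 325 time bins
spikes = torch.rand(4, 4, 325)
labels = torch.tensor([0, 1, 2, 3])
loss = weighted_spike_count_loss(spikes, labels)
```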

They used four different containers: a coffee can, a Pepsi bottle, a cardboard soy milk carton, and a metal tuna can. The robot grasped and lifted each object 15 times, with the task being to classify the object and determine its weight. The multi-modal SNN model achieved the highest accuracy (81 percent), which was about ten percent better than any of the single-modality tests.

In comparing the Loihi neuromorphic chip with the GPU (an Nvidia GeForce RTX 2080), overall performance was broadly similar, but the Loihi-based system used far less power (see the comparison table in the paper). The latest work is a significant step forward.

It’s best to read the full paper, but here is an overview of the experiment, taken from the paper:

  • Robot Motion. The robot would grasp and lift each object class fifteen times, yielding 15 samples per class. Trajectories for each part of the motion were computed using the MoveIt Cartesian Pose Controller. Briefly, the robot gripper was initialized 10cm above each object’s designated grasp point. The end-effector was then moved to the grasp position (2 seconds) and the gripper was closed using the Robotiq grasp controller (4 seconds). The gripper then lifted the object by 5cm (2 seconds) and held it for 0.5 seconds.
  • Data Pre-processing. For both modalities, we selected data from the grasping, lifting and holding phases (corresponding to the 2.0s to 8.5s window in Figure 4), and set a bin duration of 0.02s (325 bins) and a binning threshold value Smin = 1. We used stratified K-folds to create 5 splits; each split contained 240 training and 60 test examples with equal class distribution. [A rough sketch of this binning and splitting step appears after this list.]
  • Classification Models. We compared the SNNs against conventional deep learning, specifically Multi-layer Perceptrons (MLPs) with Gated Recurrent Units (GRUs) [54] and 3D convolutional neural networks (CNN-3D) [55]. We trained each model using (i) the tactile data only, (ii) the visual data only, and (iii) the combined visual-tactile data. Note that the SNN model on the combined data corresponds to the VT-SNN. When training on a single modality, we use Visual or Tactile SNN as appropriate. We implemented all the models using PyTorch.
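
For readers who want to see what the pre-processing step above looks like in practice, here is a rough sketch in Python (using NumPy and scikit-learn). The event-stream format, the 20-class label layout, and the helper name are assumptions made for illustration; only the time window, bin size, threshold, and the 240/60 stratified split come from the description above.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def bin_events(event_times, event_taxels, num_taxels=39,
               t_start=2.0, t_end=8.5, bin_s=0.02, s_min=1):
    """Convert an asynchronous tactile event stream into a (taxels, bins) binary array.
    A bin is set to 1 if its taxel received at least s_min events in that 0.02 s window.
    Rough reconstruction for illustration, not the authors' released code."""
    num_bins = int(round((t_end - t_start) / bin_s))            # 325 bins over the 6.5 s window
    counts = np.zeros((num_taxels, num_bins), dtype=np.int32)
    for t, taxel in zip(event_times, event_taxels):
        if t_start <= t < t_end:
            counts[taxel, int((t - t_start) / bin_s)] += 1
    return (counts >= s_min).astype(np.float32)

# Stratified 5-fold split keeping class proportions equal in every fold.
# 20 classes x 15 grasps = 300 samples is an assumed layout that reproduces the
# 240-train / 60-test split described above.
labels = np.repeat(np.arange(20), 15)
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(np.zeros((len(labels), 1)), labels):
    assert len(train_idx) == 240 and len(test_idx) == 60
```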

Link to paper: http://www.roboticsproceedings.org/rss16/p020.pdf

Link to Intel release: https://newsroom.intel.com/news/singapore-researchers-neuromorphic-computing-robots-feel/#gs.av7uff

Link to video: https://www.youtube.com/watch?time_continue=19&v=tmDjoSIYtsY&feature=emb_logo
