Get a Grip: Intel Neuromorphic Chip Used to Give Robotics Arm a Sense of Touch

By John Russell

July 15, 2020

Moving neuromorphic technology from the laboratory into practice has proven slow going. This week, National University of Singapore (NUS) researchers moved the needle forward by demonstrating an event-driven visual-tactile perception system that uses Intel’s Loihi chip to control a robotic arm combining tactile sensing and vision. Notably, they also ran the exercise on a GPU system and reported that the Loihi-based system performed slightly better and at much lower power.

NUS researchers presented their results today at the virtual Robotics: Science and Systems conference being held this week. The combination of tactile sensing (grip) with vision (location) is expected to significantly enhance a robotic arm’s precision and delicacy of grip when handling objects. The use of neuromorphic technology also promises to cut the power consumption of robotics, a central goal of the field.

This novel robotic system developed by NUS researchers comprises an artificial brain system that mimics biological neural networks, which can be run on a power-efficient neuromorphic processor such as Intel’s Loihi chip, and is integrated with artificial skin and vision sensors. Credit: National University of Singapore

“We’re excited by these results. They show that a neuromorphic system is a promising piece of the puzzle for combining multiple sensors to improve robot perception. It’s a step toward building power-efficient and trustworthy robots that can respond quickly and appropriately in unexpected situations,” said Harold Soh, a NUS professor and an author of the paper describing the work (Event-Driven Visual-Tactile Sensing and Learning for Robots).

Intel has long been at the forefront of efforts to commercialize neuromorphic technology, and its Loihi chip and Pohoiki systems are among the most developed platforms. Neuromorphic systems mimic natural systems such as the brain in that they use spiking neural networks (SNNs) to process information instead of the artificial neural networks (ANNs) more commonly used in machine and deep learning.

Mike Davies, director of Intel’s Neuromorphic Computing Lab, said, “This research from National University of Singapore provides a compelling glimpse to the future of robotics where information is both sensed and processed in an event-driven manner combining multiple modalities. The work adds to a growing body of results showing that neuromorphic computing can deliver significant gains in latency and power consumption once the entire system is re-engineered in an event-based paradigm spanning sensors, data formats, algorithms, and hardware architecture.” Intel also posted an account of the work.

This excerpt from the NUS paper nicely describes the challenge and contribution:

“Many everyday tasks require multiple sensory modalities to perform successfully. For example, consider fetching a carton of soymilk from the fridge: humans use vision to locate the carton and can infer from a simple grasp how much liquid the carton contains. They can then use their sense of sight and touch to lift the object without letting it slip. These actions (and inferences) are performed robustly using a power-efficient neural substrate—compared to the multi-modal deep neural networks used in current artificial systems, human brains require far less energy.

“In this work, we take crucial steps towards efficient visual-tactile perception for robotic systems. We gain inspiration from biological systems, which are asynchronous and event-driven. In contrast to resource-hungry deep learning methods, event-driven perception forms an alternative approach that promises power-efficiency and low-latency—features that are ideal for real-time mobile robots. However, event-driven systems remain under-developed relative to standard synchronous perception methods.”

The value of multi-modal sensing has long been recognized as an important component for advancing robotics. However, limitations in the use of spiking neural networks have impeded the use of neuromorphic chips in real-time sensing functions.

“Event-based sensors have been successfully used in conjunction with deep learning techniques. The binary events are first converted into real-valued tensors, which are processed downstream by deep ANNs (artificial neural networks). This approach generally yields good models (e.g., for motion segmentation, optical flow estimation, and car steering prediction), but at high compute cost,” write the researchers.

“Neuromorphic learning, specifically Spiking Neural Networks (SNNs), provides a competing approach for learning with event data. Similar to event-based sensors, SNNs work directly with discrete spikes and hence, possess similar characteristics, i.e., low latency, high temporal resolution and low power consumption. Historically, SNNs have been hampered by the lack of a good training procedure. Gradient-based methods such as backpropagation were not available because spikes are non-differentiable. Recent developments in effective SNN training, and the nascent availability of neuromorphic hardware (e.g., IBM TrueNorth and Intel Loihi) have renewed interest in neuromorphic learning for various applications, including robotics. SNNs do not yet consistently outperform their deep ANN cousins on pseudo-event image datasets, and the research community is actively exploring better training methods for real event-data.”
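
Because a spike is a hard threshold, its derivative is zero almost everywhere, and plain backpropagation has nothing to work with. Methods such as SLAYER (which the researchers use) work around this; one common ingredient is a surrogate gradient, sketched below in PyTorch. This illustrates the general idea only, not the paper’s implementation.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a surrogate gradient.

    Forward: emit a spike (1.0) when the membrane potential crosses the
    threshold. Backward: substitute a smooth "fast sigmoid" derivative for
    the true (zero-almost-everywhere) gradient so backpropagation has a
    usable signal. Illustrative only -- not the paper's SLAYER training.
    """

    @staticmethod
    def forward(ctx, v, threshold=1.0):
        ctx.save_for_backward(v)
        ctx.threshold = threshold
        return (v >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        surrogate = 1.0 / (1.0 + 10.0 * (v - ctx.threshold).abs()) ** 2
        return grad_output * surrogate, None

spike = SpikeFn.apply

# Toy leaky integrate-and-fire layer unrolled over 20 time steps.
inputs = torch.rand(20, 8, requires_grad=True)   # (time, neurons)
v = torch.zeros(8)
decay = 0.9
spikes = []
for t in range(20):
    v = decay * v + inputs[t]
    s = spike(v)
    v = v * (1.0 - s)        # reset membranes that fired
    spikes.append(s)
torch.stack(spikes).sum().backward()             # gradients flow via surrogate
```

The forward pass stays a hard threshold; only the backward pass is smoothed, which is what lets spiking models train with standard optimizers.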

Another obstacle was simply developing adequate tactile sensing devices. “Although there are numerous applications for tactile sensors (e.g., minimal invasive surgery and smart prosthetics), tactile sensing technology lags behind vision. In particular, current tactile sensors remain difficult to scale and integrate with robot platforms. The reasons are twofold: first, many tactile sensors are interfaced via time-division multiple access (TDMA), where individual taxels are periodically and sequentially sampled. The serial readout nature of TDMA inherently leads to an increase of readout latency as the number of taxels in the sensor is increased. Second, high spatial localization accuracy is typically achieved by adding more taxels in the sensor; this invariably leads to more wiring, which complicates integration of the skin onto robot end-effectors and surfaces,” according to the paper.
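
To make that scaling problem concrete, here is a back-of-the-envelope sketch; the per-taxel sampling time is an assumed figure for illustration, not a number from the paper.

```python
# Why serial (TDMA) readout limits scaling: the whole array must be
# sampled taxel by taxel, so refresh time grows linearly with taxel count.

def tdma_full_scan_ms(num_taxels: int, per_taxel_us: float = 100.0) -> float:
    """Time to sample every taxel once, in milliseconds (assumed rate)."""
    return num_taxels * per_taxel_us / 1000.0

for n in (39, 100, 1000, 10000):
    print(f"{n:>6} taxels -> {tdma_full_scan_ms(n):8.1f} ms per full scan")

# An event-driven skin transmits only when a taxel's reading changes, so
# quiescent taxels cost nothing and worst-case latency no longer grows
# with array size -- the property the NUS sensor design targets.
```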

The researchers developed their own novel “neuro-inspired” tactile sensor (NeuTouch): “The structure of NeuTouch is akin to a human fingertip: it comprises “skin” and “bone”, and has a physical dimension of 37×21×13 mm. This design facilitates integration with anthropomorphic end-effectors (for prosthetics or humanoid robots) and standard multi-finger grippers; in our experiments, we use NeuTouch with a Robotiq 2F-140 gripper. We focused on a fingertip design in this paper, but alternative structures can be developed to suit different applications,” wrote the researchers.

NeuTouch’s tactile sensing is achieved via a layer of electrodes with 39 taxels and a graphene-based piezoresistive thin film. The taxels are elliptically shaped to resemble the human fingertip’s fast-adapting (FA) mechanoreceptors, and are radially arranged, with density decreasing from the center of the sensor to its periphery.
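
As a rough illustration of such a layout (the exact NeuTouch geometry is not given here, so the ring radii and per-ring counts below are assumptions that merely total 39 taxels):

```python
import math

# Illustrative only: concentric rings of taxels whose density falls
# from the center to the periphery, as the article describes.
rings = [(0.0, 1), (3.0, 8), (6.0, 13), (9.0, 17)]   # (radius mm, taxels)

taxels = []
for radius, count in rings:
    for k in range(count):
        theta = 2.0 * math.pi * k / count
        taxels.append((radius * math.cos(theta), radius * math.sin(theta)))
assert len(taxels) == 39

# Density check: taxels per mm^2 of each 3 mm-wide annulus falls with
# radius, so a convex fingertip's first contact lands on the densest zone.
for radius, count in rings[1:]:
    area = math.pi * ((radius + 1.5) ** 2 - (radius - 1.5) ** 2)
    print(f"r = {radius:.0f} mm: {count / area:.3f} taxels/mm^2")
```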

“During typical grasps, NeuTouch (with its convex surface) tends to make initial contact with objects at its central region where the taxel density is the highest. Correspondingly, rich tactile data can be captured in the earlier phase of tactile sensing, which may help accelerate inference (e.g., for early classification). The graphene-based pressure transducer forms an effective tactile sensor, due to its high Young’s modulus, which helps to reduce the transducer’s hysteresis and response time,” report the researchers.

The primary goal, say the researchers, was to determine whether their multi-modal system was effective at detecting differences between objects that are difficult to isolate using a single sensor, and whether the weighted spike-count loss resulted in better early-classification performance. “Note that our objective was not to derive the best possible classifier; indeed, we did not include proprioceptive data which would likely have improved results, nor conduct an exhaustive (and computationally expensive) search for the best architecture. Rather, we sought to understand the potential benefits of using both visual and tactile spiking data in a reasonable setup.”
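
The paper defines its weighted spike-count loss precisely; the sketch below shows one plausible construction of the idea: split the output spike trains into time windows, demand high spike counts from the correct class’s neuron (and low counts elsewhere) in each window, and weight the windows so that spiking correctly early is rewarded. All specific values here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def weighted_spike_count_loss(out_spikes, target_class, num_classes,
                              window=65, true_count=30.0, false_count=5.0,
                              weights=None):
    """One plausible weighted spike-count loss (not the paper's exact form).

    out_spikes: (batch, num_classes, time) output spike trains. Time is
    split into windows; in each window the true class's neuron is pushed
    toward a high spike count and the others toward a low one, with
    per-window weights that can emphasize early windows.
    """
    batch = out_spikes.size(0)
    windows = out_spikes.split(window, dim=2)
    if weights is None:
        weights = [1.0] * len(windows)
    desired = torch.full((batch, num_classes), false_count)
    desired[torch.arange(batch), target_class] = true_count
    loss = 0.0
    for w, chunk in zip(weights, windows):
        loss = loss + w * F.mse_loss(chunk.sum(dim=2), desired)
    return loss

# Toy usage: 20 classes, 325 time bins (matching the paper's binning),
# so 65-bin windows give five weighted terms.
out = (torch.rand(4, 20, 325) > 0.9).float().requires_grad_()
loss = weighted_spike_count_loss(out, torch.randint(0, 20, (4,)), 20)
loss.backward()
```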

They used four different containers: a coffee can, a Pepsi bottle, a cardboard soy milk carton, and a metal tuna can. The robot grasped and lifted each object 15 times, classifying the object and determining its weight. The multi-modal SNN model achieved the highest accuracy (81 percent), about ten percent better than any of the single-modality tests.

Comparing the Loihi neuromorphic chip against a GPU (an Nvidia GeForce RTX 2080), the researchers found overall performance broadly similar, but the Loihi-based system used far less power (see the table in the paper). The latest work is a significant step forward.

It’s best to read the full paper, but here is an overview of the experiment drawn from it.

  • Robot Motion. The robot would grasp and lift each object class fifteen times, yielding 15 samples per class. Trajectories for each part of the motion were computed using the MoveIt Cartesian Pose Controller. Briefly, the robot gripper was initialized 10cm above each object’s designated grasp point. The end-effector was then moved to the grasp position (2 seconds) and the gripper was closed using the Robotiq grasp controller (4 seconds). The gripper then lifted the object by 5cm (2 seconds) and held it for 0.5 seconds.
  • Data Pre-processing. For both modalities, we selected data from the grasping, lifting and holding phases (corresponding to the 2.0s to 8.5s window in Figure 4), and set a bin duration of 0.02s (325 bins) and a binning threshold value Smin = 1. We used stratified K-folds to create 5 splits; each split contained 240 training and 60 test examples with equal class distribution. (A sketch of this binning step follows the list.)
  • Classification Models. We compared the SNNs against conventional deep learning, specifically Multi-layer Perceptrons (MLPs) with Gated Recurrent Units (GRUs) [54] and 3D convolutional neural networks (CNN-3D) [55]. We trained each model using (i) the tactile data only, (ii) the visual data only, and (iii) the combined visual-tactile data. Note that the SNN model on the combined data corresponds to the VT-SNN. When training on a single modality, we use Visual or Tactile SNN as appropriate. We implemented all the models using PyTorch. (A minimal sketch of the baseline’s shape also follows the list.)
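
As referenced in the pre-processing bullet above, here is a minimal sketch of that binning and splitting, using only the numbers quoted (bin duration, window, threshold, and split sizes); the function and variable names are ours, not the paper’s.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def bin_events(event_times, event_taxels, num_taxels=39,
               t_start=2.0, t_end=8.5, bin_dur=0.02, s_min=1):
    """Bin an asynchronous event stream into a binary spike array using
    the numbers quoted above: 0.02 s bins over the 2.0-8.5 s window
    (325 bins), a taxel contributing a 1 for any bin in which it emitted
    at least s_min events."""
    num_bins = int(round((t_end - t_start) / bin_dur))        # 325
    grid = np.zeros((num_taxels, num_bins), dtype=np.float32)
    mask = (event_times >= t_start) & (event_times < t_end)
    bins = ((event_times[mask] - t_start) / bin_dur).astype(int)
    np.add.at(grid, (event_taxels[mask], bins), 1)
    return (grid >= s_min).astype(np.float32)

# Toy usage with a synthetic event stream:
times = np.random.uniform(0.0, 10.0, size=5000)
taxel_ids = np.random.randint(0, 39, size=5000)
assert bin_events(times, taxel_ids).shape == (39, 325)

# Stratified 5-fold split matching the quoted 240-train / 60-test geometry;
# 240 + 60 = 300 examples implies 20 classes at 15 grasps each.
labels = np.repeat(np.arange(20), 15)                         # placeholders
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(np.zeros((300, 1)), labels):
    assert len(train_idx) == 240 and len(test_idx) == 60
```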
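
And here is a minimal sketch of the shape of the MLP+GRU style baseline referenced in the models bullet; layer sizes and the input width are illustrative assumptions, not the paper’s architecture.

```python
import torch
import torch.nn as nn

class FusionGRUClassifier(nn.Module):
    """Minimal sketch in the spirit of an MLP+GRU baseline: per-time-step
    features (tactile, visual, or both concatenated) are embedded by an
    MLP, run through a GRU, and the last hidden state is classified. The
    78-feature width assumes two 39-taxel fingertips (an assumption)."""

    def __init__(self, in_dim=78, hidden=64, num_classes=20):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):                  # x: (batch, time, features)
        h, _ = self.gru(self.embed(x))
        return self.head(h[:, -1])         # classify from final time step

# Toy usage: batch of 8 grasps, 325 time bins, tactile-only features.
logits = FusionGRUClassifier()(torch.rand(8, 325, 78))
assert logits.shape == (8, 20)
```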

Link to paper: http://www.roboticsproceedings.org/rss16/p020.pdf

Link to Intel release: https://newsroom.intel.com/news/singapore-researchers-neuromorphic-computing-robots-feel/#gs.av7uff

Link to video: https://www.youtube.com/watch?time_continue=19&v=tmDjoSIYtsY&feature=emb_logo
