Picking up a soda can may be a simple task for humans, but it is challenging for a robot, which has to locate the object, deduce its shape, determine the right amount of force to use, and grasp it without letting it slip. Most robots today rely on visual processing, which limits their capabilities. To perform more complex manipulation, robots need an exceptional sense of touch and the ability to process sensory information quickly and intelligently, according to researchers at the National University of Singapore, or NUS.
A team of computer scientists and materials engineers at NUS recently demonstrated a new approach to making robots smarter. Last week they announced an artificial sensory system that mimics biological neural networks and can run on a power-efficient neuromorphic processor such as Intel’s Loihi chip. The system integrates artificial skin and vision sensors, enabling robots to draw accurate conclusions about the objects they are grasping from the vision and touch data captured in real time.
“The field of robotic manipulation has made great progress in recent years,” said Benjamin Tee, an assistant professor in the Department of Materials Science and Engineering at NUS. “However, fusing both vision and tactile information to provide a highly precise response in milliseconds remains a technology challenge.”
“Our recent work combines our ultra-fast electronic skins and nervous systems with the latest innovations in vision sensing and AI for robots so that they can become smarter and more intuitive in physical interactions,” said Tee, who co-leads this project with Harold Soh, an assistant professor from the Department of Computer Science at the NUS School of Computing.
The findings of this cross-disciplinary work were presented at the Robotics: Science and Systems conference in July 2020.
Human-like robotic sense of touch
Enabling a human-like sense of touch in robotics could significantly improve current functionality and even lead to new uses, said the NUS team. For example, on the factory floor, robotic arms fitted with electronic skins could easily adapt to different items, using tactile sensing to identify and grip unfamiliar objects with the right amount of pressure to prevent slipping.
In the new robotic system, the NUS team applied an advanced artificial skin known as Asynchronous Coded Electronic Skin (ACES) developed by Tee and his team in 2019. This novel sensor detects touches more than 1,000 times faster than the human sensory nervous system. It can also identify the shape, texture and hardness of objects 10 times faster than the blink of an eye.
“Making an ultra-fast artificial skin sensor solves about half the puzzle of making robots smarter. They also need an artificial brain that can ultimately achieve perception and learning as another critical piece in the puzzle,” added Tee, who is also part of the NUS Institute for Health Innovation & Technology.
Neuromorphic technology at NUS
To advance robotic perception, the NUS team explored neuromorphic technology — an area of computing that emulates the neural structure and operation of the human brain — to process sensory data from the artificial skin. As Tee and Soh are members of the Intel Neuromorphic Research Community (INRC), it was a natural choice to use Intel’s Loihi neuromorphic research chip for their new robotic system.
In their initial experiments, the NUS researchers fitted a robotic hand with the artificial skin and used it to read Braille, passing the tactile data to Loihi via the cloud to convert the micro bumps felt by the hand into semantic meaning. Loihi classified the Braille letters with over 92% accuracy while using 20 times less power than a standard microprocessor.
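To picture the event-driven idea behind this kind of pipeline, the sketch below shows a minimal, hypothetical encoding in Python: each taxel (tactile pixel) emits a timestamped event when its pressure changes by more than a threshold, and the events are binned into a spike raster that a spiking classifier could consume. The function names and thresholds are invented for the example; this is not the ACES protocol or the NUS team’s Loihi code.

```python
# Minimal, hypothetical sketch of event-driven tactile encoding (not the ACES protocol).
import numpy as np

def tactile_to_events(pressure, threshold=0.05):
    """Emit (time_step, taxel_index) events whenever a taxel's pressure
    changes by more than `threshold` between consecutive samples."""
    events = []
    for t in range(1, pressure.shape[0]):
        changed = np.abs(pressure[t] - pressure[t - 1]) > threshold
        events.extend((t, i) for i in np.flatnonzero(changed))
    return events

def events_to_raster(events, num_steps, num_taxels):
    """Bin asynchronous events into a dense spike raster (steps x taxels)."""
    raster = np.zeros((num_steps, num_taxels), dtype=np.uint8)
    for t, i in events:
        raster[t, i] = 1
    return raster

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated pressure readings: 100 time steps, 32 taxels.
    pressure = np.cumsum(rng.normal(0, 0.03, size=(100, 32)), axis=0)
    events = tactile_to_events(pressure)
    raster = events_to_raster(events, num_steps=100, num_taxels=32)
    print(f"{len(events)} events -> raster with {raster.sum()} spikes")
```

Encoding only the changes, rather than streaming every reading, is what makes event-driven skins and neuromorphic chips attractive for low-latency, low-power sensing.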
Soh’s team improved the robot’s perception capabilities by combining both vision and touch data in a spiking neural network. In their experiments, the researchers tasked a robot equipped with both artificial skin and vision sensors to classify various opaque containers containing differing amounts of liquid. They also tested the system’s ability to identify rotational slip, which is important for stable grasping.
In both tests, the spiking neural network that used both vision and touch data was able to classify objects and detect object slippage. The classification was 10% more accurate than a system that used only vision.
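As a rough sketch of how two spike-based modalities can be fused for classification, the toy example below concatenates vision and touch spike rasters and runs them through a single leaky integrate-and-fire (LIF) layer, reading out the class with the most output spikes. It is a generic illustration with invented layer sizes and random weights, not the spiking network described in the NUS paper.

```python
# Toy visuotactile fusion with a leaky integrate-and-fire layer (illustrative only).
import numpy as np

def lif_layer(spikes_in, weights, decay=0.9, v_thresh=1.0):
    """Run a population of LIF neurons over a spike raster (steps x inputs).
    Returns the output spike raster (steps x outputs)."""
    num_steps = spikes_in.shape[0]
    num_out = weights.shape[1]
    v = np.zeros(num_out)
    out = np.zeros((num_steps, num_out), dtype=np.uint8)
    for t in range(num_steps):
        v = decay * v + spikes_in[t] @ weights   # leak, then integrate input current
        fired = v >= v_thresh
        out[t] = fired
        v[fired] = 0.0                            # reset neurons that spiked
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    steps, n_vision, n_touch, n_classes = 200, 64, 32, 4
    vision_spikes = (rng.random((steps, n_vision)) < 0.05).astype(np.uint8)
    touch_spikes = (rng.random((steps, n_touch)) < 0.10).astype(np.uint8)
    # Fusion here is simple concatenation of the two modalities' spike rasters.
    fused = np.concatenate([vision_spikes, touch_spikes], axis=1)
    w = rng.normal(0, 0.2, size=(n_vision + n_touch, n_classes))
    out_spikes = lif_layer(fused, w)
    scores = out_spikes.sum(axis=0)               # spike counts act as class scores
    print("predicted class:", int(scores.argmax()), "scores:", scores)
```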
Moreover, using a technique developed by Soh’s team, the neural networks could classify the sensory data while it was being accumulated, unlike the conventional approach where data is classified after it has been fully gathered.
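The “classify while data accumulates” idea can be pictured with a small sketch: instead of waiting for the full sequence, a prediction is read out from the running spike counts at every time step, so an early answer is available and simply firms up as more evidence arrives. Again, this is a generic early-readout illustration, not the specific technique developed by Soh’s team.

```python
# Illustrative anytime readout: predictions from running spike counts at each step.
import numpy as np

def anytime_predictions(output_spikes):
    """Given an output spike raster (steps x classes), return the argmax of the
    cumulative spike count at every time step, i.e. a prediction that can be
    read out before the sequence is complete."""
    cumulative = np.cumsum(output_spikes, axis=0)
    return cumulative.argmax(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    steps, n_classes, true_class = 100, 4, 2
    # Simulated output spikes: the true class fires slightly more often than the rest.
    rates = np.full(n_classes, 0.05)
    rates[true_class] = 0.12
    spikes = (rng.random((steps, n_classes)) < rates).astype(np.uint8)
    preds = anytime_predictions(spikes)
    for t in (10, 25, 50, 99):
        print(f"step {t:3d}: prediction = {preds[t]}")
```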
In addition, the researchers demonstrated the efficiency of neuromorphic technology: Loihi processed the sensory data 21% faster than a top performing graphics processing unit (GPU), while using more than 45 times less power.
“We’re excited by these results,” said Soh. “They show that a neuromorphic system is a promising piece of the puzzle for combining multiple sensors to improve robot perception. It’s a step towards building power-efficient and trustworthy robots that can respond quickly and appropriately in unexpected situations.”
Intel, NR2PO support NUS research
“This research from the National University of Singapore provides a compelling glimpse to the future of robotics where information is both sensed and processed in an event-driven manner combining multiple modalities,” said Mike Davies, director of Intel’s Neuromorphic Computing Lab. “The work adds to a growing body of results showing that neuromorphic computing can deliver significant gains in latency and power consumption once the entire system is re-engineered in an event-based paradigm spanning sensors, data formats, algorithms, and hardware architecture.”
The research was supported by the National Robotics R&D Programme Office (NR2PO), an initiative intended to nurture the robotics ecosystem in Singapore through funding of research and development. Key considerations for NR2PO’s robotics investments include the potential for applications that benefit the public sector and that help differentiate the nation’s industry.
Moving forward, Tee and Soh plan to further develop their novel robotic system for applications in the logistics and food manufacturing industries, where there is strong demand for robotic automation, especially in the post-COVID-19 era.
Aditya says
In my view, compared with use cases in a warehouse environment, it would be more effective to apply this technology to prosthetic systems. Warehouse use cases are already dominated by mobile and manipulator robots, which don’t require such high-standard sensing for item identification. But in a future where robots perform the majority of item picks, this technology will definitely be revolutionary.