Collaborative robots (cobots) are designed to operate around humans in a factory environment and to execute tasks in a human-like fashion. Interestingly, assembly tasks that are simple for humans, such as gripping, grasping, turning and rotating items, require highly sophisticated reinforcement learning (RL) protocols and tactile sensors before cobots can perform them reliably and repeatably. Visual programming routines in the onboard software let operators program a robot through a “train by demonstration” graphical user interface. However, the technology behind these easily programmed and controlled robots is complex.
Conventional robots are able to grasp objects reliably because they can rely on high-precision sensing from fixed cameras with known poses relative to the robot. However, a key benefit of cobots is that they are, by design, easier to set up and program, with the result that sometimes the only vision system available to localize objects is a lower-resolution camera mounted on the robot itself.
To close the resulting perceptual gap, tactile sensors have been developed by several research teams, and research is underway on methods to use them for tactile feedback and object classification. This is especially important for tasks such as insertion (e.g., a shaft into a bushing) or for handling soft or delicate objects such as food.
Tactile sensing is needed for insertion because the object being manipulated may change pose after the robot picks it up, for example as the result of an unsuccessful insertion attempt. Adapting to such pose changes is critical for successful insertion, and tactile feedback can provide the information needed to detect these “in-hand” pose changes. For soft or delicate items, what is needed is a critical grasp: one that is tight enough to hold the object without slipping, but not so tight that it damages the item.
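As an illustration of the “critical grasp” idea, the sketch below regulates grip force inside a band: tighten when slip is sensed, relax toward a minimum hold force otherwise, and never exceed a damage limit. The `gripper` and `tactile` interfaces and the force values are hypothetical, not part of the systems described here.

```python
# Minimal sketch of a "critical grasp" force regulator (hypothetical interface).
# Raise grip force when the tactile array reports slip, back off toward a lower
# bound otherwise, and never exceed the damage limit.

def regulate_grip(gripper, tactile, f_min=2.0, f_max=15.0, step=0.5):
    """Keep grip force within the band [f_min, f_max] newtons (illustrative values)."""
    force = f_min
    while gripper.is_holding():
        if tactile.slip_detected():               # e.g., high-frequency shear transients
            force = min(force + step, f_max)      # tighten, but respect the damage limit
        else:
            force = max(force - 0.1 * step, f_min)  # relax toward the minimum hold force
        gripper.set_force(force)
```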
Training with model-free RL
Recently, several research papers related to tactile-based control have been published by research groups at Mitsubishi Electric Research Laboratories (MERL, Cambridge, MA). In a paper entitled “Tactile-RL for Insertion: Generalization to Objects of Unknown Geometry,” submitted by MERL and MIT researchers to the IEEE International Conference on Robotics and Automation (ICRA), reinforcement learning was used to enable a robot arm, equipped with a parallel-jaw gripper carrying tactile sensing arrays on both fingers, to insert novel objects of different shapes into a corresponding hole with an overall average success rate of 85% in 3-4 tries.
In this work, the change in tactile signals over time is processed into a tactile flow signal that represents the changing contact between the grasped object and the gripper during insertion. Subtle rotations of the object within the gripper fingers, caused by collisions with the target, are detected by the tactile sensing arrays and can be used to provide real-time feedback to correct the object’s position and orientation. Using tactile signals as input, a controller was trained with deep, model-free RL to learn a tactile-based feedback insertion policy. In this manner, the controller learned from its “mistakes” and increased its effectiveness.
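To make the idea concrete, here is a minimal sketch, not the authors’ implementation, of how tactile flow could be approximated as the frame-to-frame difference of the pressure images from each finger and packed into an observation for the learned policy. The array shapes and the `policy` call are assumptions for illustration only.

```python
import numpy as np

def tactile_flow(prev_frame, curr_frame):
    """Approximate tactile 'flow' as the per-cell change between two consecutive
    pressure images from one finger's sensing array (shape: rows x cols)."""
    return curr_frame.astype(np.float32) - prev_frame.astype(np.float32)

def policy_input(flow_left, flow_right):
    """Stack and flatten the flow from both fingers into one observation vector
    for the learned insertion policy."""
    return np.concatenate([flow_left.ravel(), flow_right.ravel()])

# Usage (hypothetical):
# obs = policy_input(tactile_flow(prev_l, curr_l), tactile_flow(prev_r, curr_r))
# action = policy(obs)   # small corrective translation/rotation of the gripper
```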
To further develop the learning regimen of the controller, training was performed on pegs of different shapes (i.e., round and square) in a curriculum-learning fashion, starting with relatively simple tasks and moving to increasingly challenging variations; this led to a significant reduction in the number of training iterations. The system was first trained to place the pegs against a flat wall with one constraint, then into a corner with two constraints, then into a U-shaped channel with three constraints, and finally into a square hole.
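A curriculum of this kind can be organized as a simple loop over progressively harder tasks, warm-starting each stage from the previous one. The stage names below follow the article; the `make_env`, `make_policy` and `train` helpers are hypothetical placeholders, not the paper’s API.

```python
# Curriculum sketch: train the same policy on progressively harder contact tasks.

CURRICULUM = [
    "peg-against-wall",    # one contact constraint
    "peg-in-corner",       # two constraints
    "peg-in-u-channel",    # three constraints
    "peg-in-square-hole",  # full insertion
]

def run_curriculum(make_env, make_policy, train, iters_per_stage=50_000):
    policy = make_policy()
    for stage in CURRICULUM:
        env = make_env(stage)
        # Warm-starting from the previous stage is what reduces total training iterations.
        policy = train(policy, env, iterations=iters_per_stage)
    return policy
```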
The RL process was successful: the resulting controller generalized to novel real-world objects with shapes similar to the training pegs, such as small and large bottles, a phone charger and a paper box.
Classifying objects by ‘feel’
The controller learning worked well on objects with a firm or hard surface. But what about objects with a thin or malleable surface that could be damaged if mishandled by a robot arm, such as a peach or a cellophane-wrapped package?
To address that problem, MERL researchers wrote a second paper that also leverages the patterns obtained by tactile sensing for robotics. The paper, entitled “Interactive Tactile Perception for Classification of Novel Object Instances,” describes research in which signals from tactile arrays developed at MERL and attached to the fingers of a gripper mounted to a robot arm can be used to classify novel object instances based only on tactile “feel.”
The set of test objects used in the experiment included household objects such as an apple, a stuffed animal, a paper cup, a toy football, a tennis ball and a wine glass. Objects were first localized in the workspace using depth sensing, after which the robot executed a sequence of palpations of the object using proposals from a grasp pose detector.
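One way to picture this interactive procedure is the loop sketched below: localize the object with a depth camera, ask a grasp pose detector for a few candidate poses, and gently close the gripper at each one while recording the tactile arrays. All helper objects here are hypothetical stand-ins for the actual system.

```python
# Sketch of the interactive palpation loop described above (hypothetical helpers).

def palpate_object(robot, depth_camera, grasp_detector, n_palpations=5):
    """Localize the object with depth sensing, then gently close the gripper at a
    few proposed grasp poses and record the tactile array readings."""
    cloud = depth_camera.capture_point_cloud()
    poses = grasp_detector.propose(cloud, top_k=n_palpations)
    readings = []
    for pose in poses:
        robot.move_to(pose)
        robot.gripper.close_gently()           # low force, just enough to register contact
        readings.append(robot.tactile.read())  # pressure images from both finger pads
        robot.gripper.open()
    return readings
```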
Related: OmniTact uses micro-cameras for multi-directional tactile sensing
The novel tactile arrays consist of temperature-compensated, super-miniaturized pressure sensor cells laid out in a planar grid and attached to the finger pads of our robotic gripper. The signals from the tactile array are converted into a 3D surface representation, in a manner similar to human tactile perception. This approach conveniently and elegantly represents the tactile information as an implicit subset of the object’s geometry, precisely localized in the robot’s workspace. In essence, the finger pads engage with an object much as a human hand does and, through controller manipulation and learning, begin to understand the qualities of the object well enough to handle it properly.
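As a rough illustration of how such a representation could be built, the sketch below maps cells of the planar pressure grid that register contact into 3D points in the robot’s base frame, using the finger-pad pose known from the arm’s kinematics. The cell pitch, contact threshold and data layout are illustrative assumptions, not the sensor’s actual specifications.

```python
import numpy as np

def tactile_to_points(pressure, T_pad_to_base, cell_pitch=0.002, threshold=0.1):
    """Map cells of a planar pressure grid that register contact into 3D points
    expressed in the robot base frame.

    pressure      : (rows, cols) array of calibrated pressures
    T_pad_to_base : 4x4 homogeneous transform of the finger pad (from kinematics)
    cell_pitch    : spacing between sensor cells in meters (illustrative value)
    """
    rows, cols = np.nonzero(pressure > threshold)
    # Cell centers in the pad's local frame (z = 0 on the pad surface).
    local = np.stack([rows * cell_pitch,
                      cols * cell_pitch,
                      np.zeros_like(rows, dtype=float),
                      np.ones_like(rows, dtype=float)], axis=0)
    return (T_pad_to_base @ local)[:3].T   # N x 3 points in the base frame
```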
The tactile data from the sequence of palpations was encoded in a tactile feature space using the Viewpoint Feature Histogram (VFH), which encodes both the geometry and the viewpoint of a tactile point cloud. Each palpation’s VFH feature was processed using a One-Class Support Vector Machine (OC-SVM) to determine whether the object was similar to or different from previously tested objects. If the object is novel, its tactile model, consisting of all its VFH features, is stored and used for future object detection tasks. An advantage of this approach is its data efficiency, since there is no need to pre-train the system on a set of known objects; it can therefore easily be expanded to incorporate novel objects. Although the tactile information encodes geometry, the OC-SVM is agnostic to object scale. The results show good object separation for object pairs with significantly different tactile feel, such as differing geometry and hardness, with accuracy improving slightly as the number and distribution of palpations increases.
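One possible way to organize this novelty-detection step is sketched below, with scikit-learn’s OneClassSVM standing in for the paper’s implementation: each known object keeps its own one-class model over its palpation features, and an object is declared novel only if no stored model accepts its features. The VFH feature extraction itself (typically computed with a point-cloud library) is assumed to be available and is not shown.

```python
import numpy as np
from sklearn.svm import OneClassSVM

class TactileNoveltyDetector:
    """One-class SVM per known object: a new object's palpation features are
    checked against every stored model; if none accepts them, the object is novel."""

    def __init__(self):
        self.models = {}   # object name -> fitted OneClassSVM

    def is_novel(self, features):
        """features: (n_palpations, feature_dim) array, e.g. one VFH vector per palpation."""
        for ocsvm in self.models.values():
            # predict() returns +1 for inliers; use a majority vote over palpations.
            if np.mean(ocsvm.predict(features) == 1) > 0.5:
                return False   # matches a previously seen object
        return True

    def add_object(self, name, features):
        """Store a tactile model for a newly encountered object."""
        self.models[name] = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(features)
```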
Enhancing manipulation skills
Although formulated as exploratory research, our results show that pressure-based tactile sensing can be successfully employed for novel object detection with only a few palpations and without the need for pre-training.
Enabling object classification and real-time feedback control based on tactile sensing enhances the manipulation capabilities of today’s robots and will be especially important for cobots that are being designed to operate with more human-like capabilities. Looking forward, we expect tactile sensing to play a key role in achieving higher levels of dexterous manipulation in contact-rich settings, including tactile servoing, in-grip pose detection and controlled slipping.
Related: Tactile Telerobot brings human-like dexterity to robots
While these applications mostly reside on the factory floor at this stage, the learnings derived in this setting will be critical for the effective use of home-based robots to perform routine tasks such as putting away the groceries, meal preparation and cooking, and just maybe someday changing a baby’s diapers.
About the Authors
Alan Sullivan, Ph.D., is a Computer Vision Group Manager, Diego Romeres is a Principal Research Scientist, and Radu Corcodel is a Research Scientist with Mitsubishi Electric Research Laboratories (MERL), the US subsidiary of the corporate research and development organization of Mitsubishi Electric Corporation. MERL conducts application-motivated basic research and advanced development in: Physical Modeling & Simulation, Signal Processing, Control, Optimization, and Artificial Intelligence.