Towards Tactile Intelligence for Improving Advanced Robotic Manipulation

Over the past decade, developments in the global economy have driven major changes in the way industries produce their goods. Today, consumer demand is increasingly varied, requiring robots and other manufacturing systems to deal with a larger variety of products, which also tend to be produced in smaller quantities than before. Consequently, instead of performing a small set of tasks in a repetitive manner as they tended to do in the past, today's robots are asked to execute increasingly complex and varied operations. Therefore, in order to adapt today's robots to modern manufacturing environments, current perception technologies and control algorithms must evolve significantly. From this perspective, tactile perception and dexterity have been identified as critical elements for the future of robotics (Georgia Institute of Technology, 2013; University of California San Diego, 2016). They are currently a bottleneck that hinders robots' ability to manipulate and interact with their environment (University of California San Diego, 2016), and advances are necessary to meet current and future automation needs.

Robots in the agile manufacturing paradigm

Different factors explain why consumer demand is changing more quickly today than it was fifty years ago. One factor is the globalization of markets, which gives consumers easy access to a wide variety of products from around the world and allows them to discover new products more easily. The ease and speed with which information can now be communicated from a source to a destination located almost anywhere in the world have also contributed to this phenomenon.

In this agile manufacturing paradigm, traditional automation is not suited to smaller-quantity, higher-variety needs, since changes to products or processes generally require costly upgrades and significant downtime. Instead, the ideal manufacturing system is flexible enough to accommodate a large number of configurations. For robots to comply with this new context, where mass customization is key, both their perception and control algorithms must be improved (University of California San Diego, 2016). In addition, the recent growth in the adoption of collaborative robots in factories adds its own challenges, since a growing number of robots now operate in the same workspace as humans. During the past few decades, research in robotic perception has mostly focused on providing robots with accurate and reliable artificial vision. Progress in this field has enabled robots to automate a large number of industrial processes. Yet despite significant improvements in artificial vision, many tasks remain difficult or impossible to automate with the perception methods currently employed by industry.

Dexterous manipulation: bridging the gap between humans and robots

The sense of touch helps humans achieve many tasks. Indeed, the somatosensory system (i.e., touch), with its four main types of mechanoreceptors, provides sensations that are essential to the execution of a large number of tasks that we often take for granted. People who suffer from hypoesthesia (a reduced sense of touch) have a very hard time performing their everyday activities (Johansson & Flanagan, 2009), which highlights how important this sense really is. These same tasks that are generally easy for humans are frequently difficult to achieve from a robotics perspective. For example, the simple acts of lacing a pair of shoes, taking an egg out of its box, or folding a stack of laundry are still quite challenging for robots.

On assembly lines, this gap between robotic and human dexterity severely limits the progression of automation. One of the major limitations affecting industry is the difficulty robots have in carrying out tasks that require a high level of dexterity, such as assemblies that are highly constrained in force and torque, and certain insertion tasks (Roberge & Duchaine, 2017). This explains why the technical challenge at the 2018 World Robot Summit (WRS) was a complex assembly task that included several insertion steps and aimed at assembling a complete belt drive unit (WRS, 2018): a relatively easy task for human workers, but a very difficult one for robots. Even the simple task of plugging a USB key into a computer port often requires advanced manipulation algorithms (Li et al., 2014), whereas humans are able to do this blindfolded, using solely their sense of touch.

E-commerce giants also suffer from this lack of advanced dexterous manipulation skills in robots, which limits the level of automation in their warehouses. A concrete example of this limitation can be found in the material handling chains of e-commerce companies such as Amazon, Alibaba, eBay and many others. These companies have to deal with a wide variety of items, and even today, the automation of the packaging step, as well as the handling of items purchased by customers, remains problematic. These actions are therefore still carried out manually to a large extent, even though robots are already widely deployed inside warehouses (Ackerman, 2019; Ibbotson, 2018). For example, Walmart uses Alphabot robots (Lert, 2020) inside highly automated warehouses to move items purchased by customers, but grabbing the items, aggregating them into orders and placing them into boxes are still performed manually by employees (Ibbotson, 2018). One particular problem with having robots do this task is implementing a control strategy that allows the stable grasping and handling of a given object without damaging it. These difficulties have prompted a significant proportion of these companies to invest large sums of money in research dedicated to developing technological solutions to these problems. One of the best-known examples in the field is Amazon, which has catalyzed research in advanced and dexterous manipulation through the popular Amazon Picking Challenge (Lawrence, 2020). Clearly, industry is motivated to solve the problems brought about by robots' lack of a sense of touch.

Tactile sensing transduction techniques

There are many ways to build a tactile sensor, but they all share one trait: they try to replicate, at different levels, at least one function of a biological tactile sensory system. In humans, the functions of the biological tactile sensory system are carried out by four distinct types of mechanoreceptors: Merkel's disks, Meissner's corpuscles, Pacinian corpuscles, and Ruffini endings. What we consider our sense of touch is the combined action of these mechanoreceptors, each of which is responsible for a distinct perceptual function (Johnson, 2001). Together, they allow humans to feel light pressure, deep pressure, vibration, shear, torsion and temperature (Iwamura, 2009). This section presents the transduction technologies that are commonly mentioned in the literature and used to build sensors that reproduce some of these sensing capabilities. Some of the best-known sensor implementations for each transduction method are presented and discussed along the way.

Optical and vision-based sensors

Optical tactile sensors generally consist of light-emitting diodes (LEDs), a transduction medium which also often acts as the contact interface, and at least one photodetector, such as a charge-coupled device (CCD) camera or a photodiode. Depending on the implementation, a geometrical change in the transduction medium will change the way light is transmitted, for example by modulating the reflection intensity, altering the received light spectrum, or changing its wavelength, phase, or polarization (Xie et al., 2013). Some optical tactile sensors are based on frustrated total internal reflection (FTIR) (Lavatelli et al., 2019). Another related technique is based on the visual tracking of known features embedded in the sensor's material while external forces and moments are applied. These sensors have some advantages over sensors based on other transduction technologies: their spatial resolution is generally high, and they are unaffected by electromagnetic noise. However, optical tactile sensors tend to be bulky, to consume a significant amount of power, and to require more computing power to process their data (Kappassov et al., 2015).
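To make the feature-tracking idea concrete, the following Python sketch (a generic illustration, not the implementation of any particular sensor) tracks markers embedded in an elastic skin between a reference frame and the current camera frame using OpenCV's pyramidal Lucas-Kanade optical flow; the average marker displacement then serves as a rough proxy for the shear deformation of the contact interface. The parameter values are illustrative assumptions.

    # Minimal sketch: estimating marker displacements in an elastic skin by
    # tracking salient features between a reference frame and the current frame.
    import cv2
    import numpy as np

    def marker_displacements(ref_gray, cur_gray):
        """Return per-marker displacement vectors (in pixels) between two grayscale frames."""
        # Detect marker-like features in the undeformed reference image.
        p0 = cv2.goodFeaturesToTrack(ref_gray, maxCorners=200,
                                     qualityLevel=0.05, minDistance=8)
        # Track those features into the current (deformed) image.
        p1, status, _ = cv2.calcOpticalFlowPyrLK(ref_gray, cur_gray, p0, None)
        good = status.ravel() == 1
        return (p1[good] - p0[good]).reshape(-1, 2)

    # Usage (illustrative): disp = marker_displacements(ref_frame, cur_frame)
    # shear_estimate = disp.mean(axis=0)   # average (x, y) displacement in pixels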

Early examples of optical tactile sensor implementations were suggested by Begej (1988). The principle was relatively straightforward and consisted of using a camera to measure the frustration of the total internal reflection (TIR) happening at an elastomer interface. While this sensing method is relatively simple to achieve, it still requires a bulky implementation and can mostly only be used to localize contacts and quantify the contact area.
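As a rough illustration of this principle (the threshold and pixel-to-area scale below are assumed values, not taken from Begej's sensor), the contact region can be estimated by thresholding the camera image where TIR is frustrated, counting the bright pixels, and computing their centroid:

    # Minimal sketch, assuming a camera viewing the back of an elastomer interface:
    # contact frustrates TIR and brightens the corresponding pixels.
    import numpy as np

    def contact_area_and_centroid(frame, threshold=60, mm2_per_pixel=0.01):
        """Estimate contact area (mm^2) and centroid (pixel coords) from a grayscale frame."""
        contact_mask = frame > threshold               # pixels where TIR is frustrated
        area_mm2 = contact_mask.sum() * mm2_per_pixel  # scale pixel count to physical area
        ys, xs = np.nonzero(contact_mask)
        centroid = (xs.mean(), ys.mean()) if xs.size else None
        return area_mm2, centroid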

Today, many researchers are choosing vision-based tactile sensors. A good example of such sensors are those of the TacTip family (Ward-Cherrier et al., 2018), where a camera is used to track a group of pins embedded within a silicone material with skin-like smoothness. These specific sensors are able to accurately localize contact points with an average error ranging from 0.16 to 0.20 mm. However, the fact that these sensors measure from 85 to 161 mm (from the base to the sensing pad) could make it difficult to integrate them into a robotic gripper, because they could severely limit the stroke of most grippers. Another well-known example of a vision-based sensor is the GelSight tactile sensor, which is one of the tactile sensors with the highest spatial resolution (Johnson & Adelson, 2009). It has even been shown that this sensor can detect the difference in height caused by the ink on the surface of a twenty-dollar bill. Although this sensor has been integrated into a robotic gripper (Li et al., 2014), further concerns about its size have pushed researchers to develop a revised and more compact version called GelSlim (Donlon et al., 2018). This version uses a mirror to change the camera's field of view, which enabled a significant reduction in the sensor's thickness. However, the sensor must now be embedded into a whole finger instead of just a fingertip. Both GelSight and GelSlim rely on three colored illumination sources (red, green, and blue) at three different locations and use photometric stereo to convert the images to 2.5D data, which requires more computing power than most other transduction technology implementations.
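To give an idea of the processing involved, the following is a minimal sketch of classical Lambertian photometric stereo with three known light directions, in the spirit of what GelSight-type sensors do; the actual sensors rely on calibrated lookup tables and more robust gradient integration, so the light-direction matrix and the naive integration step here are illustrative assumptions only.

    # Minimal sketch of Lambertian photometric stereo with three known light
    # directions L (each row a unit vector); values of L are assumed, not calibrated.
    import numpy as np

    def normals_from_three_images(I_r, I_g, I_b, L):
        """Recover per-pixel unit surface normals from three single-channel images."""
        h, w = I_r.shape
        I = np.stack([I_r, I_g, I_b], axis=-1).reshape(-1, 3).T  # 3 x (h*w) intensities
        G = np.linalg.solve(L, I)                                 # G = albedo * normal
        rho = np.linalg.norm(G, axis=0) + 1e-8                    # per-pixel albedo
        return (G / rho).T.reshape(h, w, 3)                       # unit normals

    def depth_from_normals(n):
        """Naive 2.5D reconstruction: integrate surface gradients with cumulative sums."""
        p = -n[..., 0] / np.clip(n[..., 2], 1e-3, None)  # dz/dx
        q = -n[..., 1] / np.clip(n[..., 2], 1e-3, None)  # dz/dy
        # Simple (non-robust) integration; real systems use more careful schemes.
        return 0.5 * (np.cumsum(p, axis=1) + np.cumsum(q, axis=0))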

Table of Contents

INTRODUCTION
0.1 Motivation
0.1.1 Robots in the agile manufacturing paradigm
0.1.2 Dexterous manipulation: bridging the gap between humans and robots
0.2 Research problems
0.2.1 Leveraging tactile inputs to improve robotic manipulation
0.2.2 Unravelling tactile sensing modalities
0.3 Objectives
0.4 Contributions
0.5 Thesis organization and outline
CHAPTER 1 LITERATURE REVIEW
1.1 Tactile sensing transduction techniques
1.1.1 Optical and vision-based sensors
1.1.2 Capacitive sensors
1.1.3 Piezoresistive sensors and strain gauges
1.1.4 Piezoelectric sensors
1.1.5 Magnetic sensors
1.1.6 Barometric sensors
1.1.7 Others
1.2 Tactile data encoding
1.2.1 Principal component analysis (PCA)
1.2.2 Independent component analysis (ICA)
1.2.3 Local linear embedding (LLE)
1.2.4 K-means clustering
1.2.5 Spectral clustering
1.2.6 Sparse coding
1.2.7 Other encoding techniques
1.3 Tactile sensing applications in robotics
1.3.1 Slippage detection and other dynamic event detection
1.3.2 Object recognition, classification and grasping
1.3.3 Learning how to grasp and handle
CHAPTER 2 IMPROVING INDUSTRIAL GRIPPERS WITH ADHESION-CONTROLLED FRICTION
2.1 Abstract
2.2 Introduction
2.3 Related work
2.4 Theory
2.4.1 Friction with adhesion
2.4.2 Moment compensation in manipulation tasks
2.5 Fingertip design and construction
2.6 Experiments
2.6.1 Friction comparison with flat and textured silicone rubber
2.6.1.1 Experiment
2.6.1.2 Results
2.6.2 Tangential force and area versus normal force
2.6.2.1 Experiment
2.6.2.2 Results
2.6.3 Effect of tangential force direction
2.6.3.1 Experiments
2.6.3.2 Results
2.6.4 Robotic grasping experiment
2.6.4.1 Setup
2.6.4.2 Results
2.7 Discussion: predicting maximum shear stress in practical settings
2.8 Conclusions and future work
2.8.1 Conclusions
2.8.2 Future work
2.9 Acknowledgments
CHAPTER 3 IDENTIFYING IMPORTANT ROBOTIC EVENTS USING SPARSE TACTILE DATA REPRESENTATIONS
3.1 Abstract
3.2 Introduction
3.3 Proposed approach
3.3.1 Pre-processing algorithms
3.3.2 Sparse dynamic data encodings
3.4 Experiments
3.4.1 Setup description
3.4.2 Data collection
3.5 Results and analysis
3.5.1 Analysis of the hyperparameters’ effects on performance
3.5.1.1 Effect of 𝛽
3.5.1.2 Effect of the number of basis vectors (𝑁𝐵𝑎𝑠𝑖𝑠)
3.5.1.3 Effect of the frequency resolution
3.5.1.4 Effect of the Hamming window size
3.5.1.5 Effect of sparsity
3.5.2 Analysis of dictionary elements usage per class
3.5.3 Results
3.5.3.1 Pair-wise results
3.5.3.2 Performance in different classification scenarios
3.5.4 Generalization analysis
3.6 Conclusion
CHAPTER 4 TACTILE-BASED OBJECT RECOGNITION USING A GRASP-CENTRIC EXPLORATION
4.1 Abstract
4.2 Introduction
4.3 Related work
4.3.1 Tactile sensing in robotics
4.3.2 Using tactile sensors for object identification
4.4 The approach
4.4.1 Experimental setup
4.4.2 The tactile exploration phase
4.4.3 The machine learning agents
4.4.3.1 The gripper opening position
4.4.3.2 The dynamic data
4.4.3.3 The perceived tactile deformation at the fingertips
4.5 Experimental results and analyses
4.5.1 The contribution of each modality
4.5.2 Sources of confusion
4.5.3 Never-seen objects and property inference
4.6 Conclusion
CONCLUSION
