GRASP STABILITY PREDICTION SYSTEM USING TACTILE SENSORS

In the past six decades, we have come a long way in the world of autonomous machines. Science fiction had placed robots in our homes by the year 2000. Even though we do not all own humanoid robots to help us with our daily tasks, automated systems have become essential to our way of life. The stories we heard as children have guided some of us to attempt replicating human abilities with robots (Boston Dynamics (2017), Honda (2017), Bebionic (2017), etc.). Among all the complex tasks we learn naturally as humans, one of the first abilities we wished to replicate with robots was grasping. We have come a long way in the automatic grasping domain, but robots are still far from matching human capabilities when it comes to adapting their grasp to novel objects. We often see images of industrial robots accomplishing complex tasks in a very robust and efficient way; however, with the lean supply chain movement (small-volume, high-mix production), robots must now be able to adapt to a variety of different objects. There is a growing demand for adaptive systems in the manufacturing world, which can be attributed to consumers' desire for personalized products. This movement has affected not only small productions but also high-volume production lines. Robotic integrators and tooling experts have had to push their imaginations to transform the typical rigid assembly line into a flexible production line. We have seen the introduction of cobots in the manufacturing world and of the notion of lean robotics, which are meant to help integrators maintain high flexibility and quick deployment of production lines. This transition towards new manufacturing techniques has even prompted industry giants, such as Google and Amazon, to invest time, money and effort into more flexible and efficient production lines.

For a robotic integrator, it is a common task to teach a robot how to grasp an object. Equipped with the proper end effector and sensors, a robot can be shown how to grasp an object and also assess the quality of its grasp. But replace the object and it can no longer grasp the new one properly. This is quickly becoming a problem for modern flexible production lines. As researchers, we wish to develop new kinds of tools and methods that allow easier and faster integration of flexible robotic cells. More specifically, our research is interested in developing new techniques that allow a robot to learn, not how to grasp an object, but how to determine whether the grasp is stable. This interest came from analysing the grasping strategy used by humans, which turned our attention to the biomedical research on grasping. We noticed there is a planning phase (Feix et al. (2014a) and Feix et al. (2014b)), where our brain computes the necessary trajectory and grasping technique, based largely on vision and our knowledge of the object to grasp. But there is a whole second phase that starts when we come in contact with the object (Fu et al. (2013)): we react and adapt our grasp based on a whole new set of sensors located directly in our hands. We asked ourselves how a human evaluates the quality of a grasp as it is happening. In the case of biological intelligence, we know that we use a combination of different senses, mixed with experience, to assess the outcome of our actions. Johansson and Flanagan (2009b) have shown that an essential sense for grasp assessment is touch. Many researchers (Hyttinen et al. (2015), Huebner et al. (2008), Dang and Allen (2013)) have used tactile sensors for grasp planning and adjustment.

Overview of Relevant Sensors Used in Robotic Manipulation and Grasping Tasks

According to the Oxford English Dictionary, a robot is "a machine capable of carrying out a complex series of actions automatically, especially one programmable by a computer". These robots, just like humans, are very limited in the tasks they can execute if they do not have the proper tools. In order to perform more and more complex tasks, researchers and engineers have developed sensors that give robotic systems more flexibility.

Computer Vision

In the case of an industrial robot working on an assembly line, repeating the same task with the same objects over and over, the sense of sight is not necessary. But if the object is slightly different or presented to the robot in a different location, the robot will immediately fail. Vision systems have been in constant development and evolution in order to solve some of these problems.

Capturing images long precedes robots, but it is only in the 1960s that images were first used in combination with a computer. Indeed, image processing is typically a very demanding task. Nowadays, we have the ability to capture both 2D and 3D data with different sensors.

One could believe that 2D passive cameras are out of date, but in fact they are still very useful in industry. As Cheng and Denman (2005) have demonstrated, using a 2D vision system can improve the accuracy, flexibility and intelligence of a robotic system. Indeed, we have become very proficient at detecting edges, patterns and shapes in 2D imagery. Also, with the evolution of computer processors, these tasks are executed very rapidly, to the point where they can be used in real time.
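To illustrate the kind of 2D processing referred to above, the following minimal Python sketch uses OpenCV to extract edges and candidate contours from a grayscale image. The image file name and the Canny thresholds are placeholder assumptions for illustration, not values taken from our setup.

# Minimal sketch of classical 2D edge and contour detection with OpenCV.
# The file name and thresholds below are illustrative assumptions only.
import cv2

image = cv2.imread("part_on_conveyor.png", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(image, (5, 5), 0)        # suppress sensor noise
edges = cv2.Canny(blurred, 50, 150)                 # hysteresis thresholds
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
print(f"Detected {len(contours)} candidate contours")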

On the other hand, we live in a three-dimensional world, and to develop more elaborate systems we need information from the third dimension. The output of 3D vision systems takes the form of a point cloud. There are many different technologies to retrieve this type of information, but they can be separated into two main categories. First, stereo vision uses two passive cameras; depth is inferred by computing a disparity field at each pixel location. The Bumblebee camera by Point Grey Research is a popular choice. The second category is active ranging, which can be separated into two sub-categories: projected-light and time-of-flight technologies. Projected light, as its name suggests, projects a pattern (visible or not) and uses triangulation to compute depth. The most common version of this technology is found in the Microsoft Kinect. Time of flight relies on our knowledge of the speed of light: again, light is projected onto the scene we want to see, but this time depth is computed by calculating the time delay between the emission and detection of the light source. Lidar systems are based on this technology. Each of these cameras has its strong points and its weaknesses.
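The two depth principles described above can be summarized in two short formulas: stereo depth is inversely proportional to pixel disparity, and time-of-flight depth is half the round-trip distance travelled by light. The following sketch shows both; the focal length, baseline and timing values are illustrative assumptions only.

# Minimal sketch of depth recovery for the two families of 3D sensors.
# All numerical values below are illustrative assumptions only.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Stereo vision: depth = f * B / d (inversely proportional to disparity)."""
    return focal_length_px * baseline_m / disparity_px

def depth_from_time_of_flight(round_trip_s):
    """Time of flight: light travels to the scene and back, hence the /2."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

# Example: 12-px disparity, 600-px focal length, 12-cm baseline
print(depth_from_disparity(12.0, 600.0, 0.12))   # -> 6.0 m
# Example: 40-nanosecond round trip
print(depth_from_time_of_flight(40e-9))          # -> ~6.0 m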

Here are a few examples of researchers who have used 3D image processing in an automated system. Viet et al. (2013) used a Bumblebee stereo camera in their algorithm to control an electric wheelchair for severely disabled people. The goal of their research was to avoid objects while moving towards a target position in highly cluttered areas. Padgett and Browne (2017) have also worked on obstacle avoidance, but using lidar technology. Furthermore, researchers such as Fan et al. (2014) proposed a computer vision system using a passive 2D camera coupled to a 3D sensor for depth perception. Their system was built to efficiently determine the position of randomly placed objects for robotic manipulation.

Force Sensing

Vision systems give robots the ability to know their surroundings but do not give them direct feedback on the effect they have in their workspace. The second type of sensor we wish to review is the force-torque sensor, which can be used for many purposes. Again, there are many different technologies for measuring what the end effector of a robot feels.

A common use of force-torque sensors is in applications where a robot must keep constant pressure on a workpiece. Mills (1989) proposes a complete dynamic model to be used for such tasks. Another interesting application for force-torque sensors is the hand-guiding of a robot. Typically, during the development of a robotic program, the integrator must use a keypad or a joystick to move the robot to the desired positions in order to teach them. Loske and Biesenbach (2014) propose a solution to hand-drive an industrial robot using an added force-torque sensor, and companies like Universal Robots, who build collaborative robots, have integrated such technology directly into their controllers by using force feedback from the individual joints of their robotic arms.
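As a rough illustration of the constant-pressure use case, the sketch below shows one step of a simple proportional force-regulation loop. The functions read_normal_force() and move_tool_z() are hypothetical placeholders for a sensor driver and a robot motion interface, and the gain and set point are illustrative assumptions, not values from the works cited above.

# Minimal sketch of a proportional force-regulation step for keeping a
# constant contact force on a workpiece. read_normal_force() and
# move_tool_z() are hypothetical placeholders; gains are illustrative.
import time

TARGET_FORCE_N = 10.0   # desired contact force, in newtons
KP = 0.0005             # proportional gain: metres of correction per newton
DT = 0.01               # control period in seconds (100 Hz)

def force_regulation_step(read_normal_force, move_tool_z):
    error = TARGET_FORCE_N - read_normal_force()   # positive -> press harder
    move_tool_z(-KP * error)                       # small corrective motion
    time.sleep(DT)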

In the case of our research, we are more interested in monitoring what the robot feels during a grasping operation. Some authors, such as Hebert et al. (2011), propose a method to fuse data from a vision system, a force-torque sensor and the gripper finger positions to evaluate the position of an object within the robotic hand, while others, such as Moreira et al. (2016), propose a complex architecture to assess the outcome of a grasping operation. These systems have a common point: the robot must actually pick up the object in order to evaluate the grasp.

Force-torque sensors can give us valuable information on the grasped object. We can imagine a system that would detect whether an object has been dropped by reading the variations in weight reported by the sensor, but it is much harder to detect the object slipping within the gripper.
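A minimal sketch of such a drop detector is given below, assuming the object has already been lifted: if the vertical force no longer accounts for the expected payload weight, the object was probably dropped. The read_wrench() function, the payload mass and the tolerance are hypothetical placeholders, not part of our actual system.

# Minimal sketch of drop detection from a wrist force-torque sensor.
# read_wrench() is a hypothetical sensor interface returning forces in newtons;
# the payload mass and tolerance are illustrative assumptions only.
GRAVITY = 9.81          # m/s^2
PAYLOAD_KG = 0.35       # expected mass of the grasped object (assumed)
DROP_TOLERANCE_N = 1.0  # missing weight tolerated before flagging a drop

def object_dropped(read_wrench):
    fz = read_wrench()["force_z"]                 # vertical force component
    expected = PAYLOAD_KG * GRAVITY               # weight the sensor should see
    return abs(fz) < expected - DROP_TOLERANCE_N  # weight vanished -> dropped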

Table of Contents

INTRODUCTION
CHAPTER 1 LITERATURE REVIEW
1.1 Overview of Relevant Sensors Used in Robotic Manipulation and Grasping Tasks
1.1.1 Computer Vision
1.1.2 Force Sensing
1.1.3 Tactile Sensors
1.2 Examples of Existing Intelligent Grasping Robots
1.2.1 Classical Examples of Robots Picking and Moving Products on an Assembly Line
1.2.2 Modern Intelligent Manipulating Robots
1.3 Machine Learning in Robotic Grasping Strategies
CHAPTER 2 AUTOMATED PICKING STATION
2.1 Experimental setup
2.1.1 Choosing the material
2.1.2 Installation specifications
2.2 Programming environment and architecture
2.3 Vision system
2.3.1 ROS package and C++ libraries
2.3.2 Object detection algorithm
2.4 Grasp strategy
CHAPTER 3 GRASP STABILITY PREDICTION SYSTEM USING TACTILE SENSORS
3.1 Proposed approach
3.1.1 Data collection
3.1.2 Data Auto-Encoding
3.1.3 Optimisation process
3.2 Experimentation
3.2.1 Experimental results
3.2.2 Sparse coding analysis
3.2.3 The classifier’s performance analysis
CHAPTER 4 AUTOMATED LABELLING SYSTEM
4.1 Defining the labels
4.2 Labelling algorithm
4.3 Evaluating our automated labelling system
4.4 Possible improvements on the automatic labelling system
CHAPTER 5 EVOLUTION OF OUR GRASP STABILITY PREDICTION SYSTEM USING INTEGRATED IMUS
5.1 Testing our old system with the new data
5.1.1 Validation of our tactile classifier
5.1.2 Training new classifiers with old metaparameters
5.1.2.1 New SVM with old dictionary
5.1.2.2 New dictionary and SVM
5.2 Running the optimization process with the new data
5.3 Integration techniques of the new data in our system
5.3.1 Defining the IMU data
5.3.2 Testing our systems on the same base
5.3.3 Using tactile and IMU data to build classifiers
5.3.3.1 Blending the data using a handmade classifier
5.3.3.2 Constructing a multilayer SVM system
5.4 Future work on data fusion
CONCLUSION
