Multi-Sensor Information Fusion for the Control of the NAO Humanoid Robot

Nowadays, robotics plays a very important role in industry, and it is likely to shape our future. Robots appear in many domains, from laboratories to real applications in healthcare, the military, the environment, and entertainment. For that reason, the field receives a great deal of attention from researchers, and our work is part of this effort.

A robot is normally equipped with several sensors that allow it to receive information from the external world. The quality of this information depends not only on the quality of the sensors but also on the operating environment. These factors can degrade the performance of the robot by introducing uncertainty and imprecision, and can lead to severe consequences. For example, if a submarine robot fails to detect an underwater obstacle (e.g. a large fish) because of a low-quality sonar or external noise (e.g. from an enemy), a dangerous collision might occur. This is why decisions should be as certain and precise as possible.

Many solutions exist to overcome the problems of uncertainty and imprecision. The most popular is to integrate more than one sensor for the same task, in a so-called multi-sensor decision system; this approach is applied in many modern robots that carry out critical operations. The main purpose of adding extra sensors is to increase certainty and to reduce imprecise decisions as much as possible. In theory, more sensors means more information, so performance should improve; what is needed is a good method for combining the information coming from these sensors.

Data fusion in robotics has been experimentally validated in many studies across different application domains. For example, the work in [21] proposes a novel approach to simultaneous localization and mapping (SLAM) using a multi-sensor system composed of a sonar and a CCD camera: the Hough transform is used to extract geometric features from the raw sonar data and the camera image, and an Extended Kalman Filter (EKF) then fuses the information at the feature level. [19] considers an indoor navigation scenario in which mobile robots lose reliability when moving at high speed; combining a wireless sensor network with passive RFID allows the robot to navigate more precisely and to avoid static obstacles. [11] deals with the treatment of uncertainty and imprecision during the localization of a mobile robot equipped with an exteroceptive sensor and odometers: the imprecise data delivered by the two sensors are fused by constraint propagation on intervals, based on Smets' Transferable Belief Model. In [10], a multi-sensor data fusion algorithm estimates the current pose of the six-legged walking robot DLR Crawler with respect to its starting position; the algorithm is based on an indirect feedback information filter that fuses measurements from an inertial measurement unit (IMU) with relative 3D leg odometry and relative 3D visual odometry from a stereo camera. In [78], a multi-sensor fusion scheme is used for human activity recognition: data from two wearable inertial sensors attached to one foot are fused for a coarse-grained classification that determines the type of activity, and a fine-grained classification module based on heuristic discrimination or a hidden Markov model then further distinguishes the activities. Finally, the work presented in [8] describes a flexible multi-modal interface based on speech and gesture modalities for controlling a mobile robot, in which a probabilistic, multi-hypothesis interpreter framework fuses the results of the speech and gesture components. More examples of sensor fusion applied to robotics can be found in the literature.

Influenced by the many works on robotics and fusion in the literature, we launched a project in this domain, taking a humanoid robot as the validation platform. The robot, named NAO, was developed by the Aldebaran company. Despite its small size (55 cm tall), its 25 degrees of freedom allow it to perform many complex tasks and to mimic human behaviors. Notably, it is equipped with several sensors for receiving information from the external world: two HD cameras, four microphones, an ultrasonic emitter and receiver, two tactile sensors on the hands and one on the head, two bumpers on the feet, an inertial unit, and 24 joint sensors.

In this thesis, we consider scenarios in which the NAO robot recognizes colors and objects using its sensors. The microphones are used to recognize commands from a human operator, the sonar sensors to avoid obstacles while the robot moves, and, most importantly, a camera on its head to detect and recognize targets.
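To make the setting concrete, the sketch below shows how a single frame could be grabbed from NAO's head camera with the NAOqi Python SDK. It is a minimal illustration, not the acquisition code used in the thesis; the robot address "nao.local" and the subscriber name are placeholders.

```python
# Minimal sketch of grabbing one frame from NAO's head camera with the
# NAOqi Python SDK. The address "nao.local" and the subscriber name are
# placeholders; the resolution/color-space constants follow the NAOqi docs.
from naoqi import ALProxy

ROBOT_IP, ROBOT_PORT = "nao.local", 9559  # hypothetical address

video = ALProxy("ALVideoDevice", ROBOT_IP, ROBOT_PORT)

# camera 0 = top camera, resolution 2 = 640x480 (kVGA),
# color space 11 = RGB, 5 frames per second
handle = video.subscribeCamera("color_client", 0, 2, 11, 5)
try:
    frame = video.getImageRemote(handle)
    width, height, raw = frame[0], frame[1], frame[6]
    # `raw` is an interleaved RGB byte string of size width*height*3,
    # ready to be passed to a color-detection routine.
finally:
    video.unsubscribe(handle)
```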

Many approaches to color and object recognition exist in the literature; however, during the robot's operation, uncertainty and imprecision are unavoidable. They may come from the quality of the sensors, from the environment (e.g. lighting conditions or occlusion), or from confusion among possible choices (e.g. colors or objects that are too similar). We study the effect of these uncertainties and imprecisions on the decision-making ability of the NAO robot, and we propose a multi-camera system to improve the reliability of its decisions. We have evaluated the performance of both types of fusion: homogeneous and heterogeneous sensors (cameras).
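Chapter 3 builds on the Dempster-Shafer theory to combine the detections of several cameras. As a preview, the sketch below implements the generic Dempster's rule of combination on a toy frame of discernment; the mass values are invented for illustration and do not come from the experiments.

```python
# Minimal sketch of Dempster's rule of combination for two sources.
# Focal elements are frozensets over the frame of discernment
# {"red", "green"}; the mass values below are invented for illustration.

def combine(m1, m2):
    """Combine two mass functions with Dempster's rule."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to the empty set
    # normalize by the non-conflicting mass (1 - K)
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

RED, GREEN = frozenset(["red"]), frozenset(["green"])
THETA = RED | GREEN  # total ignorance

camera1 = {RED: 0.7, THETA: 0.3}               # camera 1 mostly sees red
camera2 = {RED: 0.5, GREEN: 0.2, THETA: 0.3}   # camera 2 is less certain

print(combine(camera1, camera2))
# The fused mass concentrates on {"red"}, the hypothesis consistent
# with both cameras (about 0.83 after normalization).
```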

As discussed above, a robot cannot be guaranteed ideal working conditions: uncertainty and imprecision are always present, and for critical tasks multi-sensor fusion becomes increasingly important. This research offers an interesting view of where a humanoid robot struggles in its recognition tasks and of how its perception can be improved. Additionally, to the best of our knowledge, no other work considers multi-camera data fusion for color/object recognition on the NAO robot. We therefore expect this work to serve as a useful reference for other research in the same domain.

Table of Contents

1 Introduction
1.1 Context
1.2 Contribution of This Research
1.3 Hypothesis of the Research
1.4 Outline of the Methodology
1.5 Structure of the Thesis
2 Sugeno Fuzzy System for the Color Detection
2.1 The NAO Robot in the Context of Color Detection
2.1.1 The NAO Robot
2.1.2 The Color Detection
2.2 Consideration of Color Spaces
2.2.1 RGB
2.2.2 CIE-L*a*b*
2.2.3 HSV
2.2.4 The Choice of Color Spaces
2.3 Methods for the Color Detection
2.3.1 Neural Network based Methods
2.3.2 Genetic Algorithm based Methods
2.3.3 Fuzzy System based Methods
2.4 The Sugeno Fuzzy System in the Color Detection
2.4.1 Overall Process
2.4.2 Membership Functions
2.4.3 Sugeno Inference for Output Colors
2.4.4 A Practical Consideration
2.5 The Reliability of the Proposed Detection Method
2.5.1 The Influence of the Detection Threshold
2.5.2 The Influence of Uncertainties and Imprecisions
2.5.3 The Quantification of Reliability of the Detection System
2.6 Experimental Study
2.7 About the Improvement of the Performance for the Detection System
2.8 Conclusion of the Chapter
3 Fusion of Homogeneous Sensor Data
3.1 The Color Detection Using Multiple Homogeneous Sources
3.2 Background of the Dempster-Shafer Theory
3.3 The Methodology for the Color Detection Using Multiple Homogeneous Data Sources
3.3.1 The Method’s Principle
Overview of the Process
Constructing Mass Values
Combination and Decision
Discounting Factor and the Reliability of Sources
3.3.2 Illustrative Example
Example 1: Conflict between two cameras
Example 2: Conflict among three cameras
3.3.3 On the Choice for the Number of Sources
3.4 Application and Validation
3.4.1 The Context of Application
3.4.2 Validation of the Detection and Discussion
Fusion of Two Cameras
Fusion of Three Cameras
3.5 Conclusion
4 Conclusion
