Human action recognition in monocular RGB-D video sequences


Generative Adversarial Networks (GANs)

In recent years, Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) have gained considerable popularity in computer vision. GAN-based approaches have achieved strong results in image synthesis (Reed et al., 2016), image super-resolution (Ledig et al., 2017), image-to-image translation (Isola et al., 2017), and related tasks. In this section, we briefly review the mathematical model behind the GAN framework and its training procedure.

A GAN consists of two components (see FIGURE 2.7): a generator $G$ and a discriminator $D$. Given an input noise vector $z$ sampled from a prior distribution $p_z(z)$, the generator $G$ is trained to produce a sample $x = G(z)$ that is indistinguishable from samples of the real data distribution $p_{\text{data}}(x)$; in other words, while training $G$ we maximize the probability that its output is judged to belong to $p_{\text{data}}(x)$. Each generated sample is fed into the discriminator $D$ alongside a stream of samples drawn from the real distribution, and $D$ is trained to estimate the probability that a given sample comes from the real distribution. To this end, $D$ should make accurate decisions on real data, which is achieved by maximizing $\mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)]$. Meanwhile, given a fake sample $G(z)$ with $z \sim p_z(z)$, the discriminator should output a probability $D(G(z))$ close to zero, which is achieved by maximizing $\mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$. The generator, on the other hand, is trained to increase the chance that $D$ assigns a high probability to a fake example, i.e., to minimize $\mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$. Combining both objectives, $D$ and $G$ play a minimax game over the loss function $L(D, G)$:

    \min_G \max_D L(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]    (2.13)

In practice, both $G$ and $D$ are neural networks. The loss function $L(D, G)$ in equation (2.13) can be optimized with gradient-based methods, since both $G$ and $D$ are differentiable with respect to their inputs and parameters. Radford, Metz, and Chintala, 2015 introduced a family of architectures called Deep Convolutional GANs (DCGANs) that makes GAN training more stable; their study showed that GANs can learn good image representations for both supervised learning and generative modeling. In Chapter 3, we will examine the potential of GANs in analyzing actions in videos.
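To make the alternating optimization of equation (2.13) concrete, the sketch below shows one training step in PyTorch. This is a minimal illustration, not the architecture used later in this thesis: the two small fully connected networks, the noise dimension, and the learning rates are placeholder assumptions, and any differentiable $G$ and $D$ could be substituted.

    import torch
    import torch.nn as nn

    # Placeholder networks (assumptions for illustration only):
    # G maps noise to a flattened 28x28 image, D outputs P(sample is real).
    noise_dim = 100
    G = nn.Sequential(nn.Linear(noise_dim, 256), nn.ReLU(),
                      nn.Linear(256, 784), nn.Tanh())
    D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2),
                      nn.Linear(256, 1), nn.Sigmoid())

    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCELoss()  # bce(p, y) = -[y log p + (1 - y) log(1 - p)]

    def train_step(real):
        """One alternating update of D then G; `real` is a (batch, 784) tensor."""
        batch = real.size(0)
        ones = torch.ones(batch, 1)
        zeros = torch.zeros(batch, 1)

        # Discriminator step: maximize log D(x) + log(1 - D(G(z))),
        # i.e. minimize the equivalent binary cross-entropy.
        z = torch.randn(batch, noise_dim)  # z ~ p_z(z)
        fake = G(z).detach()               # block gradients into G here
        loss_d = bce(D(real), ones) + bce(D(fake), zeros)
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Generator step: equation (2.13) says minimize log(1 - D(G(z)));
        # the common non-saturating variant below maximizes log D(G(z))
        # instead, which has the same fixed point but stronger early gradients.
        z = torch.randn(batch, noise_dim)
        loss_g = bce(D(G(z)), ones)
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
        return loss_d.item(), loss_g.item()

Calling train_step on minibatches of flattened images drawn from a data loader alternates the two sides of the minimax game; in practice, DCGAN-style convolutional architectures would replace the fully connected networks assumed above.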

Related reviews and public datasets

Previous reviews 

We first consider earlier surveys of video-based human action recognition. Several surveys have been published in the major computer vision and image processing conferences and journals (Aggarwal and Cai, 1999; Moeslund and Granum, 2001; Wang, Hu, and Tan, 2003; Moeslund, Hilton, and Krüger, 2006; Turaga et al., 2008). For instance, Aggarwal and Cai, 1999 reviewed methods for human motion analysis, focusing on three major areas: motion analysis, tracking a moving human from a single view or multiple cameras, and recognizing human actions from image sequences. Moeslund and Granum, 2001 reviewed approaches to human motion capture. They structured motion analysis systems as a hierarchical process with four steps: initialization, tracking, pose estimation, and recognition, and organized the reviewed papers along this taxonomy. Wang, Hu, and Tan, 2003 presented an overview of human motion analysis as a three-level process comprising human detection, human tracking, and behavior understanding. Moeslund, Hilton, and Krüger, 2006 described work on human motion capture and analysis, centered on initialization, tracking, pose estimation, and recognition. Turaga et al., 2008 reviewed the major approaches to recognizing human actions and activities; they characterized "actions" as simple motion patterns, typically executed by a single person, whereas "activities" are more complex and involve coordinated actions among a small number of humans.

Many reviews of human action recognition approaches have appeared since 2010 (e.g., Poppe, 2010; Weinland, Ronfard, and Boyer, 2011; Popoola and Wang, 2012; Ke et al., 2013; Aggarwal and Xia, 2014; Guo and Lai, 2014). For instance, Poppe, 2010 focused on image representation and action classification methods, and a similar survey by Weinland, Ronfard, and Boyer, 2011 also concentrated on approaches to action representation and classification. Popoola and Wang, 2012 surveyed contextual abnormal human behavior detection for surveillance applications. Ke et al., 2013 reviewed human action recognition methods for both static and moving cameras, covering problems such as feature extraction, representation techniques, action detection, and classification. Aggarwal and Xia, 2014 reviewed human action recognition based on 3D data, especially RGB and depth information acquired by 3D sensors, while Guo and Lai, 2014 reviewed existing approaches to still-image-based action recognition. More recently, Cheng et al., 2015 reviewed human action recognition approaches, classifying all methodologies into two categories: single-layered approaches and hierarchical approaches. Vrigkas, Nikou, and Kakadiaris, 2015 categorized human action recognition methods into two main categories, "unimodal" and "multimodal", and reviewed the action classification methods in each. The work of Subetha and Chitrakala, 2016 focused mainly on human action recognition and human-object interaction methods. Presti and La Cascia, 2016 reviewed human action recognition based on 3D skeletons, summarizing the main technologies, both hardware and software, for classifying actions inferred from skeletal data. Finally, Kang and Wildes, 2016 summarized various action recognition and detection algorithms, focusing on the encoding and classification of motion features.

Benchmark datasets for human action recognition in videos

With the growth of research on human action recognition, many benchmark datasets have been recorded and published. Much of the progress in human action recognition has been demonstrated on these standard benchmarks, which allow researchers to develop, evaluate, and compare new approaches to human action recognition in videos. In this section, we summarize the most important benchmark datasets, from early datasets containing simple actions acquired under controlled environments, e.g., KTH (Schuldt, Laptev, and Caputo, 2004), Weizmann (Gorelick et al., 2007), and IXMAS (Weinland, Ronfard, and Boyer, 2006), to recent benchmarks with millions of video samples covering complex actions and human behaviors in real-world scenarios, e.g., Sports-1M (Karpathy et al., 2014) and NTU RGB+D (Shahroudy et al., 2016). TABLE 3.2 lists these datasets and their main characteristics.

Table of Contents

Abstract
Acknowledgements
1 Introduction
1.1 Human action recognition in videos
1.2 Motivation
1.3 Research challenges
1.4 Problem statement and scope of study
1.5 Main contributions
1.6 Structure of the thesis
2 Overview of Deep Learning
2.1 Deep Learning: A summary
2.2 Convolutional Neural Networks (CNNs)
2.3 Recurrent Neural Networks with Long Short-Term Memory units (RNN-LSTM)
2.4 Deep Belief Networks (DBNs)
2.5 Stacked Denoising Autoencoders (SDAs)
2.6 Generative Adversarial Networks (GANs)
2.7 Conclusion
3 Deep Learning for Human Action Recognition: State-of-the-Art
3.1 Related reviews and public datasets
3.1.1 Previous reviews
3.1.2 Benchmark datasets for human action recognition in videos
3.2 Deep learning approaches for video-based human action recognition
3.2.1 Deep learning for human action recognition: Challenges
3.2.2 Human action recognition based on CNNs
3.2.3 Human action recognition based on RNNs
3.2.4 Fusion of CNNs with LSTM units for human action recognition
3.2.5 Human action recognition based on DBNs
3.2.6 Human action recognition based on SDAs
3.2.7 GANs for human action recognition
3.2.8 Other deep architectures for human action recognition
3.3 Discussion
3.3.1 Current state of deep learning architectures for action recognition
3.3.2 A quantitative analysis on HMDB-51, UCF-101 and NTU RGB+D
3.3.3 The future of deep learning for video-based human action recognition
3.4 Conclusion
4 Proposed Deep Learning-based Approach for 3D Human Action Recognition from Skeletal Data Provided by RGB-D Sensors
4.1 Learning and recognizing 3D human actions from skeleton movements with Deep Residual Neural Networks
4.1.1 Introduction
4.1.2 Related work
4.1.3 Proposed method
4.1.4 Experiments
4.1.5 Experimental results and analysis
4.1.6 Conclusion
4.2 SPMF: A new skeleton-based representation for 3D action recognition with Inception Residual Networks
4.2.1 Introduction
4.2.2 Proposed method
4.2.3 Experiments
4.2.4 Experimental results and analysis
4.2.5 Processing time: training and prediction
4.2.6 Conclusion
4.3 Enhanced-SPMF: An extended representation of the SPMF for 3D human action recognition with Deep Convolutional Neural Networks
4.3.1 Introduction
4.3.2 Proposed method
4.3.3 Experiments
4.3.4 Experimental results and analysis
4.3.5 Conclusion
4.4 CEMEST dataset
4.4.1 Introduction to CEMEST dataset
4.4.2 Experiments on CEMEST
4.4.3 Experimental results
4.4.4 Conclusion
5 Deep Learning for 3D Pose Estimation and Action Recognition
5.1 Introduction
5.2 Related work
5.2.1 3D human pose estimation from a single RGB camera
5.2.2 3D pose-based action recognition from RGB sensors
5.3 Proposed method
5.3.1 Problem definition
5.3.2 Deep learning model for 3D human pose estimation from RGB images
5.3.3 Deep learning framework for 3D pose-based action recognition
5.4 Experiments
5.4.1 Datasets and settings
5.4.2 Implementation details
5.4.3 Experimental results and comparison
5.4.4 Computational efficiency evaluation
5.5 Conclusion
6 Conclusions and Perspectives
6.1 Discussion
6.2 Limitations
6.3 Future work
6.3.1 Recurrent Neural Networks with Long Short-Term Memory units
6.3.2 Temporal Convolutional Network
6.3.3 Multi-Stream Deep Neural Networks
6.3.4 Attention Temporal Networks
A Datasets
B Network Architectures
C Savitzky-Golay Smoothing Filter
D Degradation phenomenon in training very deep neural networks
E Summarized French version
F Curriculum Vitæ
