Low-rank and gradient histogram preserving model

Image denoising

Removing noise from images is an essential pre-processing step in many image analysis applications. The problem of image denoising can be defined formally as recovering the original image x from its noisy observation y = x + n, where n is a zero-mean additive noise vector (e.g., Gaussian, Laplacian, Rician, etc.). Approaches to this problem can be roughly divided into three categories: spatial domain, transform domain and learning-based methods (Katkovnik et al., 2010). Spatial domain methods leverage the correlations between local patches of pixels in an image. In such methods, pixel values in the denoised image are obtained by applying a spatial filter, which combines the values of candidate pixels or patches. A spatial filter is considered local if its support for a pixel is a distance-limited neighborhood of that pixel. Numerous local filtering algorithms have been proposed in the literature, including the Gaussian filter, Wiener filter, least mean squares filter, trained filters, the bilateral filter, anisotropic filtering and steering kernel regression (SKR) (Szeliski, 2010). Although computationally efficient, local filtering methods do not perform well in the presence of structured noise, due to the correlations between neighboring pixels.
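
To make this formulation concrete, the short sketch below simulates the additive observation model y = x + n and applies a simple local spatial filter. It is a minimal illustration only, assuming a grayscale image stored as a NumPy float array and using a Gaussian filter as the distance-limited local smoother; the noise level and filter width are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def add_gaussian_noise(x, sigma=25.0):
    """Simulate the observation model y = x + n with zero-mean Gaussian noise n."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def local_denoise(y, sigma_spatial=1.5):
    """Local spatial filtering: each output pixel is a Gaussian-weighted average
    of a distance-limited neighborhood around it."""
    return gaussian_filter(y, sigma=sigma_spatial)

# Example usage (x is a clean grayscale image as a float array):
# y = add_gaussian_noise(x)
# x_hat = local_denoise(y)
```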

On the other hand, nonlocal filters like nonlocal means (NLM) (Buades et al., 2005a; Mahmoudi and Sapiro, 2005; Coupé et al., 2008; Wang et al., 2006) exploit the information of possibly distant pixels in the image. Various works have shown the advantage of nonlocal filtering methods over local approaches in terms of denoising performance (Zimmer et al., 2008; Dabov et al., 2007; Mairal et al., 2009), in particular at high noise levels. However, nonlocal spatial filters may still introduce artifacts such as over-smoothing. Unlike spatial filtering approaches, transform domain methods represent the image or its patches in a different space, typically using an orthonormal basis like wavelets (Luisier et al., 2007), curvelets (Starck et al., 2002) or contourlets (Do and Vetterli, 2005). In this transform space, small coefficients correspond to high-frequency components of the image, which are related to fine details and noise. By thresholding these coefficients, noise can be removed from the reconstructed image (Donoho, 1995). Compared to spatial domain approaches, transform domain methods like wavelets better exploit the properties of sparsity and multi-resolution (Pizurica et al., 2006).
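
The transform-domain idea can be sketched as follows with a wavelet decomposition. This is a minimal example assuming the PyWavelets package and a fixed soft threshold; practical methods choose the threshold from an estimate of the noise level rather than hard-coding it.

```python
import pywt

def wavelet_denoise(y, wavelet="db4", level=3, thresh=20.0):
    """Transform-domain denoising: decompose the image, shrink small detail
    coefficients (which carry mostly noise), and reconstruct."""
    coeffs = pywt.wavedec2(y, wavelet, level=level)
    # Keep the coarse approximation; soft-threshold the detail sub-bands.
    shrunk = [coeffs[0]]
    for (cH, cV, cD) in coeffs[1:]:
        shrunk.append(tuple(pywt.threshold(c, thresh, mode="soft")
                            for c in (cH, cV, cD)))
    return pywt.waverec2(shrunk, wavelet)
```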

However, these methods employ a fixed basis which may not be optimal for a given type of image. Recent research has therefore focused on defining the transform basis in a data-driven manner, using dictionary learning (Elad and Aharon, 2006; Mairal et al., 2009; Dong et al., 2011a). Although many denoising approaches based on dictionary learning are now considered state-of-the-art, these approaches are often computationally expensive. Finally, denoising methods based on statistical learning model the noisy image as a set of independent samples following a mixture of probability distributions, such as Gaussians (Awate and Whitaker, 2006). Mixture parameters are typically inferred from the data using an iterative technique like the expectation-maximization (EM) algorithm. However, these methods are sensitive to outliers (i.e., pixels with high noise values), which affect the parameter inference step. Various techniques have been proposed to deal with this problem. In (Portilla et al., 2003), scale mixtures of Gaussians are applied in the wavelet domain for greater robustness. Moreover, a Bayesian framework is presented in (Dong et al., 2014b), which extends Gaussian scale mixtures using simultaneous sparse coding (SSC).
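
The statistical-learning viewpoint can be illustrated with a minimal sketch, assuming scikit-learn: image patches are treated as independent samples and a Gaussian mixture is fitted with the EM algorithm. The patch size, number of components and function name are illustrative, and the subsequent denoising step (e.g., per-component MAP filtering) is omitted.

```python
from sklearn.mixture import GaussianMixture
from sklearn.feature_extraction.image import extract_patches_2d

def fit_patch_gmm(y, patch_size=8, n_components=10, max_patches=20000):
    """Fit a Gaussian mixture to zero-mean image patches with EM; the learned
    components can then serve as a statistical prior for denoising."""
    patches = extract_patches_2d(y, (patch_size, patch_size),
                                 max_patches=max_patches, random_state=0)
    X = patches.reshape(len(patches), -1)
    X = X - X.mean(axis=1, keepdims=True)       # remove the patch DC component
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    return gmm.fit(X)
```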

Image completion

Image completion or inpainting is another important problem in image processing and low-level computer vision, which consists of recovering missing pixels or regions in an image. Letting Ω denote the set of observed pixels (i.e., the mask) in image y, the goal is to recover the full image x under the constraint that P_Ω(x) = P_Ω(y), where P_Ω denotes the operator projecting onto the elements in Ω. In the generative model of Eq. (1.2), the degradation operator Φ corresponds to a diagonal matrix such that Φ_ii = 1 if pixel i ∈ Ω, and Φ_ii = 0 otherwise. Over the years, a flurry of studies has aimed at solving the problem of image completion (Chierchia et al., 2014; He and Wang, 2014; Heide et al., 2015; Ji et al., 2010; Zhang et al., 2012, 2014a; Li et al., 2016; Kwok et al., 2010). Approaches for this task can be classified as structure-based, texture-based or low-rank approximation-based methods. Structure-based methods focus on the continuity of geometrical structures in the image, and attempt to fill in missing structures in a way that is consistent with the rest of the image. Approaches in this category include partial differential equation (PDE) or variational-based methods (Masnou, 2002), convolutions (Richard and Chang, 2001), and wavelets (Chan et al., 2006; He and Wang, 2014). Because they focus on structure, however, such approaches are usually unable to recover large regions or regions with complex textures. In contrast, texture-based methods address the image completion task via a process of texture synthesis. Statistical texture synthesis approaches extract features from pixels surrounding the missing region to build a statistical model of texture (Levin et al., 2003; Portilla and Simoncelli, 2000).
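
To make the formal completion constraint P_Ω(x) = P_Ω(y) concrete, the sketch below builds a random observation mask and the corresponding projection operator. It assumes a grayscale NumPy image; the sampling ratio and helper names are illustrative and not taken from the cited works.

```python
import numpy as np

def degrade_with_mask(x, observed_frac=0.5, seed=0):
    """Completion observation model: Omega marks the observed pixels and the
    degradation operator Phi keeps only those pixels (Phi_ii = 1 if i in Omega)."""
    rng = np.random.default_rng(seed)
    mask = rng.random(x.shape) < observed_frac   # True where the pixel is observed
    return np.where(mask, x, 0.0), mask

def project_onto_observations(x_hat, y, mask):
    """Enforce the data-consistency constraint P_Omega(x) = P_Omega(y)."""
    return np.where(mask, y, x_hat)
```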

This statistical texture model is then used to generate a texture for the missing region that has the same visual appearance as the available textures. Methods based on textures can operate at the pixel or patch level. Pixel-based textural inpainting techniques generate missing pixels one-by-one, using techniques like Markov Random Fields (MRF) to ensure consistency with neighboring pixels (Efros and Leung, 1999; Tang, 2004). Patch-based or exemplar-based techniques (Criminisi et al., 2004; Drori et al., 2003; Kwok et al., 2010) preserve the consistency of the missing region by reconstructing it patch by patch, as opposed to pixel by pixel. The key idea of such techniques is to find candidate patches from the image and combine them to fill in the missing region. This process is typically applied iteratively, until the filled region is consistent internally and with surrounding pixels (Criminisi et al., 2004). In general, the quality of results depends on various factors such as patch size, patch matching algorithm, patch filling priority, etc. However, unlike pixel-based approaches, image completion methods using patches can leverage nonlocal patterns in the image to achieve higher performance. The last category of image completion methods is based on low-rank approximation.
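
The core step of an exemplar-based method can be sketched as follows: a target patch on the boundary of the missing region is compared to candidate source patches using only its known pixels, and the best match supplies the missing values. This is a minimal sketch; patch selection order (filling priority) and blending, which the cited methods handle carefully, are omitted, and the function name is illustrative.

```python
import numpy as np

def fill_from_best_exemplar(target, known, candidates):
    """Exemplar-based filling step: compare the target patch to candidate source
    patches over its known pixels only, then copy the best match into the
    missing pixels.  target: (p, p); known: boolean (p, p); candidates: (N, p, p)."""
    diffs = (candidates - target) * known        # errors on known pixels only
    scores = np.sum(diffs ** 2, axis=(1, 2))     # SSD per candidate patch
    best = candidates[np.argmin(scores)]
    return np.where(known, target, best)
```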

Low-rank methods stem from recent advances in the fields of matrix completion (Zhang et al., 2012; Wright et al., 2009; Eriksson and van den Hengel, 2012; Buchanan and Fitzgibbon, 2005; Eriksson and Van Den Hengel, 2010; Candes and Recht, 2012; Cai et al., 2010) and tensor completion (Romera-Paredes and Pontil, 2013; Tomioka et al., 2010; Weiland and Van Belzen, 2010; Liu et al., 2013b). The general principle of these approaches is to divide the image into equal-sized sub-regions (i.e., patches), in such a way that some patches contain both observed and missing pixels. Patches are then stacked into a matrix/tensor, and those with missing pixels are recovered by solving a matrix/tensor completion problem. For instance, in (Li et al., 2016), a low-rank matrix approximation technique is combined with a nonlocal autoregressive model to reconstruct image patches efficiently. Moreover, a truncated nuclear norm regularization technique is proposed in (Zhang et al., 2012), which can reconstruct patches with a higher accuracy by considering only a small number of components (i.e., singular vectors).
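
The low-rank idea can be illustrated with a minimal sketch, assuming a matrix whose columns are vectorized patches: singular values are soft-thresholded to promote low rank, and the observed entries are re-imposed at each iteration. The threshold, iteration count and function name are illustrative and do not reproduce any of the cited algorithms.

```python
import numpy as np

def low_rank_complete(y, mask, tau=50.0, n_iters=200):
    """Recover a patch matrix with missing entries by iterative singular value
    thresholding: shrink the spectrum to promote low rank, then re-impose the
    observed entries indicated by `mask`."""
    x = np.where(mask, y, 0.0)
    for _ in range(n_iters):
        u, s, vt = np.linalg.svd(x, full_matrices=False)
        x = (u * np.maximum(s - tau, 0.0)) @ vt   # soft-threshold singular values
        x = np.where(mask, y, x)                  # keep observed entries fixed
    return x
```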

Super-resolution

In super-resolution (SR), the degradation operator Φ corresponds to a down-sampling matrix and the problem is to recover the high-resolution image x from its low-resolution version y. Hence, this task is often regarded as an interpolation problem. Image super-resolution is essential to enhance the quality of images captured with low-resolution devices, and has become a popular research area since the pioneering work of Tsai and Huang (Tsai and Huang, 1984). Numerous techniques have been proposed for this task over the years, stemming from signal processing and machine learning. Based on the number of observed low-resolution images, these techniques can be divided into single-frame and multi-frame methods. Single-frame methods (Glasner et al., 2009; Yang et al., 2010a; Bevilacqua et al., 2012; Zeyde et al., 2010) typically employ a learning algorithm to reconstruct the missing information of super-resolved images based on the relationship between low- and high-resolution images in a training dataset.
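
As a minimal illustration of this observation model, the degradation operator Φ can be approximated by a blur followed by decimation, with bicubic interpolation as a simple single-frame baseline. The blur width and scaling factor below are illustrative assumptions, not parameters from the cited works.

```python
from scipy.ndimage import gaussian_filter, zoom

def downsample(x, factor=2, blur_sigma=1.0):
    """SR observation model: blur the high-resolution image, then keep every
    `factor`-th sample in each direction (the down-sampling operator Phi)."""
    return gaussian_filter(x, sigma=blur_sigma)[::factor, ::factor]

def bicubic_upscale(y, factor=2):
    """A simple interpolation baseline for single-frame super-resolution."""
    return zoom(y, factor, order=3)
```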

In contrast, multi-frame SR algorithms (Capel and Zisserman, 2001; Li et al., 2010) usually assume some geometric relationship between the different views, which is then used to reconstruct the super-resolved image. SR methods can also be grouped based on whether they work in the spatial domain or a transform domain (e.g., Fourier (Gunturk et al., 2004; Champagnat and Le Besnerais, 2005) or wavelets (Zhao et al., 2003; Ji and Fermüller, 2009)). SR methods in the spatial domain are numerous and include techniques based on iterative back projection (Zomet et al., 2001; Farsiu et al., 2003), non-local means (Protter et al., 2009), MRFs (Rajan and Chaudhuri, 2001; Katartzis and Petrou, 2007), and total variation (Farsiu et al., 2004; Lian, 2006). Patch-based SR methods address the problem by learning a redundant dictionary for high-resolution patches, and aggregating the reconstructed high-resolution patches into a super-resolved image (Freeman et al., 2000; Chang et al., 2004; Yang et al., 2010a; Bevilacqua et al., 2012; Zeyde et al., 2010; Timofte et al., 2013). Recently, deep-learning SR techniques like convolutional neural networks (CNN) (Dong et al., 2016; Kim et al., 2016) have gained a tremendous amount of popularity. Such techniques learn an end-to-end mapping between low-resolution and high-resolution images, composed of sequential layers of non-linear operations (e.g., convolution, spatial pooling, rectification, etc.). The main drawback of such techniques is their requirement for large volumes of training data, and their tendency to overfit the training dataset.
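
Such an end-to-end mapping can be sketched minimally, assuming PyTorch; the three-layer structure is in the spirit of SRCNN (Dong et al., 2016), but the layer widths, kernel sizes and class name here are illustrative rather than those of the published model.

```python
import torch.nn as nn

class TinySRNet(nn.Module):
    """A small SRCNN-style network: an end-to-end mapping from a bicubic-upscaled
    low-resolution image to its high-resolution estimate, built from stacked
    convolutions and rectifications (layer sizes are illustrative)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),
        )

    def forward(self, x):        # x: (batch, 1, H, W) interpolated low-resolution input
        return self.body(x)
```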

Table of contents

INTRODUCTION
0.1 Problem statement and motivation
0.2 Research objectives and contributions
0.3 Thesis outline
CHAPTER 1 LITERATURE REVIEW
1.1 Key concepts
1.2 Image priors
1.2.1 Structure-based priors
1.2.2 Histogram priors
1.2.3 Sparse representation priors
1.2.4 Nonlocal self-similarity priors
1.3 Reconstruction problems
1.3.1 Image denoising
1.3.2 Image completion
1.3.3 Super-resolution
1.3.4 Compressed sensing
1.4 Summary
CHAPTER 2 STRUCTURE PRESERVING IMAGE DENOISING BASED ON LOW-RANK RECONSTRUCTION AND GRADIENT HISTOGRAMS
2.1 Abstract
2.2 Introduction
2.3 Related work
2.4 The proposed method
2.4.1 Low-rank reconstruction
2.4.2 Low-rank and gradient histogram preserving model
2.4.3 Optimization method for recovering the image
2.5 Experiments
2.5.1 Parameter setting
2.5.2 Evaluation on benchmark images
2.5.3 Evaluation on texture images
2.5.4 Impact of weighted nuclear norm
2.5.5 Impact of gradient histogram preservation
2.5.6 Computational efficiency
2.6 Conclusion
CHAPTER 3 HIGH-QUALITY IMAGE RESTORATION USING LOW-RANK PATCH REGULARIZATION AND GLOBAL STRUCTURE SPARSITY
3.1 Abstract
3.2 Introduction
3.3 Related work
3.4 The proposed image restoration model
3.4.1 Low-rank reconstruction of similar patches
3.4.2 Global sparse structure regularization
3.4.3 Image reconstruction combining both priors
3.5 Efficient ADMM method for image recovery
3.6 Experiments
3.6.1 Parameter setting and performance metrics
3.6.2 Random pixel corruption
3.6.3 Text corruption
3.6.4 Image super-resolution
3.6.5 Parameter impact
3.7 Conclusion
CHAPTER 4 ATLAS-BASED RECONSTRUCTION OF HIGH PERFORMANCE BRAIN MR DATA
4.1 Abstract
4.2 Introduction
4.3 The proposed method
4.3.1 Probabilistic atlas of gradients
4.3.2 Sparse dictionaries of NSS patches
4.3.3 Recovering the image
4.3.4 Algorithm summary and complexity
4.4 Experiments
4.4.1 Evaluation methodology
4.4.2 Impact of the atlas-weighted TV prior
4.4.3 Comparison to baseline approaches
4.4.4 Comparison to state-of-the-art
4.5 Conclusion
CHAPTER 5 CONCLUSION
5.1 Summary of contributions
5.2 Limitations and recommendations
BIBLIOGRAPHY
