Full-reference image quality assessment

As discussed in the Introduction, FR-IQA models evaluate the perceptual quality of a distorted image with respect to its pristine-quality reference image. A good FR-IQA model should correlate highly with subjective ratings, have low computational complexity, provide an accurate local quality map, and possess desirable mathematical properties such as convexity and differentiability. Existing FR-IQA models rarely satisfy all of these factors simultaneously (Bae & Kim, 2016a).

The mean square error (MSE) and peak signal-to-noise ratio (PSNR) are the most widely used FR-IQA metrics because of their simplicity and efficiency.
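For images of equal size, both metrics reduce to a few lines. The sketch below uses the standard formulations; max_val is the peak intensity (255 for 8-bit images):

```python
import numpy as np

def mse(ref, dist):
    """Mean squared error between two images of equal shape."""
    ref = ref.astype(np.float64)
    dist = dist.astype(np.float64)
    return float(np.mean((ref - dist) ** 2))

def psnr(ref, dist, max_val=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    err = mse(ref, dist)
    if err == 0.0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / err)
```

Their weakness, which motivates the structural models discussed next, is that they treat every pixel error equally regardless of local image structure.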

In fact, the most successful IQA models in the literature follow a top-down strategy: they compute a similarity map and then apply a pooling strategy that converts the values of this map into a single quality score. Different feature maps have been used in the literature to compute this similarity map. The feature similarity index (FSIM) (Zhang et al., 2011) uses phase congruency and gradient magnitude features, and its pooling stage is also based on phase congruency. FSIMc extends FSIM with a chromatic term that measures color distortions. GS (Liu et al., 2012) uses a combination of designated gradient magnitudes and image contrast to this end, while GMSD (Xue et al., 2014b) uses only the gradient magnitude. SR_SIM (Zhang & Li, 2012) uses saliency features and gradient magnitude, and VSI (Zhang et al., 2014) likewise benefits from saliency-based features and gradient magnitude. SVD-based features (Shnayderman et al., 2006), features based on the Riesz transform (Zhang et al., 2010), features in the wavelet domain (Chandler & Hemami, 2007; Li et al., 2011; Sampat et al., 2009; Rezazadeh & Coulombe, 2013), and sparse features (Chang et al., 2013) have also been used.
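As a concrete illustration, a gradient-magnitude similarity map in the spirit of the models above can be sketched as follows. This is a minimal sketch: the stability constant c and the use of np.gradient are illustrative choices, not the exact kernels or tuned constants of any cited model:

```python
import numpy as np

def gradient_similarity_map(ref, dist, c=170.0):
    """Pixel-wise gradient-magnitude similarity between two grayscale
    images; values near 1 indicate locally preserved structure."""
    gy_r, gx_r = np.gradient(ref.astype(np.float64))
    gy_d, gx_d = np.gradient(dist.astype(np.float64))
    gm_r = np.hypot(gx_r, gy_r)  # gradient magnitude of the reference
    gm_d = np.hypot(gx_d, gy_d)  # gradient magnitude of the distorted image
    # Classic similarity form: equals 1 where magnitudes match,
    # decreases toward 0 as they diverge; c avoids division by zero.
    return (2.0 * gm_r * gm_d + c) / (gm_r ** 2 + gm_d ** 2 + c)
```

An undistorted image yields a map that is identically 1; distortions push local values below 1.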

Among these features, the gradient magnitude is efficient to compute, as shown in (Xue et al., 2014b). In contrast, phase congruency and visual saliency features are generally too slow to compute for efficient use. The choice of features therefore plays a significant role in the overall efficiency of an IQA model.

As mentioned earlier, the computation of the similarity map is followed by a pooling strategy. The state-of-the-art pooling strategies for perceptual image quality assessment (IQA) are based on the mean and the weighted mean (Wang et al., 2004, 2003; Wang & Li, 2011; Liu et al., 2012; Zhang et al., 2011, 2010). These are robust pooling strategies that usually provide moderate to high performance across different IQAs. Minkowski pooling (Wang & Shang, 2006), local distortion pooling (Wang & Shang, 2006; Moorthy & Bovik, 2009a; Larson & Chandler, 2010), percentile pooling (Moorthy & Bovik, 2009b), and saliency-based pooling (Zhang & Li, 2012; Zhang et al., 2014) are other possibilities. Standard deviation (SD) pooling was also proposed and used successfully in GMSD (Xue et al., 2014b). Image gradients are sensitive to distortions, and different local structures in a distorted image suffer different degrees of degradation; this observation motivated the authors of (Xue et al., 2014b) to use the standard deviation of the gradient-based local similarity map to predict overall image quality. In general, the features that constitute the similarity map and the pooling strategy are both crucial factors in designing high-performance IQA models.
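The two simplest pooling stages described above can be sketched as follows. This is a minimal illustration of mean pooling versus SD pooling, not the exact implementation of any cited model:

```python
import numpy as np

def mean_pooling(sim_map):
    """Average the local similarity map into one quality score;
    higher means better quality."""
    return float(np.mean(sim_map))

def sd_pooling(sim_map):
    """Standard-deviation pooling in the spirit of GMSD: a larger
    spread of local quality implies worse perceived quality, so the
    result is a distortion index (0 for a uniformly similar map)."""
    return float(np.std(sim_map))
```

Note the inverted interpretation: mean pooling produces a similarity score, while SD pooling produces a distortion score that is 0 for a perfectly uniform similarity map.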

FR quality assessment of tone-mapped images

Tone-mapping operators (TMOs) are used to convert HDR images into associated LDR images for display on non-HDR monitors. Unfortunately, TMO methods perform differently depending on the HDR image to be converted, which means that the best TMO must be found for each individual case. Surveys of various TMOs for HDR images and videos are provided in (Yeganeh & Wang, 2013a) and (Eilertsen et al., 2013). Traditionally, TMO performance has been evaluated subjectively. In (Ledda et al., 2005), a subjective assessment was carried out using an HDR monitor. Mantiuk et al. (Mantiuk et al., 2005) proposed the HDR visible difference predictor (HDR-VDP) to estimate the visibility differences between two HDR images, and this tool was later extended to a dynamic-range-independent image quality assessment (Aydin et al., 2008). However, the authors did not produce an objective score, but instead evaluated the performance of the assessment tool on HDR displays. Although subjective assessment provides true and useful references, it is an expensive and time-consuming process. In contrast, objective quality assessment of tone-mapped images enables automatic selection and parameter tuning of TMOs (Yeganeh & Wang, 2010; Ma et al., 2014). Consequently, objective assessment of tone-mapped images that is consistent with subjective assessment is currently of great interest.

Recently, an objective index called the tone mapping quality index (TMQI) was proposed by (Yeganeh & Wang, 2013a) to objectively assess the quality of the individual LDR images produced by a TMO. The TMQI combines an SSIM-motivated structural fidelity measure with a statistical naturalness measure:

TMQI(H, L) = a[S(H, L)]^α + (1 − a)[N(L)]^β,

where S and N denote the structural fidelity and statistical naturalness, respectively, and H and L denote the HDR and LDR images. The exponents α and β determine the sensitivities of the two factors, and a (0 ≤ a ≤ 1) adjusts their relative importance. Both S and N are upper-bounded by 1, so the TMQI is also upper-bounded by 1 (Ma et al., 2014). Although the TMQI clearly provides better assessment of tone-mapped images than popular image quality metrics such as SSIM (Wang et al., 2004), MS-SSIM (Wang et al., 2003), and FSIM (Zhang et al., 2011), its performance is not perfect. Liu et al. (Liu et al., 2014b) replaced the pooling strategy of the structural fidelity map in the TMQI with various visual saliency-based strategies for better quality assessment of tone-mapped images. They examined a number of visual saliency models and concluded that integrating saliency detection by combining simple priors (SDSP) into the TMQI provides better assessment capability than other saliency detection models.
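The TMQI combination step can be sketched as below. The default parameter values here are illustrative placeholders, not the tuned defaults reported by Yeganeh & Wang; only the algebraic form follows the definition above:

```python
def tmqi_combine(S, N, a=0.8, alpha=0.3, beta=0.7):
    """Combine structural fidelity S and statistical naturalness N
    (both in [0, 1]) into a single TMQI-style score:
        a * S**alpha + (1 - a) * N**beta
    Since S, N <= 1 and 0 <= a <= 1, the score is also bounded by 1."""
    return a * S ** alpha + (1.0 - a) * N ** beta
```

The boundedness property mentioned above follows directly: each term is at most its weight, and the weights sum to 1.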

Table of Contents

INTRODUCTION
0.1 Problem statement
0.2 Contributions
0.3 Outline of the thesis
CHAPTER 1 LITERATURE REVIEW
1.1 Full-reference image quality assessment
1.1.1 FR quality assessment of tone-mapped images
1.2 No-reference image quality assessment
1.2.1 NR-IQA of JPEG compressed images
1.2.2 RR and NR-IQA of contrast distorted images
1.3 Overview on color to gray conversion methodologies
CHAPTER 2 GENERAL METHODOLOGY
2.1 Research objectives
2.1.1 Objective 1: Develop an effective, efficient and reliable full-reference IQA model with new features and pooling strategy
2.1.2 Objective 2: Develop a full-reference IQA model for tone-mapped images
2.1.3 Objective 3: Develop a parameterless no-reference IQA model for JPEG compressed images which is robust to block size and misalignment
2.1.4 Objective 4: Propose highly efficient features and develop efficient NR-IQA metric for assessment and classification of contrast distorted images
2.1.5 Objective 5: Propose a perceptually consistent highly efficient color to gray image conversion method
2.2 General approach
2.2.1 New full-reference image quality assessment metrics
2.2.2 Efficient no-reference image quality assessment metrics
2.2.3 Efficient perceptually consistent color to gray image conversion
CHAPTER 3 MEAN DEVIATION SIMILARITY INDEX: EFFICIENT AND RELIABLE FULL-REFERENCE IMAGE QUALITY EVALUATOR
3.1 Introduction
3.2 Mean Deviation Similarity Index
3.2.1 Gradient Similarity
3.2.2 The Proposed Gradient Similarity
3.2.3 Chromaticity Similarity
3.2.4 Deviation Pooling
3.2.5 Analysis and Examples of GCS Maps
3.3 Experimental results and discussion
3.3.1 Performance comparison
3.3.2 Visualization and statistical evaluation
3.3.3 Performance comparison on individual distortions
3.3.4 Parameters of deviation pooling (ρ, q, o)
3.3.5 Summation vs. Multiplication
3.3.6 Parameters of model
3.3.7 Effect of chromaticity similarity maps CS and CS
3.3.8 Implementation and efficiency
3.4 Conclusion
3.5 Acknowledgments
CHAPTER 4 FSITM: A FEATURE SIMILARITY INDEX FOR TONE-MAPPED IMAGES
4.1 Introduction
4.2 The proposed similarity index
4.3 Experimental results
4.4 Conclusion
4.5 Acknowledgments
CHAPTER 5 MUG: A PARAMETERLESS NO-REFERENCE JPEG QUALITY EVALUATOR ROBUST TO BLOCK SIZE AND MISALIGNMENT
5.1 Introduction
5.2 Proposed Metric (MUG)
5.2.1 Number of unique gradients (NUG)
5.2.2 Median of unique gradients (MUG)
5.2.3 Stable MUG (MUG+)
5.3 Experimental results
5.3.1 Complexity
5.4 Conclusion
5.5 Acknowledgments
CHAPTER 6 EFFICIENT NO-REFERENCE QUALITY ASSESSMENT AND CLASSIFICATION MODEL FOR CONTRAST DISTORTED IMAGES
6.1 Introduction
6.2 Proposed Metric (MDM)
6.3 Experimental results
6.3.1 Contrast distorted datasets
6.3.2 Objective evaluation
6.3.3 Contrast distortion classification
6.3.4 Parameters
6.3.5 Complexity
6.4 Conclusion
6.5 Acknowledgments
CHAPTER 7 CORRC2G: COLOR TO GRAY CONVERSION BY CORRELATION
7.1 Introduction
7.2 Proposed Decolorization method
7.3 Experimental results
7.3.1 Complexity
7.4 Conclusion
7.5 Acknowledgments
CHAPTER 8 GENERAL DISCUSSION
8.1 Efficient and reliable full-reference image quality assessment for natural, synthetic and photo-retouched images
8.2 Full-reference image quality assessment for tone-mapped images
8.3 Block-size and misalignment invariant no-reference image quality assessment model for JPEG compressed images
8.4 Efficient no-reference quality assessment and classification of contrast distorted images
8.5 Efficient color to gray image conversion by correlation
CONCLUSION
