
OBJECTIVE IMAGE QUALITY ASSESSMENT WITH SINGULAR VALUE DECOMPOSITION

Manish Narwaria and Weisi Lin
School of Computer Engineering, Nanyang Technological University, Singapore, 639798
Email: {mani0018, wslin}@ntu.edu.sg

ABSTRACT

In this paper, we develop a new image quality assessment scheme based on Singular Value Decomposition (SVD). In a previous work [1], only the singular values from the SVD were used for quality assessment. We show that the singular vectors are even more important than the singular values for quality assessment. Hence, we use both singular vectors and singular values for comprehensive image quality evaluation, and demonstrate that the quality prediction is significantly improved. Extensive experimental results are reported as evidence of the effectiveness of the proposed scheme, in comparison with relevant existing methods, on three open and independent datasets with a total of 2647 distorted images.

1. INTRODUCTION

Automatically assessing the quality of digital images and video is a challenging problem in image and video processing, with many practical applications such as process evaluation, optimization (e.g., of a video encoder) and monitoring (e.g., at transmission and manufacturing sites). In addition, how to evaluate picture quality plays a central role in shaping most (if not all) visual processing algorithms and systems, as well as their implementation.

With regard to visual quality assessment, findings from psychovisual research provide useful insights. It is well known that the human visual system (HVS) is sensitive to spatial frequency and structure, i.e., the HVS is highly adapted to extract structural information from visual scenes [12]. Therefore, in recent years there has been growing interest in taking image structure into account for picture quality evaluation [12, 16, 17], because structural properties play a major role in human perception and image recognition. A well-cited perceptual metric based upon structure has been proposed by Wang and Bovik: first as the universal image quality index (UQI) [17] and then in its improved form known as the structural similarity index (SSIM) [12].
In this paper, we use Singular Value Decomposition to develop an image quality metric based on the loss of image structure and texture. Extensive experimental results on 3 open and independent databases are reported as evidence of the effectiveness of the proposed scheme.

2. IMAGE ANALYSIS WITH SINGULAR VALUE DECOMPOSITION

The singular value decomposition of an image A (of size r x c) can be written as

A = U Σ V^T ... (1)

where U, V and Σ represent the left singular vector matrix, the right singular vector matrix, and the diagonal matrix of singular values, respectively:

U = [u_1 u_2 ... u_r]
V = [v_1 v_2 ... v_c]
Σ = diag(σ_1, σ_2, ..., σ_t)

where u_i and v_k are column vectors and σ_i is a singular value (i = 1, 2, ..., r and k = 1, 2, ..., c), with σ_1 ≥ σ_2 ≥ ... ≥ σ_t and t = min(r, c).

In the existing MSVD metric [1], the change in Σ is used to evaluate perceptual quality. This is reasonable, since singular values have been shown to be reasonably effective for image texture classification [2]. However, the MSVD metric ignores the singular vectors U and V, which better represent image structure. The matrix U V^T can be regarded as the ensemble of the basis images and used as a structural or geometrical representation of the image. Figure 1 shows the geometrical framework denoted by the U and V matrices. It is known from perturbation analysis [3, 4] that U and V are sensitive to perturbations and can therefore be used to estimate the changes in an image (and thus to evaluate its quality). As can be seen from the images in Figure 1, adding distortion leads to quality degradation, and the same is captured effectively in the distorted geometry denoted by U and V. For instance, in Fig. 1(c), Gaussian blurring of the original image damages the structure (especially the edges in this case). The corresponding loss of structure is well captured by the perturbed (blurred) singular vectors, as shown in Fig. 1(c1). Other types of distortions (perturbations) affect the image structure/texture in different ways. The changes in U and V account for the structural degradations and thus provide an effective basis for assessing visual quality.

Furthermore, Ref. [5] demonstrates that the use of U and V improves the performance of an SVD-based face recognition system. This is because U and V contain more information about image structure and geometry than Σ does. In addition, we have conducted experiments and found that, compared to changes in Σ, changes in U and V have more effect on perceived visual quality. An example is demonstrated on an image from the LIVE [8] dataset in Figure 2. We constructed images by combining perturbed U and V with the unperturbed (i.e., original) Σ, and vice versa. We can see that images with perturbed U and V are perceptually more degraded than images with perturbed Σ. Note that images with unperturbed U, unperturbed V and perturbed Σ always have higher PSNR values than the ones with perturbed U, perturbed V and unperturbed Σ. This indicates that perturbation of U and V has a larger effect on visual quality than perturbation of Σ. Therefore, the changes in U and V must be considered when assessing visual quality.

U, V and Σ together contain complete information about the image, and quality degradation is reflected in their changes. This is the basis of image noise filtering methods [6, 7], which eliminate the changes in U, V and Σ to restore perceptual quality; it indicates that the changes in U, V and Σ provide meaningful information about visual quality. In [15], singular vectors were shown to be effective for image quality assessment.
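The comparison behind Figure 2 can be reproduced in a few lines of linear algebra. The sketch below is our own illustration, not the authors' code: a random matrix stands in for a real image, white noise plays the role of the distortion, and the perturbed singular vectors (or values) are swapped into the original decomposition as in Figure 2.

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
orig = rng.uniform(0.0, 255.0, (64, 64))          # stand-in for a real image
dist = orig + rng.normal(0.0, 25.0, orig.shape)   # white-noise distortion

Uo, so, Vto = np.linalg.svd(orig, full_matrices=False)
Ud, sd, Vtd = np.linalg.svd(dist, full_matrices=False)

# Perturbed U, V combined with the unperturbed (original) singular values,
# and vice versa, mirroring the construction of Figure 2.
pert_uv = Ud @ np.diag(so) @ Vtd   # like images (a1), (b1)
pert_s = Uo @ np.diag(sd) @ Vto    # like images (a2), (b2)

print(psnr(orig, pert_uv), psnr(orig, pert_s))
```

Because singular values are far more stable under additive perturbation than singular vectors, the image rebuilt with perturbed Σ stays much closer to the original (higher PSNR) than the one rebuilt with perturbed U and V, which is the trend the paper reports.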
In this paper, we combine the changes in U, V and Σ, since this provides a more comprehensive and effective premise for visual quality assessment than using singular vectors alone (see [15]) or singular values alone (see [1]).

3. THE PROPOSED VISUAL QUALITY METRIC

We decompose the original image A (of size r x c) using (1), and the perturbed image A^(p) (also of size r x c) as

A^(p) = U^(p) Σ^(p) (V^(p))^T

where U^(p), V^(p) and Σ^(p) denote the left singular vector matrix, the right singular vector matrix and the singular value matrix of A^(p), respectively. Our aim is to measure the deviations introduced in the singular vectors by distortions.

We calculate

α_j = u_j · u_j^(p) ... (2)
β_j = v_j · v_j^(p) ... (3)

where α_j (j = 1 to t) is the dot product between the unperturbed and perturbed jth left singular vectors (u_j and u_j^(p)), and β_j is that for the right singular vectors (v_j and v_j^(p)). Let

γ_j = α_j + β_j ... (4)

The numerical measure of the change in U and V is expressed as

q_s = { Σ_{j=1}^{t} γ_j^m }^{1/m} ... (5)

where m (> 1) is a control parameter; a larger m puts more emphasis on large γ_j values. We have used m = 2 in the experiments in this study. Since the magnitude of any singular vector is always unity (by definition), whether perturbed or not, measuring the angular deviation between the perturbed and the unperturbed singular vectors is an intuitively satisfying way of characterizing structural distortion. We can see that q_s approaches 0 for large perturbations and equals 2 x (min(r, c))^{1/m} for perfectly matched images. Thus, if r = c = 512 and m = 2, the maximum value of q_s is 2 x 512^{1/2} ≈ 45.2548. Since the dynamic range of q_s is large in general (for example [0, 45.2548] in the aforementioned case), we use a logarithmic scale to reduce it and define

Q_S = ln(1 + q_s) ... (6)

where the inclusion of the constant 1 avoids an infinite value as q_s approaches 0. It may be noted that q_s and Q_S represent the same quantity, though on different scales. To measure the change in the singular values, we use the existing MSVD metric [1], in which the difference between the singular values of the reference and distorted image blocks is calculated:
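Equations (2)-(6) translate directly into a few lines of linear algebra. The following sketch is our own illustrative implementation for grayscale image arrays, not the authors' code:

```python
import numpy as np

def qs_score(ref, dist, m=2):
    """Q_S of Eq. (6): log-scaled structural change measure q_s of Eq. (5)."""
    Ur, _, Vtr = np.linalg.svd(ref, full_matrices=False)
    Ud, _, Vtd = np.linalg.svd(dist, full_matrices=False)
    alpha = np.sum(Ur * Ud, axis=0)       # alpha_j = u_j . u_j^(p), Eq. (2)
    beta = np.sum(Vtr * Vtd, axis=1)      # beta_j  = v_j . v_j^(p), Eq. (3)
    gamma = alpha + beta                  # gamma_j = alpha_j + beta_j, Eq. (4)
    qs = np.sum(gamma ** m) ** (1.0 / m)  # Eq. (5)
    return np.log1p(qs)                   # Q_S = ln(1 + q_s), Eq. (6)
```

For identical images every dot product equals 1, so q_s attains its maximum 2t^{1/m}; any structural distortion rotates the singular vectors and lowers Q_S.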

Q_L = (1/B) Σ_{j=1}^{B} |D_j - D_mid| ... (7)

where for each jth block we calculate

D_j = { Σ_{i=1}^{b} (σ_i - σ_i^(p))^2 }^{1/2}
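A sketch of the block-based measure in Eq. (7) is given below (our own illustrative implementation of the MSVD-style computation from [1]): D_j is computed per b x b block, and D_mid is the midpoint of the sorted D_j values.

```python
import numpy as np

def ql_score(ref, dist, b=8):
    """Q_L of Eq. (7): for each b x b block, D_j is the Euclidean distance
    between the singular values of the reference and distorted blocks;
    Q_L averages |D_j - D_mid| over all B blocks."""
    r, c = ref.shape
    D = []
    for i in range(0, (r // b) * b, b):
        for j in range(0, (c // b) * b, b):
            s_ref = np.linalg.svd(ref[i:i + b, j:j + b], compute_uv=False)
            s_dst = np.linalg.svd(dist[i:i + b, j:j + b], compute_uv=False)
            D.append(np.sqrt(np.sum((s_ref - s_dst) ** 2)))  # D_j
    D = np.sort(np.asarray(D))
    d_mid = D[len(D) // 2]          # midpoint of the sorted D_j values
    return float(np.mean(np.abs(D - d_mid)))
```

Identical images give D_j = 0 for every block and hence Q_L = 0; the measure grows as the block-wise singular values deviate.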

[Figure 1. Structure denoted by U, V in images: the original image and its U, V, together with (a) a JPEG image, (b) a noisy image and (c) a blurred image, and the corresponding (a1) JPEG U, V, (b1) noisy U, V and (c1) blurred U, V.]

where b is the block size (for an image of size r x c, B = (r/b) x (c/b)) and D_mid is the midpoint of the sorted D_j values. For the experimental results reported in this paper, we have used the common block size of 8 x 8, i.e., b = 8.

We now combine Q_S and Q_L to obtain the overall quality metric. A linear combination is found to yield satisfactory performance, as demonstrated by the experimental results; moreover, combining Q_S and Q_L linearly keeps the computational requirements to a minimum. The overall quality metric Q is therefore defined as

Q = Q_S - µ Q_L ... (8)

where µ is a user-defined parameter chosen empirically (in this paper we have used µ = 5), and the negative sign accommodates the opposite trends of change of Q_S and Q_L. Q, as defined by (8), can be considered a composite measure of image quality.

4. EXPERIMENTAL RESULTS

We have evaluated the performance of the proposed metric on 3 open and independent datasets with a total of 2647

distorted images with varied distortion types, as a way to demonstrate the effectiveness and generality of the proposed scheme. A brief description of the datasets is given below.

[Figure 2. Images (a), (b) are with WGN and JPEG distortion respectively (PSNR = 17.29 and 21.42); images (a1), (b1) are with perturbed U, V and original Σ (PSNR = 19.74 and 21.80); images (a2), (b2) are with original U, V and perturbed Σ (PSNR = 23.46 and 34.69).]

· The LIVE database [8] contains 779 distorted images with five distortion types: JPEG, JPEG2000, white noise, Gaussian blur and fast fading. Subjective evaluations were converted to difference scores (between the test and the reference), then to Z-scores, and then scaled and shifted to the full range (1 to 100) to compute the Difference Mean Opinion Score (DMOS) for each image.

· The Toyama subjective database [9] contains 182 images of 768 x 512 pixels, of which 14 are original images (24 bit/pixel RGB). The rest are JPEG and JPEG2000 coded images (i.e., 84 compressed images for each type of distortion). Six quality scales and six compression ratios were selected for the JPEG and JPEG2000 encoders, respectively.

· The TID dataset [10] is the largest and most comprehensive dataset available for testing image quality metrics. It consists of 25 original reference images processed by 17 different types of distortion, listed in Table I. There are 4 distortion levels, giving a total of 1700 (25 x 17 x 4) distorted images.
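The DMOS processing described for the LIVE database can be sketched as follows. This is our own illustrative code with hypothetical array names (rows = subjects, columns = images); the exact LIVE procedure is documented at [8].

```python
import numpy as np

def dmos(raw_ref, raw_test):
    """Difference scores -> per-subject Z-scores -> rescale to [1, 100].
    raw_ref / raw_test hold each subject's ratings of the reference and
    test images; both names are illustrative, not from the paper."""
    diff = raw_ref - raw_test                        # difference scores
    z = (diff - diff.mean(axis=1, keepdims=True)) \
        / diff.std(axis=1, keepdims=True)            # Z-scores per subject
    z_mean = z.mean(axis=0)                          # pool over subjects
    lo, hi = z_mean.min(), z_mean.max()
    return 1.0 + 99.0 * (z_mean - lo) / (hi - lo)    # scale/shift to [1, 100]
```

Higher DMOS means a larger perceived difference from the reference, i.e., worse quality.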

The performance of Q is compared with SSIM [12], IFC [13], MSVD [1] and VSNR [14]. A nonlinear mapping between the objective model outputs and the subjective quality ratings was also employed [11]: we fitted the objective scores to the subjective scores via a four-parameter cubic polynomial a1x³ + a2x² + a3x + a4, where a1, a2, a3 and a4 are determined from the subjective scores and the objective outputs. The Pearson correlation coefficients, Spearman correlation coefficients and Root Mean Square Error (RMSE) are shown in Figure 3. We can see that Q generally performs better than the other metrics.

To assess the statistical significance of each metric's performance relative to the others, an F-test was performed on the prediction residuals between the objective predictions (after nonlinear mapping) and the subjective scores; obviously, the smaller the residuals, the better the metric. The test assumes Gaussianity of the residual differences. Let σ²_X and σ²_Q denote the variances of the residuals of metrics X and Q, respectively; the F-statistic with respect to metric Q is then F = σ²_X / σ²_Q. When F > F_critical, metric X has significantly larger residuals than metric Q at a given confidence level. Likewise, when F < 1/F_critical, metric Q has significantly larger residuals than metric X at that confidence level. F_critical is computed from the number of residuals and the confidence level; for the results reported in this paper, we have used a 99% confidence level.

The F statistics of each metric's residuals tested against Q's residuals are reported in Table II. Values of F shown in boldface exceed F_critical, signifying with 99% confidence that the corresponding metric has significantly larger residuals than Q, i.e., that Q performs statistically better than that metric. Since the MSVD method [1] uses only Σ, it is worth pointing out that Q is statistically better than MSVD on all databases; this demonstrates the effectiveness of incorporating U and V. We can also see that F remains well above 1/F_critical in all cases, even for the three cases in Table II in which F < F_critical. The experimental results and the related statistical analysis therefore confirm that the use of U and V along with Σ improves the quality prediction performance significantly.

5. CONCLUSIONS

In this paper, we have proposed a scheme for effective image quality assessment with SVD.
The major contribution is the use, and justification, of singular vectors together with singular values to gauge the changes in images caused by distortions and hence assess their perceptual quality, in better agreement with subjective viewing ground truth. A major advantage of the proposed scheme is that it is general and effective in assessing the visual quality of images with diverse distortion types, as demonstrated by the experimental evaluation on 3 independent and comprehensive databases. The proposed scheme is found to perform better than the relevant existing metrics, as shown by the experimental results and the related statistical analysis.

Table I. Distortion types in the TID dataset

 1. Additive Gaussian noise
 2. Different additive noise in color components
 3. Spatially correlated noise
 4. Masked noise
 5. High frequency noise
 6. Impulse noise
 7. Quantization noise
 8. Gaussian blur
 9. Image denoising
10. JPEG compression
11. JPEG2000 compression
12. JPEG transmission errors
13. JPEG2000 transmission errors
14. Non eccentricity pattern noise
15. Local block-wise distortions of different intensity
16. Mean shift (intensity shift)
17. Contrast change

Table II. F statistics of different metrics with respect to Q

Dataset/Metric   LIVE    Toyama   TID
SSIM             1.28    1.75     1.52
IFC              0.91    1.42     1.69
MSVD (Q_L)       1.55    1.61     1.50
VSNR             1.09    1.17     1.48
Q                1       1        1
F_critical       1.18    1.41     1.12
1/F_critical     0.84    0.70     0.89
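The nonlinear mapping and the F statistic behind Table II can be sketched as follows. The data below are synthetic stand-ins of our own; the critical value would come from the F distribution at the chosen confidence level (e.g., via scipy.stats.f.ppf) rather than being computed here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical raw metric outputs and DMOS-like subjective scores.
objective = rng.uniform(0.0, 1.0, 100)
subjective = 20.0 + 60.0 * objective + rng.normal(0.0, 3.0, 100)

# Four-parameter cubic mapping a1*x^3 + a2*x^2 + a3*x + a4,
# fitted by least squares as in the VQEG procedure [11].
a = np.polyfit(objective, subjective, 3)
residuals = subjective - np.polyval(a, objective)

def f_statistic(resid_x, resid_q):
    """F = var(X residuals) / var(Q residuals), to compare with F_critical."""
    return np.var(resid_x, ddof=1) / np.var(resid_q, ddof=1)
```

Fitting one such polynomial per metric puts all metrics on the subjective scale, so their residual variances are directly comparable through F.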

6. REFERENCES

[1] A. M. Eskicioglu, A. Gusev, and A. Shnayderman, "An SVD-Based Gray-Scale Image Quality Measure for Local and Global Assessment," IEEE Trans. Image Processing, vol. 15, no. 2, pp. 422-429, 2006.
[2] A. T. Targhi and A. Shademan, "Clustering of singular value decomposition of image data with applications to texture classification," in Proc. SPIE Visual Communications and Image Processing, vol. 5150, pp. 972-979, Lugano, Switzerland, July 2003.
[3] G. W. Stewart, "Stochastic Perturbation Theory," SIAM Review, vol. 32, no. 4, pp. 579-610, 1990.
[4] J. Liu, X. Liu, and X. Ma, "First Order Perturbation Analysis of Singular Vectors in Singular Value Decomposition," IEEE Trans. Signal Processing, vol. 56, no. 7, pp. 3044-3049, July 2008.

[Figure 3. (a) Comparison of Pearson correlation coefficients, (b) comparison of Spearman correlation coefficients, (c) comparison of Root Mean Square Error for the Toyama and TID datasets, and (d) comparison of Root Mean Square Error for the LIVE dataset, for SSIM, IFC, MSVD, VSNR and Q.]

[5] Y. Tian, T. Tan, Y. Wang, and Y. Fang, "Do singular values contain adequate information for face recognition?" Pattern Recognition, vol. 36, pp. 649-655, 2003.
[6] Z. Devčić and S. Lončarić, "SVD block processing for non-linear image noise filtering," Journal of Computing and Information Technology, vol. 7, no. 3, pp. 255-259, 1999.
[7] W. Qi, A. Morimoto, R. Ashino, and R. Vaillancourt, "Image denoising using spline and block singular value decomposition," Scientific Proceedings of Riga Technical University, vol. 21, pp. 36-46, 2004.
[8] H. R. Sheikh, Z. Wang, A. C. Bovik, and L. K. Cormack, Image and Video Quality Assessment Research at LIVE. [Online]. Available: http://live.ece.utexas.edu/research/quality/
[9] Y. Horita, Y. Kawayoke, and Z. M. Parvez Sazzad, "Image quality evaluation database," http://160.26.142.130/toyama_database.zip.
[10] N. Ponomarenko, M. Carli, V. Lukin, K. Egiazarian, J. Astola, and F. Battisti, "Color Image Database for Evaluation of Image Quality Metrics," in Proc. Int. Workshop on Multimedia Signal Processing, Australia, pp. 403-408, Oct. 2008.
[11] VQEG, Final Report from the Video Quality Experts Group on the Validation of Objective Models of Video Quality Assessment, Phase II, August 2003. [Online]. Available: http://www.vqeg.org.
[12] Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Trans. Image Processing, vol. 13, no. 4, pp. 600-612, Apr. 2004.
[13] H. R. Sheikh, A. C. Bovik, and G. de Veciana, "An information fidelity criterion for image quality assessment using natural scene statistics," IEEE Trans. Image Processing, vol. 14, no. 12, pp. 2117-2128, Dec. 2005.
[14] D. M. Chandler and S. S. Hemami, "VSNR: A Wavelet-Based Visual Signal-to-Noise Ratio for Natural Images," IEEE Trans. Image Processing, vol. 16, no. 9, pp. 2284-2298, 2007.
[15] M. Narwaria and W. Lin, "Scalable Image Quality Assessment based on Structural Vectors," in Proc. IEEE International Workshop on Multimedia Signal Processing (MMSP'09), Rio de Janeiro, Brazil, October 5-7, 2009.
[16] Z. Wang and A. C. Bovik, Modern Image Quality Assessment. Morgan & Claypool Publishers, 2006.
[17] Z. Wang and A. Bovik, "A universal image quality index," IEEE Signal Processing Letters, vol. 9, no. 3, pp. 81-84, March 2002.
