
A no-reference sharpness metric based on structured ringing for JPEG2000 images.

1. Introduction

Images are usually degraded by various factors such as defocusing and compression, so assessing image quality has become increasingly necessary. The most reliable approach to image quality assessment is subjective evaluation, commonly via the mean opinion score (MOS) method: subjects rate the images, and statistical processing of the ratings yields the MOS. However, subjective assessment is time-consuming, costly, and often impractical. Hence, there has been increasing interest from the research community and industry in developing objective assessment techniques.

Objective metrics can be divided into three categories: full reference (FR), reduced reference (RR), and no reference (NR) [1]. FR metrics utilize all information of the reference image, while RR metrics use only selected features of it. However, the reference image or its features are sometimes unavailable. NR metrics need no reference information, which makes them widely applicable but also the most challenging to design.

As data volumes grow rapidly, bandwidth limitations become critical, making image compression increasingly necessary. Different compression techniques introduce very different distortions. Techniques based on the discrete cosine transform (DCT) [2], for example, JPEG and MPEG, lead to blockiness, whereas JPEG2000 compression [3,4], which involves the wavelet transform [5], mainly introduces blurring and ringing artifacts [6]. The particular interest of this work is NR sharpness assessment for JPEG2000 compressed images.

The existing metrics for JPEG2000 images can be broadly classified into two categories. The first category comprises metrics for general sharpness, applicable to JPEG2000 images and Gaussian-blurred images alike. The second category consists of metrics particularly designed for JPEG2000 images, in which the ringing effect is taken into account. We first give an overview of the metrics in the first category. The metrics proposed in [7-11] evaluate phase coherence: exactly localized features such as step edges produce strong local phase coherence across scales and space in the complex wavelet domain, and blur destroys this coherence. The metrics proposed in [12-14] use kurtosis information. The metrics based on perceptual blur [6,15-17] take the edge spreading width to assess image sharpness. Saad et al. [18] developed a general-purpose NR image quality assessment method using a natural scene statistics (NSS) model.

The metrics in the second category exploit the characteristics of JPEG2000 images. Some are based on neural networks. The metric proposed in [19] uses the probabilities of the coefficients being nonzero in different subbands as features. Sazzad et al. [20] used pixel distortions and edge information. The metric proposed in [21] extracts local gradient distribution features, with the feature extraction built on calculating the degree of blur at edges. Sheikh et al. [22] used an NSS model to quantify the departure of an image from natural statistics and to predict its quality. The metric in [23] utilizes a PCA method to extract features at each edge pixel and calculates the probabilities of a given edge pixel being "distorted" or "undistorted".

Besides the network-based metrics, the second category contains metrics based on two-phase ringing estimation, which consists of ringing region detection followed by ringing annoyance estimation. Marziliano et al. [6] proposed FR and NR blurring metrics and an FR ringing metric for JPEG2000 images; the blurring metric measures the width of edges, and the ringing metric measures the oscillations around edges. Barland and Saadane [24] proposed an NR metric for JPEG2000 images that combines a blurring measure, a ringing measure, and an edge measure. Liu et al. [25] proposed an NR metric for perceived ringing artifacts in images. Oguz et al. [26] proposed a measure of visible ringing that captures the ringing artifacts around strong edges. The ringing measures in the Barland, Liu, and Oguz metrics are all derived from the activity of the ringing region, specifically, the local variance.

Some other works, such as [27, 28], mainly introduce filtering-based methods, for example, bilateral filter [29] and anisotropic diffusion [30], to conceal ringing artifacts. However, these methods do not aim at image quality assessment. The metric in [31] predicts several artifacts, such as blurring, noise, and ringing.

The existing general sharpness metrics do not take ringing effects into account, so they perform unsatisfactorily under moderate compression, where the ringing artifact is most visible. The existing training-based metrics usually extract a set of features for training and prediction, sometimes directly adopting existing sharpness metrics as features, which is computationally inefficient. In addition, they lack an explicit model of image sharpness, which hinders further research. As for the existing two-phase metrics, the structural properties of ringing are not taken into account; as a result, ringing, noise, and textures are often confused when estimating ringing annoyance. Furthermore, ringing region detection is not reliable unless the image has a simple scene and low degradation.

In this paper, a novel metric is proposed to evaluate the sharpness of JPEG2000 images. It mainly uses a ringing measure. To obtain the preliminary ringing map, we use anisotropic diffusion; the final ringing map is then derived by considering the structural properties of ringing. However, the ringing artifact may be concealed by extreme blur in highly compressed images, so a blurring measure based on a traditional method is used for compensation. The complementarity between the ringing measure and the blurring measure is studied, and the sharpness metric is derived from both.

The main contributions of this paper can be summarized as follows.

(i) We propose a new method to detect ringing artifacts. The method involves anisotropic diffusion and a refining phase that uses prior ringing structures and HVS characteristics.

(ii) An NR sharpness metric is proposed. The metric mainly depends on the ringing measure and uses a blurring measure for compensation when the blur is severe (in highly compressed images). We show that the ringing term is sufficiently monotonic with respect to the perceptual sharpness of JPEG2000 images as long as the blurring is not so severe that the ringing artifacts are concealed.

This paper is organized as follows. Section 2 describes the proposed metric. The experimental results are presented in Section 3, and we conclude in Section 4.

2. The Proposed Algorithm

JPEG2000 compression mainly introduces blurring and ringing artifacts. We find that the ringing term is sufficiently monotonic with respect to the perceptual sharpness of JPEG2000 images under moderate compression, but the ringing artifact may be concealed by extreme blur in highly compressed images (detailed in Section 2.3). Thus, our metric mainly depends on the ringing measure and uses a blurring measure for compensation when the blur is severe (in highly compressed images).

For the ringing measure, anisotropic diffusion is employed to obtain the preliminary ringing map. A refinement based on ringing properties is then applied to derive the final ringing map, and the ringing measure is obtained by a weighted summation. We then compute the blurring measure based on the perceptual blur metric [6]. The proposed metric mainly depends on the ringing measure and uses the blurring measure for compensation in the case of severe blur. A block diagram summarizing the computation of the proposed sharpness metric is given in Figure 1.

2.1. Ringing Measure. Anisotropic diffusion is used to extract the preliminary ringing map. A refinement based on the structural properties of ringing is then employed to derive the final ringing map, and the ringing measure is computed as a weighted summation over the final ringing map.

2.1.1. Preliminary Ringing Extraction with Anisotropic Diffusion. Anisotropic diffusion is applied to obtain the preliminary ringing map. By the nature of anisotropic diffusion, the ringing artifacts are largely filtered out while the edge structures are retained. The anisotropic diffusion model proposed by Perona and Malik [30] is adopted. It is formulated as

\[
\frac{\partial f}{\partial t} = \nabla \cdot \bigl(c(x,y,t)\,\nabla f\bigr), \quad (x,y,t) \in \mathbb{R}^2 \times (0,\infty),
\qquad
f(x,y,0) = f(x,y), \quad (x,y) \in \mathbb{R}^2,
\tag{1}
\]

where ∇ is the gradient operator, f(x, y, 0) = f(x, y) is the original image, f(x, y, t) is the evolving image at time t, and c(x, y, t) is the diffusion function formulated as

\[
c(x,y,t) = \exp\left(-\left(\frac{\|\nabla f\|}{K}\right)^{2}\right).
\tag{2}
\]

Considering the general range of gradient magnitudes in ringing regions, we set K = 20. It is kept constant, without adaptation over the iterations, as discussed in [32]. In fact, without adaptation, excessive smoothing may occur (see [32] for details), so the preliminary ringing map may exceed the actual amount of ringing. However, excessive smoothing tends to occur in highly blurred regions, because the gradient there is relatively small, leading to a large diffusion function (close to 1 according to (2)). Excessive smoothing causes more ringing artifacts to be detected, since the preliminary ringing map is obtained by subtracting the diffused image from the original one (see (4)). This results in a closer correlation of the proposed metric with human perception; indeed, with this setting the ringing measure correlates well with the HVS perception of ringing artifacts.

The image f(x, y) is used as the initial input f(x, y, 0), and the evolution proceeds as described in (1). As suggested in [30], a discrete scheme is adopted:

\[
f_{i+1} = f_i + \lambda \left.\frac{\partial f}{\partial t}\right|_{t=i}, \quad i \in \mathbb{N}^{+},\ i < N,
\tag{3}
\]

where i is the iteration index, λ is the step length, N is the total number of iterations, and ℕ⁺ is the set of natural numbers. The diffused image f_N is obtained as the output of the evolution. The preliminary ringing map d(x, y) is calculated as the difference between the original image and the diffused image:

\[
d(x,y) = f(x,y) - f_N(x,y).
\tag{4}
\]

Figure 2 shows an example of obtaining the preliminary ringing map.
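As an illustration, the preliminary ringing extraction of (1)-(4) can be sketched in Python with NumPy. This is a minimal sketch using the classic four-neighbor Perona-Malik discretization; the function name and the border handling are our own choices, not from the paper.

```python
import numpy as np

def perona_malik_ringing_map(f, K=20.0, lam=0.2, n_iter=8):
    """Preliminary ringing map d = f - f_N via Perona-Malik diffusion.

    Implements the discrete scheme (3) with the diffusion function (2),
    using the classic four-neighbor discretization of (1).
    """
    u = f.astype(np.float64).copy()
    for _ in range(n_iter):
        p = np.pad(u, 1, mode="edge")            # replicate borders
        d_n = p[:-2, 1:-1] - u                   # north difference
        d_s = p[2:, 1:-1] - u                    # south difference
        d_e = p[1:-1, 2:] - u                    # east difference
        d_w = p[1:-1, :-2] - u                   # west difference
        # diffusion function (2): large gradients (edges) diffuse little
        flux = sum(np.exp(-(d / K) ** 2) * d for d in (d_n, d_s, d_e, d_w))
        u = u + lam * flux                       # one evolution step (3)
    return f.astype(np.float64) - u              # preliminary map (4)
```

Because edges have gradients far above K = 20 they are barely diffused, while low-amplitude oscillations such as ringing are smoothed away and therefore survive in the difference map.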

2.1.2. Refinement Based on Morphological Operation. In the preliminary ringing map, we found that, besides the traditional ringing artifacts around strong edges, another type of ringing artifact exists. It appears at tiny structures in highly compressed images and is caused by the strong concealing effect at such structures. These artifacts generally present as tiny horizontal or vertical strips. Because they degrade the image quality, they can also be interpreted as ringing artifacts. In general, the traditional ringing artifacts around edges also consist of tiny horizontal and vertical strips. Figure 3(a) shows traditional ringing artifacts near strong edges, and Figure 3(d) shows the other "ringing" artifacts that appear at tiny structures under high compression.

The preliminary ringing map d(x, y) contains not only ringing but also some inherent image textures and noise. A refinement is therefore applied using the structural properties of ringing. The refinement employs a morphological opening operation. Generally, an opening can be understood as an "extracting" process: it extracts image structures that contain the structuring element (SE) used in it. This process is illustrated in Figure 3 with some typical samples.

Two SEs, a horizontal strip l and a vertical strip l′ (detailed subsequently), are used in the opening operation. The horizontal ringing map d_l(x, y) is extracted as

\[
d_l(x,y) = \bigl(\max(d, 0) \circ l\bigr)_{x,y},
\tag{5}
\]

where ∘ denotes the opening operator and max(·, 0) removes the negative entries of the preliminary ringing map. The opening is applied to the whole image (not to a single pixel), so the pixel coordinate appears as a subscript in the equation.

Similarly, the vertical ringing structures are extracted as

\[
d_{l'}(x,y) = \bigl(\max(d, 0) \circ l'\bigr)_{x,y}.
\tag{6}
\]

For each pixel (x, y), three cases exist: (1) it lies on a horizontal ringing artifact, so d_l(x, y) > 0 (the more visible the ringing artifact, the greater the value of d_l(x, y)); (2) it lies on a vertical ringing artifact, so d_{l′}(x, y) > 0; (3) no ringing artifact exists there, so d_l(x, y) = 0 and d_{l′}(x, y) = 0. Hence, the final ringing map containing ringing artifacts of both directions can be computed by taking the entrywise maximum:

\[
r(x,y) = \max\bigl(d_l(x,y),\, d_{l'}(x,y)\bigr).
\tag{7}
\]

Examples of the extracted ringing artifacts are shown in Figures 3(c), 3(f), 3(i), and 3(l).
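The refinement of (5)-(7) can be sketched with SciPy's grayscale opening. The SE sizes (5 × 7 and 7 × 5) follow the dimensions derived in the next paragraph; treating the flat SEs as `size` arguments to `grey_opening` is our interpretation, not the paper's exact implementation.

```python
import numpy as np
from scipy.ndimage import grey_opening

def refine_ringing_map(d):
    """Final ringing map r (eq. (7)) from the preliminary map d.

    Grayscale opening with a horizontal SE l and a vertical SE l'
    extracts the horizontal and vertical ringing strips (eqs. (5)-(6));
    the entrywise maximum merges both directions (eq. (7)).
    """
    d_pos = np.maximum(d, 0.0)                 # drop negative entries
    d_h = grey_opening(d_pos, size=(5, 7))     # horizontal SE l (assumed 5 x 7)
    d_v = grey_opening(d_pos, size=(7, 5))     # vertical SE l' (the transpose)
    return np.maximum(d_h, d_v)
```

An opening never exceeds its input, so r(x, y) ≤ max(d(x, y), 0) everywhere; regions without strip-like structure of at least the SE size are suppressed.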

Determining the SEs l and l′. The SEs l and l′ are used for extracting the horizontal and the vertical artifacts, respectively. Their widths serve as the "cut-off" periods of the ringing structures. The sensitivity of the human visual system (HVS) with respect to spatial frequency is taken into account. Evidence from grating and other experiments suggests that the HVS contains band-pass filters with a bandwidth of one octave [33]. The contrast sensitivity function (CSF) [34] shows a typical band-pass shape peaking at around 4 cycles per degree (cpd), with sensitivity dropping off on either side of the peak. According to the Rayleigh criterion [35], for an optical wavelength λ_w and a pupil diameter D, the limit of angular resolution θ of the HVS is

\[
\theta \approx \frac{1.22\,\lambda_w}{D}.
\tag{8}
\]

This is about 1.22 × 550 nm / 2.5 mm = 2.684 × 10⁻⁴ rad, corresponding to a frequency of about 60 cpd. However, the CSF shows that the contrast sensitivity at this frequency is so low that the HVS can hardly sense any signal. Thus, we set the cut-off spatial frequency to four times the limit resolution, giving θ ≈ 0.0011. The corresponding cut-off spatial period τ is

\[
\tau = \theta \eta,
\tag{9}
\]

where η is the viewing distance between the observer and the screen. For a typical image height of 700 pixels, the viewing distance is about η = 4200 pixels (six times the image height), so τ is about 4.62 pixels. The spatial period τ can be regarded as the characteristic width of the ringing period; we round it to τ = 5. It is used as the width of SE l, so that the opening operation extracts ringing structures whose spatial period is larger than it. The length of SE l should be greater than its width, and it is simply set to 7. By the nature of the morphological opening, choosing a longer length would decrease the amount of detected ringing artifacts. Similarly, SE l′ is used to extract the vertical ringing structures; hence, l′ is simply the transpose of l, that is, a uniform array of size 7 × 5.
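The numerical steps in (8)-(9) can be checked directly; the constants below are exactly those quoted in the text.

```python
lam_w = 550e-9                    # optical wavelength, 550 nm
D = 2.5e-3                        # pupil diameter, 2.5 mm
theta_limit = 1.22 * lam_w / D    # eq. (8): ~2.684e-4 rad (~60 cpd)
theta_cut = 0.0011                # four times the limit, rounded as in the text
eta = 4200                        # viewing distance in pixels (6 x image height)
tau = theta_cut * eta             # eq. (9): ~4.62 pixels, rounded up to 5
```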

2.1.3. The Ringing Measure R. The ringing measure is derived from a weighted integral of the ringing map r(x, y) as

\[
R = \iint \omega_R(x,y)\, r(x,y)\, dx\, dy,
\tag{10}
\]

where ω_R(x, y) is a weighting matrix involving two factors: a luminance contrast sensitivity weight, reflecting that the HVS is sensitive to contrast rather than absolute luminance, and a location weight, motivated by the HVS salience property that more attention is given to the center of an image. Let ω_c(x, y) and ω_l(x, y) be the contrast sensitivity weighting matrix and the location weighting matrix (detailed subsequently); then, the weight ω_R(x, y) is

\[
\omega_R(x,y) = \omega_c(x,y)\,\omega_l(x,y).
\tag{11}
\]

The product form is used because it allows each of the two subweights to act as a weight individually.

Hereafter in this subsection, we detail the two subweighting matrices ω_c(x, y) and ω_l(x, y). By Weber's law [36], the just noticeable luminance difference between two regions is approximately proportional to the background luminance. The ringing map r(x, y) can be regarded as the luminance difference caused by ringing structures, and the local average luminance can be regarded as the background luminance. The background luminance f_b(x, y) is derived by local average filtering; that is,

\[
f_b(x,y) = f(x,y) \otimes s(x,y),
\tag{12}
\]

where ⊗ is the convolution operator and s(x, y) is the filtering kernel. Specifically, s(x, y) is a disk patch with radius 5. With this radius, the kernel is large enough (with a spatial extent of 11) to cover two adjacent ringing structures, which makes it suitable for calculating the background luminance. A larger radius would also work but is computationally less efficient.

From Weber's law, r(x, y)/f_b(x, y) is the visibility index of ringing artifacts at (x, y). Considering this, as well as the forms in (10) and (11), we adopt the normalized reciprocal of the background luminance as the luminance contrast weight ω_c(x, y); that is,

\[
\omega_c(x,y) = \frac{1}{Z}\, f_b^{-1}(x,y),
\tag{13}
\]

where Z = ∬ f_b⁻¹(x, y) dx dy is the normalization factor.

For the location weight ω_l(x, y), the HVS salience property is taken into account. Specifically, ω_l(x, y) is formulated as a 2D Gaussian function,

\[
\omega_l(x,y) = \exp\left(-\frac{(x-x_0)^2}{2\sigma_x^2} - \frac{(y-y_0)^2}{2\sigma_y^2}\right),
\tag{14}
\]

where (x₀, y₀) is the image center and σ_x and σ_y are set to one-sixth of the image width and height, respectively. This ensures that the principal part ([-3σ, 3σ]) of the weighting matrix lies exactly within the image domain.
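A discrete version of (10)-(14) can be sketched as follows. The continuous integral becomes a weighted sum; the epsilon guard on the background luminance and the grid conventions are our additions for numerical safety.

```python
import numpy as np
from scipy.ndimage import convolve

def ringing_measure(f, r):
    """Ringing measure R (eq. (10)) as a weighted sum over the ringing map r."""
    f = f.astype(np.float64)
    h, w = f.shape
    # background luminance (eq. (12)): average over a disk of radius 5
    yy, xx = np.mgrid[-5:6, -5:6]
    disk = (xx ** 2 + yy ** 2 <= 25).astype(np.float64)
    disk /= disk.sum()
    fb = convolve(f, disk, mode="nearest")
    # contrast weight (eq. (13)): normalized reciprocal of background luminance
    wc = 1.0 / np.maximum(fb, 1e-6)            # epsilon guard (our addition)
    wc /= wc.sum()
    # location weight (eq. (14)): 2D Gaussian centered on the image,
    # with sigma one-sixth of the width/height
    ys, xs = np.mgrid[0:h, 0:w]
    x0, y0 = (w - 1) / 2.0, (h - 1) / 2.0
    sx, sy = w / 6.0, h / 6.0
    wl = np.exp(-((xs - x0) ** 2) / (2 * sx ** 2)
                - ((ys - y0) ** 2) / (2 * sy ** 2))
    return float(np.sum(wc * wl * r))          # eq. (10) as a sum
```

On a constant-luminance image the contrast weight is uniform, so the same ringing energy counts more near the image center than near a corner, as intended by the salience weighting.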

2.2. Blurring Measure B. A blurring measure based on perceptual blur [6] is employed, with two modifications: the edge profile is detected along the edge normal instead of the horizontal or vertical direction, and instead of the whole edge transition we use only the middle section whose absolute derivative is larger than a proportion (one half) of that at the corresponding edge pixel. The blurring measure B is computed by averaging the spreading widths at all edges (see [6] for details).
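As a rough illustration, the sketch below scans rows rather than tracing edge normals (a simplification of the method above) and keeps, around each edge pixel, only the section whose |derivative| stays above half of the derivative at that pixel. The gradient threshold is a hypothetical parameter for this sketch, not a value from the paper.

```python
import numpy as np

def blurring_measure(f, grad_thresh=20.0):
    """Average edge-spread width in the spirit of [6] (simplified).

    For each row, every pixel whose horizontal |derivative| exceeds
    grad_thresh is treated as an edge pixel; the transition width is the
    extent of the surrounding section whose |derivative| stays above half
    the derivative at that edge pixel.
    """
    g = np.abs(np.diff(f.astype(np.float64), axis=1))
    widths = []
    for row in g:
        for x in np.where(row > grad_thresh)[0]:
            half = row[x] / 2.0
            lo = x
            while lo > 0 and row[lo - 1] >= half:   # extend left
                lo -= 1
            hi = x
            while hi < row.size - 1 and row[hi + 1] >= half:  # extend right
                hi += 1
            widths.append(hi - lo + 1)
    return float(np.mean(widths)) if widths else 0.0
```

A sharp step yields a width of one pixel, while a ramped (blurred) edge yields the ramp length, so B grows with blur severity.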

2.3. The Proposed Sharpness Metric. Blurring and ringing artifacts usually appear simultaneously in JPEG2000 compressed images, and we found that the ringing measure alone can evaluate the sharpness well unless the compression is extremely high. Under extremely high compression, ringing artifacts are concealed by blurring (see Figure 3(g) for an example), and the blurring measure is needed for compensation. For conciseness, we refer to these two cases as ring dominating and blur dominating, respectively. We take the blurring measure B as the criterion to distinguish one case from the other: B rises rapidly in the blur-dominating case, whereas it is unreliable in the ring-dominating case (normal compression) due to the disturbance of ringing artifacts. Let ω(B) be a general term with respect to the blurring measure B. The proposed metric M is generally expressed as follows:

\[
M = \bigl(1 - \phi(B)\bigr)\, R + \phi(B)\,\omega(B),
\tag{15}
\]

where 0 < φ(B) < 1 is an indicating function (described subsequently) with respect to B. A psychometric function, specifically a saturated exponential, is employed for φ(B):

\[
\phi(B) = 1 - \exp\left(-\frac{1}{2}\left(\frac{B}{B_{th}}\right)^{\beta}\right),
\tag{16}
\]

where β is an exponent that controls the steepness of φ as B crosses B_th. The function φ can be regarded as a soft Heaviside step indicating the blur-dominating case (ringing concealed by blurring). A sample of φ(B) is shown in Figure 4.

In the case of normal compression, the proposed metric depends almost entirely on the ringing measure R. For extremely compressed images, the metric depends on the second term. Considering that HVS perception spans a limited range, we set ω(B) to a constant ω, that is, ω(B) = ω (so the blurring term is a scaled psychometric function). This constant is used for compensation under extreme blur. The constant ω and the threshold B_th are determined in a small-scale (twenty-image) experiment described in the next section.
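With the constant compensation term ω(B) = ω and the parameter values given in Section 3 (B_th = 8, β = 4, ω = 18) as defaults, equations (15)-(16) reduce to a few lines:

```python
import math

def sharpness_metric(R, B, B_th=8.0, beta=4.0, omega=18.0):
    """Proposed metric M (eq. (15)) with the indicating function phi (eq. (16))."""
    phi = 1.0 - math.exp(-0.5 * (B / B_th) ** beta)   # soft step in B
    return (1.0 - phi) * R + phi * omega
```

For mild blur (B well below B_th), φ is nearly 0 and M tracks the ringing measure R; once B exceeds B_th, φ saturates toward 1 and M falls back to the constant ω.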

3. Performance Results

In this section, the performance of the proposed metric is tested on three databases. Additionally, two auxiliary experiments are conducted to show the properties of the ringing measure and the blurring measure individually.

In the anisotropic diffusion, the step parameter λ should lie in [0, 1/4] for stability [30], and we set λ = 0.2 for speed. With the specified K in the diffusion function and this λ, it is appropriate to set the total iteration number N in the interval [5, 10]: in this interval, the ringing artifacts are generally removed while the image structures are preserved. We set N = 8; the exact value is not critical as long as it lies in this interval. The exponent β in the indicating function (16) is set to 4, as is usual for a psychometric function.

The constant ω and the threshold B_th are determined in a twenty-image experiment using the first twenty images of the CSIQ database [40]. Note that these twenty images are independent in terms of scene content and are not used in the performance tests. Based on this small-scale experiment, the constant ω and the threshold B_th are set to 18 and 8, respectively.

3.1. Testing Set. The databases involved are LIVE [41], TID2008 [42], and CSIQ [40], all widely used in image quality assessment research.

The LIVE database consists of 29 reference images distorted by five different distortion types: JPEG2000, JPEG, Gaussian blur of the RGB components, white noise in the RGB components, and bit errors in the JPEG2000 bitstream when transmitted over a simulated fast-fading Rayleigh channel. The database contains 227 JPEG2000 images. Each image was rated by about 20-29 subjects on a continuous linear scale divided into five regions labeled "bad", "poor", "fair", "good", and "excellent".

The TID2008 database consists of 25 reference images and 1700 distorted images covering 17 types of distortion. The JPEG2000 subdatabase contains 100 images. The subjective tests were conducted in a pair-comparison manner: a reference image at the bottom and a pair of distorted images were presented simultaneously, and the subjects selected the distorted image that differed less from the reference. The subjects were instructed and trained on a set of distorted images before the actual experiments, which were carried out by a total of 838 observers from three countries.

The CSIQ database consists of 30 reference images distorted by six types of distortion at four or five levels: JPEG compression, JPEG2000 compression, global contrast decrements, additive pink Gaussian noise, additive white Gaussian noise, and Gaussian blurring. It contains 150 JPEG2000 images. CSIQ images were subjectively rated based on a linear displacement of the images across four calibrated LCD monitors placed side by side at equal viewing distance from the observer. All distorted versions of an original image were viewed simultaneously on the monitor array and positioned relative to one another according to overall quality. Across-image ratings were realigned according to a separate experiment in which observers placed subsets of all the images linearly in space. The database contains 5000 subjective ratings from 25 observers, reported in the form of DMOS.

These databases were chosen for the diversity of their subjective evaluation procedures: the distorted images were presented singly, pairwise, and as image sets for the LIVE, TID2008, and CSIQ databases, respectively. In our experiments, the JPEG2000 subsets of these databases were used.

3.2. Correlations for Comparison. To evaluate the correlations between the objective metrics and the MOS of the used databases, we follow the suggestions of the VQEG report [43]. First, a 4-parameter logistic fitting between the objective and subjective scores is adopted,

\[
\mathrm{MOS}_i = \frac{\beta_1 - \beta_2}{1 + \exp\bigl((M_i - \beta_3)/|\beta_4|\bigr)} + \beta_2,
\tag{17}
\]

where β₁, β₂, β₃, and β₄ are model parameters and MOS_i and M_i denote the subjective and objective scores of the ith image, respectively.

The parameters are obtained by optimizing the fit. Figure 5 shows a sample fitting curve of the proposed metric on the CSIQ JPEG2000 images. The predicted MOS are derived from the fitted parameters and used to evaluate the performance of the metrics. As suggested in [43], the Spearman correlation coefficient (SPCC), Pearson correlation coefficient (PCC), root mean squared error (RMSE), mean absolute prediction error (MAE), and outlier ratio (OR) are used. A good metric corresponds to high SPCC and PCC but low RMSE, MAE, and OR.
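The 4-parameter logistic mapping of (17) can be fitted with SciPy's `curve_fit`; the synthetic scores below are purely illustrative, not data from the databases.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(M, b1, b2, b3, b4):
    """Eq. (17): map objective scores M to predicted MOS."""
    return (b1 - b2) / (1.0 + np.exp((M - b3) / abs(b4))) + b2

# illustrative data: scores generated from known parameters
M_obj = np.linspace(0.0, 10.0, 50)
mos = logistic4(M_obj, 1.0, 5.0, 5.0, 1.0)

# fit the model; the predicted MOS then feed the SPCC/PCC/RMSE/MAE/OR computation
params, _ = curve_fit(logistic4, M_obj, mos, p0=[1.0, 5.0, 5.0, 1.0])
mos_pred = logistic4(M_obj, *params)
```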

3.3. Performance Result for the Proposed Metric. Tables 1, 2, and 3 show the performance of the proposed metric as well as some leading metrics such as CPBD metric [17], JNBM metric [15], local kurtosis based metric (LKM) [13], local phase coherence metric (LPC) [8], Marziliano metric [6], Laplacian metric [37], Marichal metric [38], Shaked-Tastl metric [39], BLIINDS-II metric [18], Liu metric [25], Barland metric [24], and FR metric PSNR.

It can be seen from Tables 1-3 that the proposed metric performs slightly better than or competitively with the CPBD and BLIINDS-II metrics and is significantly superior to the others. It is slightly inferior to the CPBD metric on the TID2008 database but considerably outperforms it on LIVE and CSIQ. The proposed metric and the BLIINDS-II metric perform almost identically. Note that BLIINDS-II is a general-purpose image quality metric not limited to JPEG2000 degradation, and it is indeed a commendable one. However, it is computationally very expensive: it extracts many features, such as multiscale and multiorientation features in the DCT domain within a sliding window, and works in a neural network-based framework. Assessing the JPEG2000 subset of one database (testing only) takes more than ten hours, whereas the proposed metric and the other metrics need only a few minutes on the same computer with MATLAB. The proposed metric, designed for JPEG2000 images, is highly reliable on all three databases: almost all of its SPCC and PCC values exceed 0.9, the only exception being the SPCC on the CSIQ database, which is still very close to 0.9.

Except for the Liu and Barland metrics, none of the existing metrics exploits the ringing effect explicitly, although they are claimed to be suitable for JPEG2000 images. The Liu and Barland metrics explicitly introduce a ringing measure, but they do not take the structural properties of ringing into account: their ringing measures are derived from the activity of the ringing region, specifically the local variance, so they are likely to confuse structures, noise, and textures. The performance of these two metrics is not satisfactory, especially on the LIVE database. Note, however, that the Liu metric targets ringing annoyance assessment rather than sharpness (or quality) assessment, so a direct comparison is not entirely fair to it. Nevertheless, the experimental results show that metrics based on ringing region detection and activity-based ringing annoyance measures are not robust.

Some metrics, such as the JNBM and LPC metrics, do not take ringing into account, so their performance is not robust. It can be seen from Table 1 that these metrics perform unsatisfactorily on the LIVE database. This is because the JPEG2000 images in the LIVE database are almost all in the ring-dominating case (not extremely compressed), which is further validated later in this section.

3.4. Auxiliary Experiments for Individual Measures. Two auxiliary experiments were conducted to test the individual components, the blurring measure B and the ringing measure R, and to demonstrate their characteristics.

The results of the auxiliary experiments are shown in Figure 6, which gives dot plots of the blurring measure, the ringing measure, and the proposed sharpness metric. The blurring and ringing measures are plotted on the y-axis against MOS on the x-axis, which conveniently shows their behavior with respect to MOS. Under normal compression, the ringing measure is adequate for sharpness assessment (see the leftmost plot in Figure 6(b)); the blurring measure is not reliable in this range due to the disturbance of the ringing artifacts. For extremely compressed images, however, the blurring is so severe that the ringing is concealed: the ringing measure decreases (see the two plots on the right in Figure 6(b)), and the blurring measure is used for compensation. The blurring measure B rises so rapidly once it exceeds about B_th that it can be used reliably as the indicating parameter.

It is shown in Figure 6(c) that the sharpness metric derived from B and R combines the advantages of both. The ringing measure R performs well in the ring-dominating case but goes backwards in the blur-dominating case. Fortunately, the blurring measure B can indicate which case is likely, because its value is quite large in the blur-dominating case. Compensated by the blurring measure, the proposed metric becomes much more monotonic.

Note that the ringing measure R by itself performs well on the LIVE database (see the leftmost plot of Figure 6(b)), because all images in that database are under normal compression. This validates the accuracy of the proposed ringing measure.

Comparison with the Blurring Metric in [6]. The proposed metric adopts the modified blurring metric of [6] as a secondary parameter, used in the indicating function; the proposed ringing measure dominates whenever the blurring B < B_th ≈ 8 (a considerably wide range). We compare the proposed metric with the blurring metric in [6]. The performance is listed in Tables 1-3, and the dot plots of the metric in [6] are shown in Figure 7 (those of the proposed metric are in Figure 6(c)). Comparing Figure 6(c) and Figure 7 (noting that in Figure 7 the metrics are on the x-axis rather than the y-axis as in Figure 6), the proposed metric is much more monotonic than the metric in [6], which mainly profits from the monotonicity of the proposed ringing measure in the ring-dominating case.

4. Conclusions

An NR image sharpness metric for JPEG2000 compressed images has been proposed in this paper. The metric mainly relies on a structured ringing measure; in the case of extreme blurring, a blurring measure is used for compensation. One major contribution of this paper is the ringing detection, which involves anisotropic diffusion and a refining phase that exploits the prior ringing structures and HVS characteristics. We show that the ringing term is sufficiently monotonic with the perceptual sharpness (quality) of JPEG2000 images as long as the blurring is not so severe that the ringing artifacts are concealed. The ringing detection method itself is quite effective (see Figure 3): highly visible ringing artifacts are detected while small image structures (and noise) are discarded.
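The anisotropic diffusion step mentioned above follows the classical Perona-Malik scheme [30]; a minimal sketch is given below. The parameters (`n_iter`, `kappa`, `dt`) are illustrative rather than the paper's settings, the gradient-threshold estimation of [32] is not reproduced, and the periodic boundary handling via `np.roll` is a simplification.

```python
import numpy as np


def perona_malik(img, n_iter=10, kappa=15.0, dt=0.2):
    """Standard Perona-Malik anisotropic diffusion [30].

    Smooths homogeneous regions while preserving strong edges, which
    is what helps separate ringing oscillations from true structure.
    dt <= 0.25 keeps the 4-neighbour explicit scheme stable.
    """
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbours
        # (np.roll wraps at the borders -- a simplification).
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Exponential conduction coefficient g(s) = exp(-(s/kappa)^2):
        # near 1 in flat areas (strong smoothing), near 0 across edges.
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```

In a ringing-detection pipeline of this kind, the diffused image serves as a reference: subtracting it from the original isolates the small oscillations near edges that the diffusion has smoothed away.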

The proposed metric is tested on three widely used databases, together with quite a few leading existing metrics for comparison. The experimental results show that the proposed metric is superior, or at least competitive, to the existing metrics.

One future direction is to employ a salience measure to directly replace the location weight in the proposed ringing measure, so that artifacts in the background can be distinguished more clearly.

http://dx.doi.org/10.1155/2014/295615

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] L. J. Karam, T. Ebrahimi, S. S. Hemami et al., "Introduction to the issue on visual media quality assessment," IEEE Journal on Selected Topics in Signal Processing, vol. 3, no. 2, pp. 189-192, 2009.

[2] N. Ahmed, T. Natarajan, and K. R. Rao, "Discrete cosine transform," IEEE Transactions on Computers, vol. C-23, no. 1, pp. 90-93, 1974.

[3] M. Rabbani and R. Joshi, "An overview of the JPEG 2000 still image compression standard," Signal Processing: Image Communication, vol. 17, no. 1, pp. 3-48, 2002.

[4] D. S. Taubman and M. W. Marcellin, JPEG2000: Image Compression Fundamentals, Standards and Practice, Kluwer Academic, New York, NY, USA, 2002.

[5] S. G. Mallat, "Multifrequency channel decompositions of images and wavelet models," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 37, no. 12, pp. 2091-2110, 1989.

[6] P. Marziliano, F. Dufaux, S. Winkler, and T. Ebrahimi, "Perceptual blur and ringing metrics: application to JPEG2000," Signal Processing: Image Communication, vol. 19, no. 2, pp. 163-172, 2004.

[7] Z. Wang and E. P. Simoncelli, "Local phase coherence and the perception of blur," in Proceedings of the Advances in Neural Information Processing Systems Conferences, vol. 16, pp. 786-792, Vancouver, Canada, 2004.

[8] R. Hassen, Z. Wang, and M. Salama, "No-reference image sharpness assessment based on local phase coherence measurement," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '10), pp. 2434-2437, March 2010.

[9] R. Hassen, Z. Wang, and M. Salama, "A flexible framework for local phase coherence computation," in Proceedings of the 8th International Conference on Image Analysis and Recognition (ICIAR '11), pp. 40-49, Burnaby, Canada, 2011.

[10] G. Blanchet, L. Moisan, and B. Rouge, "Measuring the global phase coherence of an image," in Proceedings of the IEEE International Conference on Image Processing (ICIP '08), pp. 1176-1179, October 2008.

[11] G. Blanchet and L. Moisan, "An explicit sharpness index related to global phase coherence," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '12), pp. 1065-1068, March 2012.

[12] J. Caviedes and S. Gurbuz, "No-reference sharpness metric based on local edge kurtosis," in Proceedings of the International Conference on Image Processing (ICIP '02), vol. 3, pp. 53-56, September 2002.

[13] J. Caviedes and F. Oberti, "A new sharpness metric based on local kurtosis, edge and energy information," Signal Processing: Image Communication, vol. 19, no. 2, pp. 147-161, 2004.

[14] R. Ferzli, L. J. Karam, and J. Caviedes, "A robust image sharpness metric based on kurtosis measurement of wavelet coefficients," in Proceedings of the 1st International Workshop on Video Processing and Quality Metrics for Consumer Electronics, 2005.

[15] R. Ferzli and L. J. Karam, "A no-reference objective image sharpness metric based on the notion of Just Noticeable Blur (JNB)," IEEE Transactions on Image Processing, vol. 18, no. 4, pp. 717-728, 2009.

[16] E. P. Ong, W. S. Lin, Z. K. Lu, S. S. Yao, X. K. Yang, and L. F. Jiang, "A No-reference quality metric for measuring image blur," in Proceedings of the 7th International Symposium on Signal Processing and Its Applications, pp. 469-472, 2003.

[17] N. D. Narvekar and L. J. Karam, "A No-reference image blur metric based on the cumulative probability of blur detection (CPBD)," IEEE Transactions on Image Processing, vol. 20, no. 9, pp. 2678-2683, 2011.

[18] M. A. Saad, A. C. Bovik, and C. Charrier, "Blind image quality assessment: a natural scene statistics approach in the DCT domain," IEEE Transactions on Image Processing, vol. 21, no. 8, pp. 3339-3352, 2012.

[19] H. R. Sheikh, Z. Wang, L. Cormack, and A. C. Bovik, "Blind quality assessment for JPEG2000 compressed images," in Proceedings of the 36th Asilomar Conference on Signals Systems and Computers, vol. 2, pp. 1735-1739, Pacific Grove, Calif, USA, November 2002.

[20] Z. M. P. Sazzad, Y. Kawayoke, and Y. Horita, "Spatial features based no reference image quality assessment for JPEG2000," in Proceedings of the 14th IEEE International Conference on Image Processing (ICIP '07), vol. 3, pp. 517-520, September 2007.

[21] H. Liu, J. Redi, H. Alers, R. Zunino, and I. Heynderickx, "No-reference image quality assessment based on localized gradient statistics: application to JPEG and JPEG2000," in Proceedings of the SPIE-IS&T Electronic Imaging, vol. 7527, pp. 1F-1-1F-9, January 2010.

[22] H. R. Sheikh, A. C. Bovik, and L. Cormack, "No-reference quality assessment using natural scene statistics: JPEG2000," IEEE Transactions on Image Processing, vol. 14, no. 11, pp. 1918-1927, 2005.

[23] H. Tong, M. Li, H. Zhang, and C. Zhang, "No-reference quality assessment for JPEG2000 compressed images," in Proceedings of the International Conference on Image Processing (ICIP '04), pp. 3539-3542, Singapore, October 2004.

[24] R. Barland and A. Saadane, "Reference free quality metric for JPEG-2000 compressed images," in Proceedings of the 8th International Symposium on Signal Processing and its Applications (ISSPA '05), pp. 351-354, August 2005.

[25] H. Liu, N. Klomp, and I. Heynderickx, "A no-reference metric for perceived ringing artifacts in images," IEEE Transactions on Circuits and Systems for Video Technology, vol. 20, no. 4, pp. 529-539, 2010.

[26] S. H. Oguz, Y. H. Hu, and T. Q. Nguyen, "Image coding ringing artifact reduction using morphological post-filtering," in Proceedings of the IEEE 2nd Workshop on Multimedia Signal Processing, pp. 628-633, 1998.

[27] S. Ye, Q. Sun, and E. Chang, "Edge directed filter based error concealment for wavelet-based images," in Proceedings of the International Conference on Image Processing (ICIP '04), vol. 2, pp. 809-812, October 2004.

[28] V. Khryashchev, I. Apalkov, and L. Shmaglit, "A novel smart bilateral filter for ringing artifacts removal in JPEG2000 images," in Proceedings of the 20th International Conference on Computer Graphics and Vision (GraphiCon '10), pp. 122-128, St. Petersburg, Russia, September 2010.

[29] C. Tomasi and R. Manduchi, "Bilateral filtering for gray and color images," in Proceedings of the 1998 IEEE 6th International Conference on Computer Vision, pp. 839-846, January 1998.

[30] P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629-639, 1990.

[31] X. Li, "Blind image quality assessment," in Proceedings of the International Conference on Image Processing (ICIP '02), vol. 1, pp. 449-452, September 2002.

[32] F. Voci, S. Eiho, N. Sugimoto, and H. Sekiguchi, "Estimating the gradient threshold in the Perona-Malik equation," IEEE Signal Processing Magazine, vol. 21, no. 3, pp. 39-65, 2004.

[33] L. A. Olzak and J. P. Thomas, "Seeing spatial patterns," in Handbook of Perception and Human Performance, K. Boff, L. Kaufman, and J. Thomas, Eds., Wiley, New York, NY, USA, 1986.

[34] F. W. Campbell and J. G. Robson, "Application of Fourier analysis to the visibility of gratings," Journal of Physiology, vol. 197, no. 3, pp. 551-566, 1968.

[35] L. S. Pedrotti, "Basic physical optics," in Fundamentals of Photonics, B. E. A. Saleh and M. C. Teich, Eds., pp. 152-154, John Wiley & Sons, Hoboken, NJ, USA, 2001.

[36] E. H. Weber, D. J. Murray, and H. E. Ross, E. H. Weber on the Tactile Senses, Erlbaum (UK) Taylor & Francis, Hove, UK, 2nd edition, 1996.

[37] C. F. Batten, Autofocusing and astigmatism correction in the scanning electron microscope [M. Phil. thesis], University of Cambridge, Cambridge, UK, 2000.

[38] X. Marichal, W. Ma, and H. Zhang, "Blur determination in the compressed domain using DCT information," in Proceedings of the International Conference on Image Processing (ICIP '99), vol. 2, pp. 386-390, October 1999.

[39] D. Shaked and I. Tastl, "Sharpness measure: towards automatic image enhancement," in Proceedings of the IEEE International Conference on Image Processing (ICIP '05), vol. 1, pp. 937-940, September 2005.

[40] E. C. Larson and D. M. Chandler, "Most apparent distortion: full-reference image quality assessment and the role of strategy," Journal of Electronic Imaging, vol. 19, no. 1, Article ID 011006, pp. 1-21, 2010.

[41] H. R. Sheikh, A. C. Bovik, L. Cormack, and Z. Wang, "LIVE image quality assessment database," 2003, http://live.ece.utexas.edu/research/quality.

[42] N. Ponomarenko, V. Lukin, K. Egiazarian, J. Astola, M. Carli, and F. Battisti, "Color image database for evaluation of image quality metrics," in Proceedings of the IEEE 10th Workshop on Multimedia Signal Processing (MMSP '08), pp. 403-408, October 2008.

[43] VQEG, "Final Report From the Video Quality Experts Group on the Validation of Objective Models of Video Quality Assessment," 2000, http://www.vqeg.org/.

Zhipeng Cao, Zhenzhong Wei, and Guangjun Zhang

Key Laboratory of Precise Opto-Mechatronics Technology, Ministry of Education, School of Instrumentation Science and Opto-Electronics Engineering, Beihang University, Beijing 100191, China

Correspondence should be addressed to Zhenzhong Wei; zhenzhongwei@buaa.edu.cn

Received 8 January 2014; Revised 4 June 2014; Accepted 5 June 2014; Published 24 June 2014

Academic Editor: Michael A. Fiddy

Table 1: Evaluation of the proposed metric for the LIVE database.

                         SPCC     PCC      RMSE       MAE       OR

CPBD metric [17]        0.8605   0.8505   12.947    10.2715   3.0837
JNBM metric [15]        0.6558   0.6337   24.6062   21.1012     0
LKM [13]                0.8173   0.8122   14.3599   11.3994   4.4053
LPC [8]                 0.3474   0.3797   22.7693   19.1612   0.4404
Marziliano metric [6]   0.7118   0.7256   16.9371   13.1940   3.5242
Laplacian [37]          0.6034   0.5943   19.7948   16.0403   2.6432
Marichal [38]           0.6284   0.6399   18.9148   15.3166   2.6432
Shaked-Tastl [39]       0.4837   0.5754   20.1308   16.5230   1.7621
BLIINDS-II [18]         0.8920   0.8862   11.4050   8.4898    4.8458
Liu metric [25]         0.1679   0.1661   24.5791   21.0734     0
Barland metric [24]     0.2363   0.2456   23.8595   20.5206     0
PSNR                    0.8318   0.8267   13.8477   11.4713   4.4053
Proposed metric         0.9042   0.9060   10.4212   8.5200    3.9648

Table 2: Evaluation of the proposed metric for the TID2008 database.

                         SPCC     PCC      RMSE       MAE       OR

CPBD metric [17]        0.9286   0.9262   0.7509    0.5777      4
JNBM metric [15]        0.8258   0.831     1.108    0.8346      5
LKM [13]                0.6895   0.7368   1.3466    0.9820      5
LPC [8]                 0.7660   0.8096   1.1691    0.9115      4
Marziliano [6]          0.8784   0.8745   0.9661    0.7037      7
Laplacian [37]          0.8572   0.8604   1.0151    0.8238      3
Marichal [38]           0.8908   0.8927   0.8975    0.7323      3
Shaked-Tastl [39]       0.8223   0.8001   1.1948    0.9180      4
BLIINDS-II [18]         0.8768   0.9024   0.8582    0.6467      6
Liu metric [25]         0.5801   0.7031   1.4162    1.1169      4
Barland metric [24]     0.7078   0.6293   1.5563    1.1198      0
PSNR                    0.7936   0.8176   1.1466    0.9472      1
Proposed metric         0.9160   0.9147   0.8053    0.6116      5

Table 3: Evaluation of the proposed metric for the CSIQ database.

                         SPCC     PCC      RMSE       MAE       OR

CPBD metric [17]        0.8533   0.8799   0.1522    0.1132    6.6667
JNBM metric [15]        0.7462   0.7886   0.1969    0.1506      4
LKM [13]                0.8096   0.8612   0.1628    0.1269      4
LPC [8]                 0.6611   0.7717   0.2037    0.1603    5.3333
Marziliano [6]          0.7854   0.8264   0.1804    0.1339    6.6667
Laplacian [37]          0.7921   0.8194   0.1836    0.1398      6
Marichal [38]           0.7760   0.8330   0.1772    0.1341    5.3333
Shaked-Tastl [39]       0.7103   0.7875   0.1947    0.1541      6
BLIINDS-II [18]         0.8951   0.9145   0.1296    0.0907    4.6667
Liu metric [25]         0.3608   0.6862    0.233    0.1911      2
Barland metric [24]     0.5706   0.7018   0.2282    0.1766      4
PSNR                    0.8961   0.8972   0.1414    0.0993    9.3333
Proposed metric         0.8936   0.9190   0.1263    0.0933    4.6667
COPYRIGHT 2014 Hindawi Limited

Publication: Advances in Optical Technologies, 2014.