
Generic image quality assessment of portable fundus camera photographs.


In retinal photography, early ophthalmological detection prevents both vision loss and the consequences of untreated eye disease. Non-mydriatic ocular fundus photography is a promising solution, especially for retinal disease when combined with telemedicine, because it does not require pupil dilation and can be performed with a portable fundus camera. Portable digital fundus photography differs from conventional fundus photography in that the camera is held in the operator's hands rather than mounted on a permanent fixture. Such operating conditions, however, are vulnerable to problems with digital retinal image quality, such as uneven luminance, fluctuations in focus, and patient movement. Hence, evaluating the image quality of portable fundus camera imaging systems is of great importance. The evaluation of fundus image quality involves a computer-aided retinal image analysis system designed to assist ophthalmologists in detecting eye diseases such as age-related macular degeneration, glaucoma, and diabetic retinopathy. Objective quality evaluation of fundus images, which plays a major role in automatically selecting diagnostically usable fundus images among the outputs of digital fundus photography, is a descendant of subjective quality evaluation. Subjective quality evaluation is performed by qualified ophthalmologists, who grade the quality of fundus images by comparing them with excellent-quality images on the basis of their prior knowledge of excellent image quality. Such prior knowledge is acquired either from the human visual system (HVS), a complex biological system [8], or from technical training in ophthalmic analysis. Based on this prior knowledge, ophthalmological experts can grade fundus image quality with confidence; however, their subjective evaluation is laborious in practice, as well as expensive and time-consuming.

Research related to the objective assessment of fundus image quality has been conducted for decades. Among the methods proposed in these studies, generic feature-based methods [1]-[3] deal with global distortions such as uneven illumination, blurring from defocus, and low contrast. Lee and Wang [4] presented an explicit template, mathematically approximated by a Gaussian model, to extract images of the desired quality from a set of images; the convolution of the template with the intensity histogram of a retinal image was computed as the generic quality. Fasih et al. [5] developed a generic retinal image quality estimation system that combined just noticeable blur (JNB), an HVS characteristic [6], with texture features [7].

This work applies low-level HVS characteristics to generic quality assessment and proposes an integrated HVS-based generic quality assessment algorithm as a starting point. Generic quality involves three parameters: illumination/contrast, blur, and color distortion [20]. Three low-level techniques, namely maximum likelihood estimation, blind deconvolution, and linear transformation, were employed to evaluate these three parameters, respectively.

The rest of the paper describes the proposed algorithm, presents the results of testing it on proprietary and public datasets, and compares these results with ophthalmologists' subjective evaluations.

Proposed Methodology:

The proposed method is based on the human visual system (HVS) and three of its characteristics, addressed by maximum likelihood estimation, blind deconvolution, and linear transformation. Its performance is improved compared with traditional methods. The input is the retinal image under test.


* The major methodological steps presented here are outlined in the flowchart.

* The irrelevant background was removed in the preprocessing step using Canny edge detection.

* The proposed HVS-based algorithm focuses on three sub-models: the maximum likelihood estimation method, blind deconvolution, and dehazing by linear transformation.

* The machine learning step was devoted to evaluating the algorithm's capability for binary classification of images, performed with a support vector machine (SVM).

A. Preprocessing:

Preprocessing was designed to trim redundant background from the original images. To obtain the trimming mask, we combine boundary detection and background thresholding: Canny edge detection finds the furthest edge inside the foreground area, and a circular mask is drawn with its center at the image center and its radius at that furthest edge. The background area is then cropped and removed by this circular mask.
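The preprocessing step can be sketched as follows. This is a minimal illustration rather than the authors' implementation: a plain intensity threshold stands in for the Canny boundary detection, and the `bg_thresh` value is an assumed placeholder.

```python
import numpy as np

def crop_background(img, bg_thresh=0.05):
    """Trim redundant background with a circular mask centred on the
    image centre.  The radius is taken from the furthest foreground
    (above-threshold) pixel; everything outside the circle is zeroed."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.nonzero(img > bg_thresh)              # foreground pixels
    radius = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2).max()
    yy, xx = np.mgrid[0:h, 0:w]
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return img * mask, mask
```

Applying the mask leaves the retinal disc untouched while the corners of the frame are set to zero.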

B. Maximum likelihood estimation method (MLE):

The maximum likelihood estimation method was employed for non-uniform illumination and contrast. Non-uniform illumination affects the overall contrast of the image [8-11]. Contrast can be enhanced by various methods, but these tend to be time-consuming and algorithmically complex [12-14]. The present method shows improvement in all the quality metrics when compared with the original image. The input image is first decomposed into its three channels (red, green, and blue), and histogram stretching is then performed on each channel individually. Under the assumption that each of the R, G, and B channels is Rayleigh distributed, the histogram stretching is done with respect to the Rayleigh distribution.


* The scale parameter (α) of the Rayleigh distribution is estimated from the given image, so the histogram stretching is adaptive.

* The estimated values are used in histogram stretching with respect to the Rayleigh distribution of the R, G, and B components.

* Because the image is non-uniformly illuminated, with dark and bright patches, stretching performed locally on small patches of the image is preferred over global stretching.

* Histogram stretching performed on each color channel with respect to the Rayleigh distribution is given by the following:

tout = imin + [2 * α^2 * ln(1 / (1 - Pi(i)))]^(1/2)

where tout is the pixel value in the transformed image, imin is the minimum pixel value in the transformed image, α is the estimated Rayleigh scale parameter, and Pi(i) is the cumulative distribution function of the pixel values of the input image.

* The R, G, and B channels of the image are then recombined, yielding the image corrected for non-uniform illumination.
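The per-channel stretching above can be sketched as follows, assuming intensities normalised to [0, 1]. The maximum-likelihood Rayleigh scale estimate and an empirical CDF are the assumed ingredients; the `eps` clipping is added so the logarithm stays finite at the brightest pixel.

```python
import numpy as np

def rayleigh_stretch(channel, eps=1e-6):
    """Stretch one colour channel toward a Rayleigh distribution,
    following tout = imin + sqrt(2 * alpha^2 * ln(1 / (1 - Pi(i))))."""
    x = channel.astype(float).ravel()
    alpha = np.sqrt(np.mean(x ** 2) / 2.0)        # MLE of the Rayleigh scale
    # empirical CDF of the input pixel values (ties map to one value)
    _, inv, counts = np.unique(x, return_inverse=True, return_counts=True)
    cdf = (np.cumsum(counts) / x.size)[inv]
    cdf = np.clip(cdf, eps, 1.0 - eps)
    t = x.min() + np.sqrt(2.0 * alpha ** 2 * np.log(1.0 / (1.0 - cdf)))
    return t.reshape(channel.shape)
```

Running this on each of the R, G, and B channels and stacking the results gives the illumination-corrected image described in the last bullet.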

C. Blind deconvolution:

Image blur can be treated with CPBD, adaptive histogram methods, or JNB, but these can only enhance edges, which makes them unreliable indicators of good quality [15-19]. Blind deconvolution performs the deblurring and gives a better result than the traditional methods. It has two approaches. The first approach simultaneously restores the true image and the point spread function (PSF): it begins by making initial estimates of the true image and the PSF, and the technique is cyclic in nature, first refining the PSF estimate and then the image estimate. This approach is insensitive to noise. The second approach computes maximum likelihood estimates of parameters such as the PSF and covariance; because the PSF estimate is not unique, additional assumptions such as size and symmetry are imposed. The algorithm is as follows:

* Convert the input image to grayscale and define the blur kernel.

* Deconvolve with PSFs of different sizes to determine the best PSF size for restoring the image, starting from a manual initial guess for the PSF.

* Deconvolve at varying values of the disk radius and the number of blind deconvolution iterations.
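The cyclic first approach, alternating between a PSF estimate and an image estimate, can be sketched with Richardson-Lucy multiplicative updates. This is a simplified stand-in for the procedure above, not the exact implementation: the PSF shares the image grid, convolution is circular via the FFT, and the iteration counts are arbitrary.

```python
import numpy as np

def _rl_step(estimate, kernel, observed, eps=1e-12):
    """One Richardson-Lucy multiplicative update of `estimate`,
    treating `kernel` as fixed (circular convolution via FFT)."""
    K = np.fft.fft2(kernel)
    conv = np.real(np.fft.ifft2(np.fft.fft2(estimate) * K))
    ratio = observed / (conv + eps)
    corr = np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(K)))
    return np.maximum(estimate * corr, 0.0)

def blind_deconvolve(observed, n_iter=20):
    """Alternating blind deconvolution: update the PSF with the image
    held fixed, then the image with the PSF held fixed, as in the
    cyclic two-stage scheme described above.  The PSF is renormalised
    to sum to 1 after each update."""
    img = np.full_like(observed, observed.mean(), dtype=float)
    psf = np.zeros_like(observed, dtype=float)
    psf[0, 0] = 1.0                    # initial guess: identity blur
    for _ in range(n_iter):
        psf = _rl_step(psf, img, observed)
        psf /= psf.sum() + 1e-12
        img = _rl_step(img, psf, observed)
    return img, psf
```

In practice the PSF support would be restricted (the "define the blur kernel" step) and the disk radius and iteration count varied, as the bullets describe.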

D. Linear Transformation Method (LTM):

Light falling on the image can result in color distortion [20]. The atmospheric scattering model, with linear transformation estimation, is used to dehaze the images. According to the atmospheric scattering model, there are three steps:

* Atmospheric light estimation: a grayscale transformation is applied first, quadtree subdivision is then adopted to obtain the retinal region, and finally the atmospheric light A is obtained by calculating the average gray level of that region.

* Transmission map estimation: the minimum color channel is calculated, the linear transformation algorithm is used to estimate a rough transmission map t, and finally a Gaussian blur is used to refine the rough transmission map.

* Image restoration: with the parameters A and t, the haze-free image is recovered based on the atmospheric scattering model.
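The three steps can be sketched under the scattering model I = J*t + A*(1 - t). This is an illustrative simplification: the mean of the brightest 1% of gray pixels stands in for the quadtree subdivision, the Gaussian refinement of t is omitted, and `omega` and `t_min` are conventional assumed constants.

```python
import numpy as np

def dehaze(img, omega=0.9, t_min=0.1):
    """Recover a haze-free image from an RGB array in [0, 1] using the
    atmospheric scattering model I = J*t + A*(1-t)."""
    gray = img.mean(axis=2)
    thresh = np.quantile(gray, 0.99)
    A = img[gray >= thresh].mean()                # atmospheric light
    min_ch = img.min(axis=2)                      # minimum colour channel
    t = 1.0 - omega * (min_ch / (A + 1e-12))      # rough transmission map
    t = np.clip(t, t_min, 1.0)
    J = (img - A) / t[..., None] + A              # invert the model
    return np.clip(J, 0.0, 1.0), t
```

Clipping the transmission at `t_min` prevents division blow-up in dense haze, a standard precaution in scattering-model dehazing.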

E. Machine Learning:

SVM is an excellent classification and regression tool, which uses support vectors learned from the training dataset to predict the testing dataset. It is common practice to transform the original feature space into a higher-dimensional space using kernel functions, such as a polynomial function or a radial basis function (RBF). Rather than only regarding the quality assessment of retinal images as a classification problem to be solved by SVM, we can also recommend, or select, retinal images whose generic quality is excellent.

Generic overall quality assessment integrating the three partial class indicators was performed by SVM-based classification, which showed both high sensitivity and specificity. The SVM does not suffer from non-linear separability, since it can map the data into higher-dimensional spaces where a non-linearly separable problem can be resolved linearly. SVM-based overall quality classification tested on DRIMDB, LOCAL1, and LOCAL2 achieved AUCs of 0.90, 0.81, and 0.94, respectively, indicating the adaptation of the HVS-based algorithm to different pixel sizes.
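The RBF-kernel classification can be sketched with scikit-learn. The 3-D feature vectors here are hypothetical stand-ins for the three partial quality indicators (illumination/contrast, blur, colour), with labels 0 for good and 1 for bad generic quality; the class means and spreads are invented for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

# Synthetic stand-in features: one value per sub-model indicator.
rng = np.random.default_rng(0)
good = rng.normal(0.2, 0.1, size=(100, 3))   # label 0: good quality
bad = rng.normal(0.8, 0.1, size=(100, 3))    # label 1: bad quality
X = np.vstack([good, bad])
y = np.array([0] * 100 + [1] * 100)

# RBF kernel maps the features into a higher-dimensional space where
# the non-linearly separable problem can be resolved linearly.
clf = SVC(kernel='rbf', gamma='scale').fit(X, y)
scores = clf.decision_function(X)
auc = roc_auc_score(y, scores)
```

With real data the AUC would of course be measured on a held-out test split rather than the training set.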


This section illustrates the results of the quality assessment of retinal images. All images were graded by the three ophthalmologists, so each retinal image has three partial quality grades and an overall quality grade: illumination/contrast (0: acceptable/high contrast; 1: unacceptable/low contrast), blur (0: not noticeable; 1: noticeable), color (0: acceptable; 1: unacceptable), and overall (0: good generic quality; 1: bad generic quality). Both the partial and the overall quality assessments are binary classification problems; the criteria commonly used in the evaluation of binary classifiers are the sensitivity and specificity, defined from the numbers of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN):

Sensitivity = TP / (TP + FN)

Specificity = TN / (TN + FP)

Accuracy (AUC) = Sensitivity * Specificity
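The evaluation criteria above are a direct computation from the confusion-matrix counts; the counts in the usage example below are made-up numbers for illustration.

```python
def binary_metrics(tp, tn, fp, fn):
    """Sensitivity and specificity of a binary quality classifier,
    together with the sensitivity-times-specificity accuracy figure
    used in this paper."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity, sensitivity * specificity

# e.g. 90 true positives, 80 true negatives, 20 false positives,
# 10 false negatives gives sensitivity 0.9 and specificity 0.8
sens, spec, acc = binary_metrics(tp=90, tn=80, fp=20, fn=10)
```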


This paper aimed to assess the generic quality of retinal images, particularly for portable fundus camera applications in non-mydriatic ocular fundus photography. An algorithm is proposed that assesses image quality using human visual system characteristics with three partial quality factors: illumination and contrast, blur, and color distortion. The corresponding techniques are maximum likelihood estimation, blind deconvolution, and linear transformation. The sensitivities of the distortion-specific classifications were 97%, 97.09%, and 90%, respectively.

The HVS-based feature vector for predicting low contrast is reasonable and effective, and the table shows that the quality classifications achieved balanced performance. This feature can also detect typical noise contamination, such as salt noise, Gaussian noise, and pepper noise, with high specificity.


[1.] Marrugo, A.G., M.S. Millan, M. Sorel, J. Kotera and F. Sroubek, 2015. "Improving the blind restoration of retinal images by means of point-spread-function estimation assessment," in Proc. SPIE, 9287: 6.

[2.] Dias, J.M.P., C.M. Oliveira and L. da Silva Cruz, 2014. "Retinal image quality assessment using generic image quality indicators," Inf. Fusion, 19(1): 73-90.

[3.] Fasih, M., J.M.P. Langlois, H.B. Tahar and F. Cheriet, 2014. "Retinal image quality assessment using generic features," in Med. Imag. 2014: Comput. Aid. Diag., pp: 9035.

[4.] Lee, S.C. and Y.M. Wang, 1999. "Automatic retinal image quality assessment and enhancement," in Proc. SPIE, 3661: 1581-1590.

[5.] Fasih, M., J.M.P. Langlois, H.B. Tahar and F. Cheriet, 2014. "Retinal image quality assessment using generic features," in Med. Imag. 2014: Comput. Aid. Diag., 9035.

[6.] Narvekar, N.D. and L.J. Karam, 2011. "A no-reference image blur metric based on the Cumulative probability of blur detection (CPBD)," IEEE Trans. Image Process., 20(9): 2678-2683.

[7.] Ming-Hsuan, Y., D. Kriegman and N. Ahuja, 2002. "Detecting faces in images: A survey," IEEE Trans. Pattern Anal. Mach. Intell., 24(1): 34-58.

[8.] Yu, H., E.S. Barriga, C. Agurto, S. Echegaray, M.S. Pattichis, W. Bauman and P. Soliz, 2012. "Fast localization and segmentation of optic disk in retinal images using directional matched filtering and level sets," IEEE Trans. Inf. Technol. Biomed., 16(4).

[9.] Tan, X. and B. Triggs, 2010. "Enhanced local texture feature sets for face recognition under difficult lighting conditions," IEEE Trans. Image Process., 19(6): 1635.

[10.] Chen, T., W. Yin, X.S. Zhou, D. Comaniciu and T.S. Huang, 2006. "Total variation models for variable lighting face recognition," IEEE Trans. Pattern Anal. Mach. Intell., 28(9).

[11.] Mary, M.C.V.S., E.B. Rajsingh, J.K.K. Jacob, D. Anandhi, U. Amato and S.E. Selvan, "An empirical study on optic disc segmentation using an active contour model."

[12.] Fang, Y., K. Ma, Z. Wang, W. Lin, Z. Fang and G. Zhai, 2015. "No-reference quality assessment of contrast-distorted images based on natural scene statistics," IEEE Signal Process. Lett., 22(7).

[13.] Gilmore, C., P. Mojabi and J. LoVetri, 2009. "Comparison of an enhanced distorted Born iterative method and the multiplicative-regularized contrast source inversion method," IEEE Trans. Antennas Propag., 57(8).

[14.] Ritika and S. Kaur, 2013. "Contrast enhancement techniques for images: a visual analysis," Int. J. Comput. Appl. (0975-8887), 64(17).

[15.] Yang, A.-P., Z.-X. Hou and C.-Y. Wang, 2007. "Image deblurring based on wavelet and neural network," in Proc. Int. Conf. Wavelet Analysis and Pattern Recognition, Beijing, China.

[16.] Vageeswaran, P., K. Mitra and R. Chellappa, 2013. "Blur and illumination robust face recognition via set-theoretic characterization," IEEE Trans. Image Process., 22(4).

[17.] Jubien, C.M. and M.E. Jernigan, 1991. "A neural network for deblurring an image," in Proc. IEEE Pacific Rim Conf. Communications, Computers and Signal Processing, Victoria, BC, Canada.

[18.] Nishiyama, M., A. Hadid, H. Takeshima, J. Shotton, T. Kozakaya and O. Yamaguchi, 2011. "Facial deblur inference using subspace analysis for recognition of blurred faces," IEEE Trans. Pattern Anal. Mach. Intell., 33(4).

[19.] Saadi, S., A. Guessoum and M. Bettayeb, "ABC optimized neural network model for image deblurring with its FPGA implementation."

[20.] Toet, A. and M.P. Lucassen, 2003. "A new universal colour image fidelity metric," Displays, 24(4-5): 197-207.

(1) J. Kanimozhi, (2) Dr. P. Vasuki, (3) Dr. S. Mohamed Mansoo Roomi, (4) Dr. J. S. Gnanasekaran, (5) P. R. Sri Vidhya Lakshmi, (6) K. Oviya

(1,2,4,5,6) K.L.N. College of Information Technology, Pottapalayam, Sivagangai, Tamil Nadu, India.

(3) Thiagarajar College of Engineering, Madurai, Tamil Nadu, India.

Received 28 January 2017; Accepted 22 March 2017; Available online 4 April 2017

Address For Correspondence:

J. Kanimozhi, ECE Department, K. L. N. College of Information Technology, Pottapalayam, Sivagangai, India E-mail:

Figures and Tables (images not reproduced in this extraction):

Panels: (a) original; (b) MLE result; (c) colour-distorted input; (d) LTM result; (e) blurred and noisy image; (f) deblurred image after 18 iterations; (g) deblurred image after 1 iteration.

Fig. 1: Flowchart of the proposed method.
Table 1: Feature Vector Based Performance of Partial (Illumination/Contrast, Blur, Color) and Overall Quality Classifiers

SVM classifier          Category                    Sensitivity       Specificity       AUC
Illumination/contrast   Acceptable/high contrast    0.879 ± 0.0746    0.979 ± 0.0746    0.939 ± 0.0746
                        Unacceptable/low contrast   0.809 ± 0.0546    0.779 ± 0.0743
Blur                    Not noticeable              0.838 ± 0.0783    0.879 ± 0.0846    0.922 ± 0.0746
                        Noticeable                  0.865 ± 0.0646    0.679 ± 0.0446
Color                   Acceptable                  0.826 ± 0.0795    0.859 ± 0.0746    0.963 ± 0.0746
                        Unacceptable                0.809 ± 0.0446    0.579 ± 0.0446
SVM-based overall       Good                        0.875 ± 0.0722    0.879 ± 0.0746    0.870 ± 0.0646
                        Bad                         0.829 ± 0.0646    0.879 ± 0.0746
COPYRIGHT 2017 American-Eurasian Network for Scientific Information

Article Details
Author: Kanimozhi, J.; Vasuki, P.; Roomi, S. Mohamed Mansoo; Gnanasekaran, J.S.; Lakshmi, P. R. Sri Vidhya;
Publication: Advances in Natural and Applied Sciences
Article Type: Report
Date: Apr 1, 2017
