
Automatic detection of diabetic retinopathy based on color segmentation.


Diabetes mellitus is becoming a global epidemic. According to the IDF Diabetes Atlas, Fifth Edition, about 8.3 percent of adults were living with diabetes in 2011, a figure projected to rise by 54 percent by 2030. Diabetes increases the risk of several eye diseases, but the main cause of blindness associated with diabetes is diabetic retinopathy (DR).

DR is on the priority list of eye conditions that can be partly prevented and treated. It is characterized by multiple lesions on the retinal surface of patients who have had diabetes for a prolonged time, and is caused by damage to the small blood vessels of the retina. The associated vision loss can arise in two ways:

* The macula, the small region of the retina responsible for color and fine-detail vision, swells because blood fluids leak into it, leading to blurred vision.

* Blocked blood vessels trigger the growth of new, delicate, abnormal vessels. These can leak blood into the back of the eye and block light from reaching the vision spot.

Diabetic retinopathy is classified into two types:

Non-proliferative diabetic retinopathy (NPDR) is the early stage of the disease, in which symptoms are mild or absent. The signs of NPDR are: (1) microaneurysms (MA), small blow-out swellings in the walls of weakened blood vessels; (2) exudates, blood fluids such as proteins and lipids that leak into the macula from damaged vessels; and (3) hemorrhages, blood leaking from ruptured vessels.

Proliferative diabetic retinopathy (PDR) is the advanced stage of the disease. At this stage, the damaged vessels supply the rest of the eye with too little oxygen, which triggers the formation of new, delicate blood vessels. These fragile vessels rupture easily and leak blood into the vitreous humor, impairing visual perception. Blood that oozes into the macula forms scar tissue, which blocks light entering the eye from reaching the vision spot. Besides impairing vision, PDR leads to further complications such as retinal detachment caused by the scar tissue and increased intraocular pressure. Left untreated, the patient suffers vision loss and eventually blindness.

Many methods have been proposed to detect DR. Anam Tariq et al. proposed an automated system for detecting the NPDR stages from colored retinal images: the background region and noisy pixels are removed for better detection of DR, and candidate lesions are detected with a Gabor filter bank that enhances the dark regions. Sopharak et al. (2013) proposed a simple, real-time hybrid approach to detect MAs, using non-dilated retinal images as input; a set of optimally tuned mathematical morphology operators detects candidate MAs, and a naive Bayes classifier performs the classification. Anderson Rocha et al. (2012) detected bright and dark lesions with a visual-word-dictionary approach based on points of interest and a visual lesion dictionary. Carla Agurto et al. (2012) presented an automatic system to detect lesions on the macula; texture features are extracted with AM-FM features at different frequency scales and classified by partial least squares (PLS). Cemal Kose et al. (2012) proposed an inverse automatic approach for diagnosis: since lesions are degenerations of retinal regions, their texture differs from that of the normal eye, and segmentation is based on the mean and other features computed by a naive Bayes method. Mookiah et al. (2013) proposed an advanced DR detection system using three classifiers whose parameters were tuned by a genetic algorithm for better results.

2. Proposed Work:

This section briefly explains the different stages of the automatic diagnosis of DR; the flow of the work is given in Fig. 2.1.


2.1 Data Acquisition:

The input image is a fundus image of the retina acquired from the standard databases DIARETDB0 and DIARETDB1. The problems and issues related to these databases are discussed from the medical, image-processing, and security perspectives. An evaluation methodology is proposed, and a prototype image database with ground truth is described. The databases are publicly available for benchmarking diagnosis algorithms.

2.2 Preprocessing:

Preprocessing corrects the image and eliminates unwanted noise introduced during image capture; it enhances the image data prior to computational processing. It commonly involves removing low-frequency background noise, normalizing the intensities of individual pixels, removing reflections, and masking portions of the image. In the proposed work, preprocessing is carried out for illumination and shading correction. The image undergoes the following steps:


1. The RGB image is converted to the Lab color space.

2. The luminance channel L is denoised by Wiener filtering with a 5x5 filter mask.

3. Contrast is enhanced by histogram equalization of the filtered L channel.

4. The processed L channel is recombined with the chrominance channels 'a' and 'b', and the image is converted back to the RGB color space.

5. Gray-level shading correction removes the uneven illumination present within the retinal image.
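The Wiener filtering and histogram equalization of the luminance channel can be sketched in Python. This is a minimal sketch under stated assumptions: the Lab conversion itself is done elsewhere (e.g. with scikit-image's rgb2lab/lab2rgb), the function name `preprocess_luminance` is illustrative, and only the L channel (values 0-100) is processed here.

```python
import numpy as np
from scipy.signal import wiener

def preprocess_luminance(L, mask_size=5):
    """Denoise and contrast-enhance a luminance channel.

    L: 2-D float array, e.g. the L channel of a Lab image (values 0-100).
    In the full pipeline, the result would be recombined with the a/b
    chrominance channels and converted back to RGB.
    """
    # Adaptive Wiener filtering with a 5x5 neighbourhood.
    denoised = wiener(L, mysize=(mask_size, mask_size))

    # Histogram equalization: bin the values, equalize the CDF,
    # and map back to the [0, 100] luminance range.
    hist, bin_edges = np.histogram(denoised, bins=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    bin_idx = np.clip(np.digitize(denoised, bin_edges[:-1]) - 1, 0, 255)
    return cdf[bin_idx] * 100.0
```

The histogram equalization here is written out by hand to keep the sketch self-contained; a library routine would serve equally well.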

2.3 Color based Segmentation:

The output of preprocessing is given as the input of the segmentation module. In this project, K-means clustering is used as the segmentation algorithm. Clustering separates a group of data points into a small number of clusters; qualitatively, points with similar features are grouped together, and quantitatively the similarity is measured on selected image features. In general, we have n data points x_i, i = 1...n, that have to be partitioned into k clusters, and the goal is to assign a cluster to each data point.


K-means is a clustering method in which the image is partitioned into a specified number of clusters k. It seeks the cluster centroids \bar{x}_i, i = 1...k, that minimize the distance from the data points to their assigned clusters. K-means clustering solves

J = \sum_{i=1}^{k} \sum_{x \in C_i} \| x - \bar{x}_i \|^2   (1)

where x is a point of the data set and \bar{x}_i is the centroid of the i-th cluster. K-means clustering uses the squared Euclidean distance.
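The objective (1) can be evaluated directly for a given assignment; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def kmeans_objective(points, labels, centroids):
    """Evaluate J: the sum of squared Euclidean distances from each
    point to the centroid of its assigned cluster."""
    points = np.asarray(points, dtype=float)
    diffs = points - np.asarray(centroids, dtype=float)[np.asarray(labels)]
    return (diffs ** 2).sum()
```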

The algorithm is composed of the following steps:

1. Specify the number of clusters; based on this number, the centroids are initialized randomly.

2. Compute the Euclidean distance between each unclustered pixel and every centroid.

3. Assign each pixel to the cluster whose centroid is nearest.

The optimal centroids are found by repeating steps 2 and 3. The iteration continues until it converges, i.e., until the difference between the i-th and (i-1)-th iterations is very small. This yields the separation between the clusters.
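The steps above can be sketched as a minimal NumPy implementation (illustrative, not the project's exact code):

```python
import numpy as np

def kmeans(points, k, iters=100, seed=0):
    """Minimal K-means: random centroid initialization, nearest-centroid
    assignment by Euclidean distance, and centroid updates repeated until
    the assignments stop changing."""
    points = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    # Step 1: pick k distinct data points as the initial centroids.
    centroids = points[rng.choice(len(points), size=k, replace=False)].copy()
    labels = None
    for _ in range(iters):
        # Step 2: squared Euclidean distance from every point to every centroid.
        d = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        # Step 3: assign each point to its nearest centroid.
        new_labels = d.argmin(axis=1)
        if labels is not None and np.array_equal(new_labels, labels):
            break  # converged: assignments no longer change
        labels = new_labels
        # Update each centroid to the mean of its cluster.
        for i in range(k):
            if np.any(labels == i):
                centroids[i] = points[labels == i].mean(axis=0)
    return labels, centroids
```

For segmentation, `points` would be the per-pixel color/intensity features, and the returned labels reshape back into a cluster map of the image.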

2.4 Feature Extraction:

Feature extraction is the process of selecting the features used for classification. Selecting only the relevant features reduces the number of features passed to the classification process, so the computational difficulty is reduced.


The efficiency of the system depends strongly on the selected features, which makes the feature-extraction stage critical. The output is a set of features, called the feature vector, that represents the image. Since the input of the classifier is a vector rather than the image itself, mathematical measurements of the image are extracted to feed the classifier.

If A is the image, and the selected features are \mu(A) (the average gray value) and n(A) (the number of pixels), then the associated feature vector is v(A) = (\mu(A), n(A)).

The image is masked so that the only non-zero pixels are the gray-scale values of the object to be classified; statistical measurements of that object can then help in classification.

An m-file is written to compute these texture statistics. It outputs a vector T containing the following measurements:

* T(1): average gray-scale value (the first moment of the texture).

* T(2): average contrast (the standard deviation σ, the square root of the second moment).

* T(3): smoothness measure, R = 1 - 1/(1 + σ²).

* T(4): skewness (the third moment).

* T(5): uniformity measure (the sum of the squared relative frequencies p_i of the gray-scale values; maximal when the image is constant).

* T(6): entropy, -Σ_i p_i log₂ p_i (a measure of randomness; zero when the image is constant and increasing from there).
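A Python analogue of such an m-file might look as follows (a sketch assuming 8-bit gray levels; the original is a MATLAB m-file, and the function name here is illustrative):

```python
import numpy as np

def texture_stats(gray):
    """Six histogram-based texture statistics of a gray-scale image
    with integer gray levels in 0..255, in the order T(1)..T(6)."""
    levels = np.arange(256, dtype=float)
    hist = np.bincount(np.asarray(gray, dtype=np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()                      # relative frequencies p_i

    mean = (levels * p).sum()                  # T(1): first moment
    var = ((levels - mean) ** 2 * p).sum()     # second central moment
    contrast = np.sqrt(var)                    # T(2): standard deviation
    smooth = 1.0 - 1.0 / (1.0 + var)           # T(3): R = 1 - 1/(1 + sigma^2)
    skew = ((levels - mean) ** 3 * p).sum()    # T(4): third moment
    uniformity = (p ** 2).sum()                # T(5): sum of squared p_i
    nz = p[p > 0]
    entropy = -(nz * np.log2(nz)).sum()        # T(6): -sum p_i log2 p_i
    return np.array([mean, contrast, smooth, skew, uniformity, entropy])
```

Note that implementations differ on whether the variance in T(3) is normalized by the squared gray-level range; the unnormalized form is used here.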

There are two types of features:

Shape: geometric features of a region, extracted by capturing its boundary and interior.

Texture: features based on the color and intensity of the pixels.

Mean and entropy are the features extracted from the clustered output. The mean is the average of the pixel intensities:

\mu_i = \frac{1}{N} \sum_{j=1}^{N} f(i,j)   (1)

Entropy is a statistical measure of randomness that can be used to characterize the texture of the input image.

E = -\sum_i \sum_j f(i,j) \log f(i,j)   (2)
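The two features can be computed per cluster from the K-means output; a minimal sketch (`cluster_features` is an illustrative name, the entropy here is taken over the normalized gray-level histogram of each cluster, and every cluster is assumed non-empty):

```python
import numpy as np

def cluster_features(gray, labels, k):
    """Mean intensity and gray-level entropy for each of the k clusters,
    the two features fed to the classifier. `gray` is the gray-scale image
    (8-bit values) and `labels` the cluster index of each pixel."""
    feats = []
    for i in range(k):
        pix = np.asarray(gray)[np.asarray(labels) == i]
        mean = pix.mean()                           # average pixel intensity
        # Normalized gray-level histogram of the cluster's pixels.
        p = np.bincount(pix.astype(np.uint8), minlength=256) / pix.size
        nz = p[p > 0]
        entropy = -(nz * np.log2(nz)).sum()         # randomness of the texture
        feats.append((mean, entropy))
    return np.array(feats)
```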

2.5 Classification:

Classification is the process of analyzing the numerical properties of the image features and categorizing them into declared classes. A support vector machine (SVM), a classifier able to separate the features into two classes, is used for this purpose; the features are separated by introducing a nonlinear decision surface. The main objective of the SVM is to maximize the separation between the feature vectors and this surface to avoid misclassification. It minimizes the structural risk and is able to classify new features correctly; the boundary separation is optimized to maintain accuracy.

Suppose [x_i, y_i], i = 1...N, are the N observations (or patterns), where x_i is the i-th input and y_i the corresponding pattern label. For the two-class classification problem, with c+ and c- the centroids of the two classes, the classifier response is given by


The hyperplane that is optimal in separating the data points into the two classes satisfies

margin = 2/\|w\|
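The margin 2/‖w‖ can be made concrete with a minimal linear SVM trained by sub-gradient descent on the hinge loss. This is a sketch only: the project's SVM may use a nonlinear kernel and a standard solver, and the function name is illustrative.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Minimal linear SVM via full-batch sub-gradient descent on
    lam*||w||^2 + mean(hinge loss).  X: (N, d) features; y: labels
    in {-1, +1}.  Returns the hyperplane parameters (w, b)."""
    N, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                       # points inside the margin
        # Sub-gradient of the regularized hinge objective.
        gw = 2 * lam * w - (y[viol, None] * X[viol]).sum(axis=0) / N
        gb = -y[viol].sum() / N
        w, b = w - lr * gw, b - lr * gb
    return w, b
```

For a trained (w, b), the geometric margin of the separating hyperplane is 2 / np.linalg.norm(w); the ‖w‖² penalty is what implicitly maximizes it.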

3. Result Analysis:

In this section, the results of the stages involved in the automatic diagnosis of DR with varying fundus input images are discussed, starting with a brief review of each step of the diagnosis process, followed by its output.

3.1 Output of a normal retinal image:

The outputs of the various stages of the proposed work for a normal retinal image are shown in Fig. 3.1.


3.2 Output of retinal images affected by DR:

A retinal fundus image of a patient affected by DR, with exudates as symptoms, is given as input to the work. The output images are shown in Fig. 3.2.

4. Conclusion and Future Scope:

Diabetic retinopathy and glaucoma are leading causes of blindness in the world. This project can be used to effectively detect whether a patient's retinal fundus image is affected by DR. The input image is preprocessed for illumination correction and contrast enhancement, and the preprocessed input is then segmented using color-based segmentation (the K-means algorithm). Features such as the mean and entropy are extracted from the clustered images and given as input to the classifier. The SVM algorithm is used for classification, maximizing the separability between the two classes: class 1, diseased, and class 2, no disease.

The future scope of this work is to improve the segmentation process and to classify the disease according to the different stages of DR.



Article history:

Received 12 October 2014

Received in revised form 26 December 2014

Accepted 1 January 2015

Available online 25 February 2015


Akara Sopharak, Bunyarit Uyyanonvara, Sarah Barman, 2013. "Simple hybrid method for fine micro aneurysm detection from non-dilated diabetic retinopathy retinal images" Elsevier on Computerized Medical Imaging and Graphics, 37: 394-402.

Anam Tariq, M. Usman Akram and M. Younus Javed, 2013. "Computer Aided Diagnostic System for Grading of Diabetic Retinopathy" Fourth International Workshop on Computational Intelligence in Medical Imaging (CIMI).

Anderson Rocha, Tiago Carvalho, Herbert F. Jelinek, Siome Goldenstein and Jacques Wainer, 2012. "Points of Interest and Visual Dictionaries for Automatic Retinal Lesion Detection" IEEE transactions on biomedical engineering, 59(8).

Carla Agurto, Honggang Yu, Victor Murray, Marios S. Pattichis, Simon Barriga, Peter Soliz, 2012. "Detection of Hard Exudates and Red Lesions in the Macula Using a Multiscale Approach" IEEE, 978-1-4673-1830-3/12.

Cemal Kose, Ugur Sevik, Cevat Ikibas, A. Hidayet Erdol, 2012. "Simple methods for segmentation and measurement of diabetic retinopathy lesions in retinal fundus images" Elsevier journal on Computer Methods and Programs in Biomedicine, 107: 274-293.

International Diabetes Federation, 2011. IDF Diabetes Atlas, Fifth Edition, 'The Global Burden', available at:

Mookiah, M.R.K., U. Rajendra Acharya, Roshan Joy Martis, Chua Kuang Chua, C.M. Lim, E.Y.K. Ng, Augustinus Laude, 2013. "Evolutionary algorithm based classifier parameter tuning for automatic diabetic retinopathy grading: A hybrid feature extraction approach" Elsevier journal of Knowledge-Based Systems, 39: 9-22.

Rafael C. Gonzalez, Richard E. Woods "Digital image processing"

(1) Manju, A. and (2) Kamalapriya, D.

(1) Professor, Department of EEE, SKP Engineering College

(2) M.E Student, Department of EEE, SKP Engineering College

Corresponding Author: Manju, A., Professor, Department of EEE, SKP Engineering College


Title Annotation: support vector machine
Author: Manju, A.; Kamalapriya, D.
Publication: Advances in Natural and Applied Sciences
Article Type: Report
Date: Jun 1, 2015
