Intelligent texture feature extraction and indexing for MRI image retrieval using curvelet and PCA with HTF.

INTRODUCTION

With the rapid growth of digital image collections in recent years, many techniques for storing, searching and retrieving images have been investigated, and the retrieval of visual information from image databases has become an active research area. The long-established approach to image retrieval is to annotate images with text and then use a text-based database management system to perform retrieval. Nevertheless, there are several drawbacks in employing keywords or text phrases associated with images to achieve visual information retrieval. The first issue is that annotating images manually is time-consuming, and it is extremely difficult to describe the content of different types of images in human language. To overcome the difficulties encountered by text-based image retrieval systems, Content Based Image Retrieval (CBIR) was proposed in the early 1990s. In CBIR, the system discriminates and retrieves images from the database based on their visual contents such as shape, color, texture and the spatial relationships among objects. In such a system, objective low-level image descriptions can be extracted automatically by machines and subsequently employed as indexing tools or discriminating features in image retrieval.

The benefits that arise from applying CBIR to texture based medical image retrieval range from clinical decision support to medical research and education. These benefits have motivated researchers to apply CBIR systems to medical images. Specialized Content Based Medical Image Retrieval (CBMIR) systems have been developed to support the retrieval of different kinds of medical images, such as X-ray images, computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET) and ultrasonography.

One of the main tasks in wavelet based image retrieval is to extract features, but the Discrete Wavelet Transform lacks directional sensitivity. To overcome this missing directional selectivity, a multiresolution geometric analysis (MGA) tool named the curvelet transform was proposed. Similarity measurement is performed using the Mahalanobis distance, which gives better results than the Euclidean distance. We show that curvelet based feature extraction with the Mahalanobis distance measure provides better performance than wavelet based feature extraction. The rest of this paper is organized as follows. Section 2 briefly explains the related work, while sections 3 and 4 describe the integrated approach using curvelet and PCA with HTF and curvelet based feature extraction through HTF. Sections 5 and 6 present the results and the conclusion.

2. Related Work:

CBIR has become one of the most active and fertile research areas in medical image processing. The objective of CBIR is to find exactly the image that is needed. Retrieving relevant images from a large database is a challenging problem, and retrieval systems have had to address the automatic indexing and retrieval of images. Accordingly, CBIR researchers retrieve images based on automatically derived low-level or high-level features; low-level features are the most popular because of their simplicity compared with higher-level features. The application of image retrieval in many areas such as fashion design, crime prevention, medicine, law and science makes this research field one of the most important and fastest growing in information technology. Multiresolution ideas have long been used in image retrieval, and the most widely accepted multiresolution tool is the wavelet transform. During wavelet analysis, an image is decomposed at different scales and orientations using a wavelet basis. At present, most texture features used in CBIR are derived from the Discrete Wavelet Transform (DWT), Gabor filters and complex wavelet filters. Ahmadian & Mostafa (2003) used the DWT for texture classification. The DWT combined with a generalized Gaussian density and the Kullback-Leibler distance has been shown to provide efficient results for texture image retrieval by Do & Vetterli (2002) and for image segmentation by Unser (1993). However, the DWT can extract only three directional (horizontal, vertical and diagonal) components from an image. Manjunath & Ma (1996) derived texture features from Gabor wavelet coefficients for indexing photographic and satellite images. They evaluated the retrieval performance of the Gabor wavelet, the pyramid-structured wavelet transform and the tree-structured wavelet transform; the Gabor wavelet derived features performed much better than the other wavelet based features, although the time required for feature extraction is somewhat higher for the Gabor method. To address the directional limitation, dual-tree complex wavelet filters (DT-CWFs) and dual-tree rotated complex wavelet filters proposed by Kokare et al (2005), rotation-invariant complex wavelet filters introduced by Kokare et al (2006) and rotated wavelet filters by Kokare et al (2007) have been proposed for texture image retrieval. Hue and Xiao (2010) proposed the Gaussian Mixture Model (GMM) and Generalized Gaussian Mixture Model (GGMM) for texture based image retrieval.

To overcome the innate limitations of traditional multiscale wavelet representations, a novel transform known as the curvelet transform was developed by Candes & Donoho (1999). Candes et al (2005) proposed two forms of the curvelet transform based on different operations on the Fourier samples, namely the Unequally-Spaced Fast Fourier Transform (USFFT) and the wrapping based fast curvelet transform. The wrapping based curvelet transform is faster in computation time and more robust than the ridgelet and USFFT based curvelet transforms. The motivation for the new transform was to represent edges and other singularities along curves more efficiently than existing methods, that is, with fewer coefficients required to reconstruct an edge to a given degree of accuracy (Rowan Seymour et al, 2008). Moreover, the curvelet spectra cover the frequency plane of an image completely. Because of these properties, the curvelet transform can be used as a powerful feature-capturing tool in CBIR. In order to improve the retrieval rate, we propose an integrated approach based on curvelet and PCA using HTF with the Mahalanobis distance (MD).

3. Integrated Approach Using Curvelet and PCA with Mahalanobis Distance Measure Using HTF:

A combined approach based on curvelet and PCA using HTF is proposed. The curvelet transform decomposes an image into different scales and orientations. The approximate image coefficients are reduced using PCA, and the HTFs are then extracted from the approximate image coefficients; the approximate image contains only the low-frequency components. Five statistical parameters are calculated for every image. The distance between the query and each database image is calculated using the MD measure. The proposed method gives better retrieval results than the other methods compared. The proposed image retrieval system is shown in Figure 1.

4. Curvelet Based Feature Extraction Through HTF:

In this section, image retrieval using the curvelet transform for feature extraction is described. Curvelet based feature extraction takes the MRI cancer images as input. The images are decomposed into subbands at different scales and orientations; the approximate subband contains the low-frequency components and the remaining subbands capture the high-frequency details along different orientations. The dimensionality reduction method PCA is then applied to the selected subbands to obtain an even lower-dimensional representation, which not only reduces the computational load but also increases retrieval accuracy. PCA is employed on the curvelet-decomposed gallery images to form a representational basis, and the curvelet subimages are projected onto the PCA-transformed space. Co-occurrence matrices are then calculated for all the images in the normalized database; to normalize a GLCM, its values are divided by the total number of increments. Haralick et al (1973) derived 13 measures from co-occurrence matrices for solving a variety of problems. Among these measures, only five are found to be truly useful in this application: for each subband, the feature vector is formed from the energy, autocorrelation, homogeneity, variance and entropy. Experiments on a large texture database have shown that these five feature parameters consistently give better retrieval performance.
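As an illustration only, the following minimal Python sketch computes these five values from a normalized GLCM using scikit-image (0.19+ naming); the 32 gray levels, single-pixel offset and four angles are assumptions for the example rather than the exact settings used in this work, and entropy, variance and autocorrelation are computed directly from the matrix because graycoprops does not expose them.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def haralick_features(subband, levels=32):
    """Five Haralick texture features (energy, entropy, variance,
    homogeneity, autocorrelation) from a normalized GLCM.
    The quantization to `levels` gray levels and the offsets are illustrative."""
    # Quantize the (float) subband to `levels` gray levels.
    img = np.uint8(np.interp(subband, (subband.min(), subband.max()), (0, levels - 1)))
    # Co-occurrence matrix for a 1-pixel offset at 0, 45, 90 and 135 degrees,
    # normalized so each matrix sums to 1 (normed=True).
    glcm = graycomatrix(img, distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=levels, symmetric=True, normed=True)
    p = glcm.mean(axis=(2, 3))                       # average over distances/angles
    i, j = np.meshgrid(np.arange(levels), np.arange(levels), indexing='ij')

    energy = graycoprops(glcm, 'energy').mean()
    homogeneity = graycoprops(glcm, 'homogeneity').mean()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    mu = np.sum(i * p)                               # mean gray level of the GLCM
    variance = np.sum((i - mu) ** 2 * p)
    autocorrelation = np.sum(i * j * p)
    return np.array([energy, entropy, variance, homogeneity, autocorrelation])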

Retrieval Algorithm:

This section describes the retrieval procedure for the proposed integrated approach using curvelet and PCA with HTF and MD, which obtains the best retrieval rate among the methods compared. To assess the performance of the proposed method, the sample images are taken from a set of brain, breast, prostate and phantom images.

Implementation:

The proposed image retrieval system is implemented using the MATLAB Image Processing Toolbox. The experiments were run on an Intel Core 2 Duo 2.4 GHz CPU with 4 GB RAM and a 500 GB HDD. To test the efficiency of the proposed work, MRI images were downloaded from The Cancer Imaging Archive (http://www.cancerimagingarchive.net); the resulting database includes about 500 images, classified into brain, breast, prostate and phantom categories of 125 images each. The GUI designed for this work supports database addition and deletion, browsing a query image and searching for similar images in the database.

Algorithm for database image feature extraction:

i = 1
While i ≤ 500
  Do Curvelet Decomposition
  Do PCA
  Extract Haralick Texture Features
  (Energy, Entropy, Variance, Homogeneity, Autocorrelation)
  Store Data
  Increment i
End

Algorithm for query image feature extraction:

Get Input
Do Curvelet Decomposition
Do PCA
Extract Haralick Texture Features
(Energy, Entropy, Variance, Homogeneity, Autocorrelation)
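The following sketch outlines how these two feature-extraction procedures might look in Python, under stated assumptions: a PyWavelets decomposition stands in for the curvelet transform (no standard curvelet package is assumed to be available), PCA is fitted on the gallery's approximate coefficients with scikit-learn, haralick_features refers to the GLCM sketch in Section 4, and all images are assumed to share a common size.

import numpy as np
import pywt
from sklearn.decomposition import PCA

# haralick_features() is the GLCM sketch given in Section 4 above.

def approx_subband(image, wavelet='db4', level=2):
    """Low-frequency (approximate) subband of a 2-D image. PyWavelets is used
    here only as a stand-in for the curvelet decomposition described above."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    return coeffs[0]

def build_database_features(images, n_components=20):
    """Fit PCA on the approximate coefficients of the gallery images and
    store one five-element Haralick feature vector per image."""
    approx = [approx_subband(img) for img in images]
    flat = np.stack([a.ravel() for a in approx])               # one row per image
    pca = PCA(n_components=min(n_components, *flat.shape))
    reduced = pca.fit_transform(flat)                           # reduced representation
    feats = np.stack([haralick_features(a) for a in approx])    # (N, 5) feature matrix
    return pca, reduced, feats

def query_features(query_image, pca):
    """Project the query's approximate subband onto the PCA basis and
    extract the same five Haralick features."""
    a = approx_subband(query_image)
    return pca.transform(a.ravel()[None, :]), haralick_features(a)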

Distance Measure:

i = 1
While i ≤ 500
  Energy distance = Mahalanobis (Energy Q, Energy of database images)
  Entropy distance = Mahalanobis (Entropy Q, Entropy of database images)
  Variance distance = Mahalanobis (Variance Q, Variance of database images)
  Homogeneity distance = Mahalanobis (Homogeneity Q, Homogeneity of database images)
  Autocorrelation distance = Mahalanobis (Autocorrelation Q, Autocorrelation of database images)
  Increment i
End
Sort ascending order of combined Energy, Entropy, Variance, Homogeneity, Autocorrelation distances
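A minimal sketch of this ranking step is shown below. For a single scalar feature the Mahalanobis distance reduces to the standardized distance |x - m| / sigma computed over the database; summing the five per-feature distances with equal weights before sorting is an assumption, since the pseudocode only states that the combined distances are sorted in ascending order.

import numpy as np

def rank_by_mahalanobis(query_feats, db_feats):
    """Rank database images by combined per-feature Mahalanobis distance.

    query_feats : (5,) array   -- energy, entropy, variance, homogeneity,
                                  autocorrelation of the query image
    db_feats    : (N, 5) array -- the same five features for N database images
    """
    std = db_feats.std(axis=0) + 1e-12                  # guard against zero variance
    per_feature = np.abs(db_feats - query_feats) / std  # (N, 5) standardized distances
    combined = per_feature.sum(axis=1)                  # one combined distance per image
    order = np.argsort(combined)                        # ascending = most similar first
    return order, combined[order]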


Statistical Similarity Matching:

As a similarity measure we use the Mahalanobis distance, which takes into account the different magnitudes of different components. Similarity matching for a given query image involves searching the database for curvelet coefficients similar to those of the query. The Mahalanobis distance is a more suitable and effective measure than the Euclidean distance. The retrieved images are ranked by their distance from the query image. The equation below gives the Mahalanobis distance expression, where D is the Mahalanobis distance. The computed distances are ranked from closest to farthest; in addition, if the distance is less than a set threshold, the corresponding database image is considered close to, or a match for, the query image.

D^2 = (x - m)^T C^{-1} (x - m)

Where:

x - vector of data

m - vector of mean values of the independent variables

C^{-1} - inverse covariance matrix of the independent variables

T - indicates that the vector should be transposed

Here the multivariate vector is x = (x_1, x_2, x_3, ..., x_N)^T and the mean is m = (m_1, m_2, m_3, ..., m_N)^T.

The Mahalanobis distance used as a dissimilarity measure between two random vectors x and y is given by

d(x, y) = [(x - y)^T C^{-1} (x - y)]^{1/2}
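A small numpy example of the two expressions above is given below, with the covariance matrix estimated from an illustrative set of random feature vectors; it is a sketch of the computation, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)
db = rng.normal(size=(500, 5))        # 500 database feature vectors (illustrative)
x = rng.normal(size=5)                # query feature vector
y = db[0]                             # one database feature vector

m = db.mean(axis=0)                                # mean vector m
C_inv = np.linalg.inv(np.cov(db, rowvar=False))    # inverse covariance C^{-1}

# D^2 = (x - m)^T C^{-1} (x - m): squared distance of the query from the mean
d2_mean = (x - m) @ C_inv @ (x - m)

# Dissimilarity between two vectors x and y under the same covariance
d_xy = np.sqrt((x - y) @ C_inv @ (x - y))
print(d2_mean, d_xy)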

Zhenyu He et al (2009) defined precision P as the ratio of the number of retrieved relevant images r to the total number of retrieved images n, i.e., P = r/n. Precision measures the accuracy of the retrieval and is expressed in the equation below.

Precision = No. of relevant images retrieved / Total no. of images retrieved = r/n

Zhenyu He et al (2009) defined recall R as the ratio of the number of retrieved relevant images r to the total number m of relevant images in the whole database, i.e., R = r/m. Recall measures the robustness of the retrieval and is expressed in the equation below.

Recall = No. of relevant images retrieved / Total no. of relevant images in DB = r/m
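These two ratios translate directly into a short helper, sketched below; the relevance labels are assumed to come from the image categories (brain, breast, prostate, phantom), with m = 125 relevant images per category as in the experiment described above.

def precision_recall(retrieved_labels, query_label, total_relevant):
    """Precision P = r/n and recall R = r/m for one query.

    retrieved_labels : category labels of the n retrieved images
    query_label      : category of the query image
    total_relevant   : m, number of images of that category in the database
    """
    n = len(retrieved_labels)
    r = sum(1 for lbl in retrieved_labels if lbl == query_label)
    precision = r / n if n else 0.0
    recall = r / total_relevant if total_relevant else 0.0
    return precision, recall

# Example: 20 retrieved images, 18 from the query's category, 125 relevant in the DB
# precision_recall(['brain'] * 18 + ['breast'] * 2, 'brain', 125) -> (0.9, 0.144)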

5. Results and Retrieval Accuracy:

This section demonstrates the effectiveness and efficiency of the proposed curvelet and PCA with HTF based texture image retrieval approach using sample input images. Each database image is used as a query. For each query, the twenty images which best satisfy the query are retrieved. An improved retrieval rate is obtained with the proposed method. To study the performance of curvelet and PCA with HTF using the Mahalanobis similarity measure, the sample images are taken from a set of MRI images.

The top twenty retrieved images for the MRI brain, breast, prostate and phantom categories are depicted in Figures 6.4(a), 6.5(a), 6.6(a) and 6.7(a) respectively. Table 1 compares the retrieval efficiency of the proposed approaches with existing approaches such as GMM with KLD (Kullback-Leibler Distance), DWT and RCWF with CD (Canberra Distance), and DT-RCWF and DT-CWF with CD. From Table 2 it can be noted that the proposed approach has better average precision than the existing approaches. Table 3 compares the average recall of the proposed method with existing methods such as GMM with KLD, DWT and RCWF with CD, and DT-RCWF and DT-CWF with CD. Figures 6.2 and 6.3 show the average precision and recall of the proposed method and the other techniques. The best retrieval results are achieved when the search is performed with the curvelet and PCA with HTF using MD approach, and the proposed method consistently performs better than the other methods. The main reason is that curvelets are good at approximating curved singularities, so they extract crucial edge-based features from medical images more efficiently than the wavelet transform.

6. Conclusion:

A novel technique, referred to as integrated curvelet and PCA with HTF, has been proposed and developed. The performance of the proposed approach has been compared with DWT with RCWF using CD, GMM with KLD, and DT-CWF with DT-RCWF using CD; the proposed approach achieves an average retrieval efficiency of 92%. In the proposed work, texture features have been used for CBIR. Nevertheless, several applications, for example searching for a particular disease in a medical image database, would benefit from using both texture and shape features concurrently, since both play a significant role in such cases. As a result, an efficient integration of texture and shape features using curvelets should be considered in future work.

Moreover, to extract more texture features and further increase the retrieval rate, an approach integrating global and local features would be useful in the years to come. In 3-D problems, the computational complexity of the curvelet transform is higher than that of wavelets. The theory and application of 3-D curvelets is a growing research area, and it is possible that new, more efficient curvelet transforms will be developed in the future to deliver higher retrieval performance.

ARTICLE INFO

Article history:

Received 12 October 2014

Received in revised form 26 December 2014

Accepted 1 January 2015

Available online 25 February 2015

REFERENCES

Ahmadian, A., A. Mostafa, 2003. 'An efficient texture classification algorithm using gabor wavelet', in Proc. EMBS, pp: 930-933.

Candes, E.J., D.L. Donoho, 1999. 'Curvelets', manuscript.

Candes, E.J., L. Demanet, D.L. Donoho, L. Ying, 2005. 'Fast Discrete Curvelet Transforms', Multiscale Modeling and Simulation, 5: 861-899.

Do, M.N., M. Vetterli, 2002. 'Wavelet-based texture retrieval using generalized Gaussian density and Kullback-Leibler distance', IEEE Trans. Image Process, 11(2): 146-158.

Haralick, R.M., K. Shanmugam, I. Dinstein, 1973. 'Textural Features for Image Classification', IEEE Transactions on Systems, Man, and Cybernetics, 3(6): 610-621.

Hue Yuan, Xiao-ping Zhang, 2010. 'Statistical Modeling in the wavelet domain for compact feature extraction and similarity measure of images', IEEE Trans. Circuits and systems for video technology, 20(3): 439-445.

Kokare, M., P.K. Biswas, B.N. Chatterji, 2005. 'Texture image retrieval using new rotated complex wavelet filters', IEEE Trans. Syst., Man, Cybern. B, Cybern., 35(6): 1168-1178.

Kokare, M., P.K. Biswas, B.N. Chatterji, 2006. 'Rotation-invariant texture image retrieval using rotated complex wavelet filters', IEEE Trans.Syst., Man, Cybern. B, Cybern., 36(6): 1273-1282.

Kokare, M., P.K. Biswas, B.N. Chatterji, 2007. 'Texture image retrieval using rotated wavelet filters', Pattern Recogn. Lett., 28(10): 1240-1249.

Manjunath, B.S., W.Y. Ma, 1996. 'Texture features for browsing and retrieval of image data', IEEE Trans. Pattern Anal. Mach. Intell., 18(8): 837-842.

Minakshi Banerjee, Sanghamitra Bandyopadhyay, Sankar K. Pal, 'Rough Sets and Intelligent Systems', Volume 2, Springer, pp: 391-395.

Rowan Seymour, Darryl Stewart, Ji Ming, 2008. 'Comparison of Image Transform-Based Features for Visual Speech Recognition in Clean and Corrupted Videos', EURASIP Journal on Image and Video Processing.

Unser, M., 1993. 'Texture classification by wavelet packet signatures', IEEE Trans. Pattern Anal. Mach. Intell., 15(11): 1186-1191.

Zhenyu He, Xinge You, Yuan Yuan, 2009. 'Texture image retrieval based on non-tensor product wavelet filter banks', Signal Processing, 89: 1501-1510.

(1) Dr. K. Rajakumar, (2) Dr. S. Muttan, (3) G. Deepa, (4) S. Revathy, (5) B. Shanmuga Priya

(1) Professor in ECE, Adhiparasakthi College of Engineering, ANNA University, Chennai, India

(2) Professor & Head, Dept. of ECE, Anna University, Chennai-600 025, India

(3) Asst.Professor in Maths, VIT University, Vellore-632014, India

(4) M.E Student, Adhiparasakthi College of Engineering, ANNA University, India

(5) M.E Student, Adhiparasakthi College of Engineering, ANNA University, India

Corresponding Author: Dr. K. Rajakumar, Professor in ECE, Adhiparasakthi College of Engineering, ANNA University, Chennai, India.

Table 1: Comparison of assorted approaches with retrieval efficiency.

Approach (Proposed)        Retrieval Efficiency (%)
Curvelet+PCA+HTF+MD        92
Curvelet+PCA+MD            89
Wavelet+PCA+HTF+MD         86
Wavelet+PCA+MD             82

Approach (Existing)        Retrieval Efficiency (%)
GLMeP (GT+LMeP)            80
DWT+RCWF+CD                80
GMM+KLD                    78
DT-CWF+DT-RCWF+CD          77

Table 2: Comparison of the average precision for the proposed methods with existing methods for top hundred matches.

No. of top        Average Precision (Proposed)                         Average Precision (Existing)
images       Curvelet+PCA  Curvelet+  Wavelet+PCA  Wavelet+   GLMeP       DWT+      GMM+   DT-CWF+
considered   +HTF+MD       PCA+MD     +HTF+MD      PCA+MD     (GT+LMeP)   RCWF+CD   KLD    RCWF+CD
10           1             1          1            1          1           1         1      1
20           0.97          0.96       0.95         0.92       0.90        0.91      0.90   0.88
30           0.95          0.93       0.92         0.90       0.87        0.86      0.85   0.83
40           0.90          0.89       0.86         0.83       0.81        0.80      0.79   0.77
50           0.88          0.86       0.84         0.81       0.79        0.78      0.77   0.75
60           0.85          0.82       0.81         0.78       0.75        0.73      0.72   0.70
70           0.80          0.78       0.77         0.75       0.74        0.72      0.70   0.69
80           0.75          0.72       0.70         0.68       0.67        0.67      0.66   0.64
90           0.71          0.69       0.67         0.65       0.64        0.63      0.61   0.59
100          0.67          0.65       0.64         0.62       0.61        0.61      0.59   0.57

Table 3: Comparison of the average recall for the proposed method with existing methods for top hundred matches.

No. of top        Average Recall (Proposed) (%)                    Average Recall (Existing) (%)
images       Curvelet+PCA  Curvelet+  Wavelet+PCA  Wavelet+   DWT+      GMM+    DT-CWF+
considered   +HTF+MD       PCA+MD     +HTF+MD      PCA+MD     RCWF+CD   KLD     RCWF+CD
10           83.5          82.5       80.5         75         73        70      68
20           85.5          84.5       82.2         78         75        73.5    72.5
30           87            86         83           79         77.6      75.8    74.7
40           89.5          86.5       84           79.2       78        76.2    75
50           90.5          86.8       84.3         79.7       78.4      76.5    75.3
60           90.9          87         84.7         80.2       78.9      76.9    75.7
70           91.1          87.2       85           80.8       79.2      77.2    76
80           91.3          88.2       85.4         81.2       79.6      77.6    76.4
90           91.5          88.5       85.6         81.6       79.8      77.8    76.7
100          92            89         86           82         80        78      77