
Brain MR image classification using multiscale geometric analysis of Ripplet.


Magnetic resonance imaging (MRI) is a low-risk, fast, non-invasive imaging technique with no ionizing radiation hazard. MRI provides high-quality, high-contrast images of anatomical structures as well as functional images of different organs [1]. Soft tissue structures (heart, lungs, liver, brain and other organs) appear clearer and more detailed with MRI than with other medical imaging modalities. The noninvasive nature of MRI, together with its rich information content, makes it a widely used method for diagnosis and treatment planning [2-4]. Various researchers are not only trying to improve MR image quality, but also seeking novel methods for easier and quicker inference from the images [5-7]. In recent years, MRI has emerged as one of the most popular choices for studying the human brain [8-10]. MRI can detect a variety of brain conditions such as cysts, tumors, bleeding, swelling, developmental and structural abnormalities, infections, inflammatory conditions, and problems with the blood vessels. It can determine whether a shunt is working, and detect damage to the brain caused by an injury or a stroke.

However, because of the huge amount of imaging data, the existing manual methods of analysis and interpretation of brain images are tedious, time-consuming, costly and subject to human observer fatigue. This necessitates the development of automated diagnosis tools to draw quicker and easier inferences from MR images. Such automated systems can be of great help to medical personnel in diagnosis, prognosis, and pre- and post-surgical procedures [11,12]. One of the most distinguishing features of a normal human brain is its symmetry, which is obvious in axial and coronal brain magnetic resonance (MR) images. Conversely, asymmetry in an axial MR brain image strongly indicates abnormality or disorder [11,13]. This symmetry-asymmetry can be modelled by various image and signal processing techniques, which can be used to classify normal and abnormal brain MR images [6].

In recent years, various approaches of brain MR image classification have been proposed by different researchers. Chaplot et al. have achieved 94% and 98% accuracies through classifiers based on self-organizing map (SOM) and support vector machine (SVM), respectively. They have used discrete wavelet transform (DWT) for feature extraction [13]. In [14], Maitra and Chatterjee have shown that Slantlet transform can be combined with supervised classification (back-propagation neural network (BPNN)) technique to achieve 100% classification accuracy. Principal component analysis (PCA) is used to reduce the dimension of the feature vector obtained through DWT in [15] by El-Dahshan et al. They have achieved 97% and 98% success-rates through feed-forward BPNN and k-nearest neighbor (kNN) classifiers, respectively. Recently, Zhang et al. have proposed several advanced techniques for brain MR image classification with high classification accuracies [12,16-18]. In all the proposed techniques, they have used DWT for feature extraction, PCA for feature dimension reduction and different classifiers with various weight optimization schemes (forward neural network (FNN) + scaled chaotic artificial bee colony (SCABC) in [12], FNN + adaptive chaotic particle swarm optimization (ACPSO) in [16], BPNN + scaled conjugate gradient (SCG) in [17], kernel SVM (KSVM) with different kernels: linear (LIN), homogeneous polynomial (HPOL), inhomogeneous polynomial (IPOL) and Gaussian radial basis (GRB) in [18]) for segregating the normal and abnormal MR images.

Most of the existing brain MR image classification systems suffer from several shortcomings:

(i) These systems are based on DWT or its variants, which have several problems: limited directionality, lack of support for anisotropy, etc. Therefore, DWT cannot capture the subtle and intrinsic details of brain MR images, which are required for segregating normal and abnormal cases.

(ii) Although most of the state-of-the-art schemes utilize PCA for feature reduction to achieve computational efficiency, the dimension of the feature vector produced by the DWT + PCA combination is still comparatively high.

(iii) Moreover, most of these schemes use neural networks with complex weight optimization techniques, which lead to high computational complexity.

(iv) Furthermore, most of the existing techniques lack generalization capability: they work well on a particular small dataset, but fail to work efficiently on different datasets of various sizes with varying classes of diseases.

In general, most of the existing MR brain image classification systems consist of three different phases: feature extraction, feature reduction and classification. The proposed technique consists of similar process blocks.

The main motivation of this work is to develop an automatic brain MR image classification system with low computational complexity and high classification accuracy. A further motivation is to make the technique general, so that it works equally efficiently on different brain MR datasets consisting of varying numbers of disease classes. The main advantages of the proposed scheme are as follows:

(i) A fully automatic and accurate system for MR brain image classification.

(ii) The proposed scheme is based on RT, which is superior to DWT. RT has superior localization capability in both the spatial and frequency domains. Moreover, because of its general scaling with arbitrary degree and support, RT can capture 2D singularities along curves in any direction, thus providing a better image representation.

(iii) By applying PCA to the coefficients of the RT-decomposed image, the feature vector dimension is reduced to only 9 (much less than used by existing schemes), which results in computational efficiency.

(iv) The use of LS-SVM further reduces the computational cost by avoiding the need to solve a quadratic programming problem, and

(v) The proposed system is more effective than the state-of-the-art techniques: it works efficiently on datasets of different sizes containing abnormal brain MR images from a larger number of disease classes.

All the above-mentioned advantages make the proposed technique an effective and accurate system for brain MR image classification.

The rest of the paper is organized as follows. The theoretical backgrounds of RT, feature reduction and LS-SVM are described in Sections 2, 3 and 4, respectively. Section 5 presents the proposed system. Experimental results and comparisons are given in Section 6, and conclusions are drawn in Section 7.


2. Ripplet Transform

DWT and its variants have been used extensively by various researchers for feature extraction in brain MR image classification [12-18]. The problem with DWT, however, is that it is inherently non-supportive of directionality and anisotropy. To address these problems, Jun Xu et al. proposed a new MGA tool called the ripplet transform (RT) [19]. RT is a higher-dimensional generalization of the curvelet transform (CVT), capable of representing images or 2D signals at different scales and different directions. To achieve anisotropic directionality, CVT uses a parabolic scaling law [20]. From the perspective of micro-local analysis, the anisotropic property of CVT guarantees resolving 2D singularities along $C^2$ curves. In contrast, RT provides a new tight frame with sparse representation for images with discontinuities along $C^d$ curves [19].

There are two questions regarding the scaling law used in CVT: 1) Is the parabolic scaling law optimal for all types of boundaries? And if not, 2) what scaling law is optimal? To address these questions, Jun Xu et al. generalized the scaling law, which resulted in RT. RT generalizes CVT by adding two parameters, namely support c and degree d; CVT is just a special case of RT with c = 1 and d = 2. RT's ability to represent singularities along arbitrarily shaped curves is due to these new parameters c and d.
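To make the role of c and d concrete, the short sketch below (illustrative Python, not from the paper) computes the dyadic scale and the angular step implied by the generalized scaling law, assuming the standard sampling a_j = 2^(-j) and angular spacing (2π/c)·2^(-j(1-1/d)); the helper name `ripplet_sampling` is hypothetical. For c = 1, d = 2 it recovers the parabolic angular spacing of CVT.

```python
import numpy as np

def ripplet_sampling(j, c=1.0, d=2.0):
    """Return the dyadic scale a_j and the angular step at scale j
    for ripplet support c and degree d (illustrative helper)."""
    a_j = 2.0 ** (-j)                                  # a_j = 2^{-j}
    dtheta = (2.0 * np.pi / c) * 2.0 ** (-j * (1.0 - 1.0 / d))
    return a_j, dtheta

# Curvelet special case (c = 1, d = 2): angular step shrinks like 2^{-j/2}
a, dt = ripplet_sampling(j=4, c=1.0, d=2.0)
# a = 2^{-4} = 0.0625; dt = 2*pi * 2^{-2} = pi/2
```

Larger d slows the shrinkage of the angular step with scale, which is how RT trades angular resolution for the ability to follow more general curve shapes.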

As digital image processing needs a discrete transform instead of a continuous one, we describe here the discretization of RT [19]. The discretization of the continuous RT is based on sampling the parameters of the ripplet functions. The scale parameter $a$ is sampled at dyadic intervals; the position parameter $\vec{b}$ and the rotation parameter $\theta$ are sampled at equally spaced intervals. The discrete parameters $a_j$, $\vec{b}_k$ and $\theta_l$ replace $a$, $\vec{b}$ and $\theta$, respectively, and satisfy $a_j = 2^{-j}$, $\vec{b}_k = [c \cdot 2^{-j} \cdot k_1,\; 2^{-j/d} \cdot k_2]^T$ and $\theta_l = \frac{2\pi}{c} \cdot 2^{-j(1-1/d)} \cdot l$, where $k = [k_1, k_2]^T$ and $j, k_1, k_2, l \in \mathbb{Z}$; $(\cdot)^T$ denotes the transpose of a vector. Since $d \in \mathbb{R}$ and any real number can be approximated by rationals, we represent $d$ as $d = n/m$, with $n, m \neq 0 \in \mathbb{Z}$. Usually we prefer $n, m \in \mathbb{N}$ with $n$ and $m$ both prime. In the frequency domain, the corresponding frequency response of the ripplet function is of the form

$$\hat{\rho}_j(r, \omega) = \frac{1}{\sqrt{c}}\, a^{\frac{m+n}{2n}}\, W(2^{-j} \cdot r)\, V\!\left(\frac{1}{c} \cdot 2^{-j\frac{m-n}{n}} \cdot \omega - l\right) \quad (1)$$

where W and V are the radial window and the angular window, respectively. These two windows satisfy the following admissibility conditions:

$$\sum_{j=0}^{+\infty} \left| W(2^{-j} \cdot r) \right|^2 = 1 \quad (2)$$

$$\sum_{l=-\infty}^{+\infty} \left| V\!\left(\frac{1}{c} \cdot 2^{-j(1-1/d)} \cdot \omega - l\right) \right|^2 = 1 \quad (3)$$

given c, d and j. These two windows partition the polar frequency domain into 'wedges'. The 'wedge' corresponding to the ripplet function in the frequency domain is

$$H_{j,l}(r, \theta) = \left\{ (r, \theta) : 2^{j} \le |r| \le 2^{2j},\; \left| \theta - \frac{\pi}{c} \cdot 2^{-j(1-1/d)} \cdot l \right| \le \frac{\pi}{2} \cdot 2^{-j} \right\} \quad (4)$$

The discrete RT of an M × N image f(n₁, n₂) is of the form

$$R_{j,\vec{k},l} = \sum_{n_1=0}^{M-1} \sum_{n_2=0}^{N-1} f(n_1, n_2)\, \overline{\rho_{j,\vec{k},l}(n_1, n_2)} \quad (5)$$

where $R_{j,\vec{k},l}$ are the ripplet coefficients and $\overline{(\cdot)}$ denotes complex conjugation.

As a generalized version of CVT, RT is not only capable of resolving 2D singularities, but it also has some useful properties:

(i) It forms a new tight frame in a function space. Having good localization capability in both the spatial and frequency domains, it provides a more efficient and effective representation for images or 2D signals.

(ii) It has general scaling with arbitrary degree and support, which can capture 2D singularities along different curves in any direction.

Jun Xu et al. have shown that RT provides a more effective representation for images with singularities along smooth curves [19]. It outperforms the discrete cosine transform and DWT in nonlinear approximation when the number of retained coefficients is small. Applied to image compression, RT achieves roughly 2 dB higher PSNR on average than JPEG, and provides better visual quality than JPEG2000 at low bit-rates. In image denoising, RT performs better than CVT and DWT. RT also produces high-quality fused images when applied to medical image fusion [21]. All these experiments show that RT-based image coding is suitable for representing texture and edges in images.


3. Feature Reduction

There are two different feature-reduction phases in the proposed scheme. Here, we briefly describe these phases.

Let N × N be the size of a brain MR image I, with N = 2^m, m ∈ Z^+. After applying RT on I, let the size of the decomposed low-frequency subband (LFS) be M × M, with M = 2^p, p ∈ Z^+. Therefore, the first feature reduction happens in the transition N × N → M × M, M < N.

The second feature reduction is achieved using PCA [22]. Let T_var be the total variance of the original feature set, and S_var the total variance of the reduced feature set of dimension d, with d ≪ M². The value of d is selected such that S_var/T_var ≈ 0.9. Let F_red denote the achieved feature-reduction percentage:

F_red = ((N² − d) / N²) × 100% (6)
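The variance-ratio criterion above can be sketched as follows. This is an illustrative Python sketch with toy random data and a hypothetical helper name `reduce_features`; the actual system applies the criterion to RT coefficients in MATLAB.

```python
import numpy as np

def reduce_features(X, target_ratio=0.9):
    """Pick the smallest PCA dimension d retaining ~target_ratio of the
    total variance, and return the projected features and d.
    X: n_samples x n_features matrix (e.g., flattened LFS coefficients)."""
    Xc = X - X.mean(axis=0)                       # center the data
    # Principal components via SVD of the centered data matrix
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2                                  # variance per component
    ratio = np.cumsum(var) / var.sum()            # cumulative variance ratio
    d = int(np.searchsorted(ratio, target_ratio) + 1)
    return Xc @ Vt[:d].T, d                       # n_samples x d features

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))                    # toy 8x8 "LFS" features
Z, d = reduce_features(X)
N2 = 256 * 256                                    # original image pixel count
reduction = (N2 - d) / N2 * 100                   # Eq. (6)
```

Because d is tiny relative to the original N² pixels, the reduction percentage of Eq. (6) is well above 99% even for conservative variance targets.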


4. Least Squares Support Vector Machine

The main drawback of the support vector machine (SVM) is its high computational complexity for large-dimensional datasets. To reduce the computational burden, the least squares version of SVM (LS-SVM) is adopted as the classifier in this paper. Due to the equality-type constraints in its formulation, the solution follows from solving a set of linear equations, instead of the quadratic programming required for classical SVMs [23]. Consider a linearly separable binary classification problem:

$$\{(x_i, y_i)\}_{i=1}^{n}, \quad y_i \in \{+1, -1\} \quad (7)$$

where $x_i$ is an input feature vector and $y_i$ is its label, LS-SVM can be formulated as the optimization problem:

$$\min_{w, b, e}\; J(w, e) = \frac{1}{2} \|w\|^2 + \frac{C}{2} \sum_{i=1}^{n} e_i^2 \quad (8)$$
subject to the equality constraint:

$$y_i \left[ w^T \phi(x_i) + b \right] = 1 - e_i \quad (9)$$

where C > 0 is a regularization factor, b is a bias term, w is the weight vector, $e_i$ is the difference between the desired output and the actual output, and $\phi(x_i)$ is a mapping function.

The Lagrangian for the problem of Eq. (8) is defined as:

$$L(w, b, e; \alpha) = J(w, e) - \sum_{i=1}^{n} \alpha_i \left\{ y_i \left[ w^T \phi(x_i) + b \right] - 1 + e_i \right\} \quad (10)$$
where $\alpha_i$ are Lagrange multipliers. The Karush-Kuhn-Tucker (KKT) conditions for optimality are

$$\frac{\partial L}{\partial w} = 0 \rightarrow w = \sum_{i=1}^{n} \alpha_i y_i \phi(x_i); \quad \frac{\partial L}{\partial e_i} = 0 \rightarrow \alpha_i = C e_i; \quad \frac{\partial L}{\partial b} = 0 \rightarrow \sum_{i=1}^{n} \alpha_i y_i = 0; \quad \frac{\partial L}{\partial \alpha_i} = 0 \rightarrow y_i \left[ w^T \phi(x_i) + b \right] - 1 + e_i = 0,$$

whose solution follows from the linear system

$$\begin{bmatrix} 0 & Y^T \\ Y & \Phi \Phi^T + C^{-1} I \end{bmatrix} \begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ \bar{1} \end{bmatrix} \quad (11)$$

where $\Phi = [\phi(x_1)^T y_1, \ldots, \phi(x_n)^T y_n]$, $Y = [y_1, \ldots, y_n]$, $\bar{1} = [1, \ldots, 1]$, and $\alpha = [\alpha_1, \ldots, \alpha_n]$.

For a given kernel function K(·, ·) and a new test sample x, the LS-SVM classifier is given by

$$f(x) = \mathrm{sgn}\left[ \sum_{i=1}^{n} \alpha_i y_i K(x, x_i) + b \right] \quad (12)$$
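The derivation above can be sketched numerically: form the kernel version of the KKT system, solve it with a single linear solve (no quadratic programming), and classify with the sign rule of Eq. (12). The following is a minimal illustration with an RBF kernel, toy data and hypothetical helper names, not the paper's MATLAB implementation.

```python
import numpy as np

def rbf(X1, X2, gamma):
    """RBF kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvm_fit(X, y, C=10.0, gamma=1.0):
    """Solve the LS-SVM KKT linear system; returns bias b and alphas."""
    n = len(y)
    Omega = rbf(X, X, gamma) * np.outer(y, y)   # Omega_kl = y_k y_l K(x_k, x_l)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / C           # kernel block + I/C
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)               # one linear solve, no QP
    return sol[0], sol[1:]                      # b, alpha

def lssvm_predict(Xtr, ytr, b, alpha, Xte, gamma=1.0):
    """Eq. (12): sign of the kernel expansion plus bias."""
    return np.sign(rbf(Xte, Xtr, gamma) @ (alpha * ytr) + b)

# Toy, well-separated two-class data
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
b, alpha = lssvm_fit(X, y)
pred = lssvm_predict(X, y, b, alpha, X)
```

The single dense solve costs O(n³) but avoids the iterative QP machinery of classical SVMs, which is the computational saving the paper exploits.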


5. Proposed System

The proposed system consists of two phases, as shown in the block diagram of Fig. 1: an offline phase and an online phase. Both phases consist of the following steps: feature extraction based on RT, feature reduction through PCA, and classification by the LS-SVM classifier.

5.1. Offline Phase

Let n be the number of training images of size N × N, N = 2^m, m ∈ Z^+. The salient steps of the offline (training) phase are listed next:

Step 1: The training images are decomposed by RT to obtain the low-frequency subbands (LFSs) and high-frequency subbands (HFSs). Only the LFSs (one LFS per image) are used as features. Let M × M (M < N) be the size of each LFS, with M = 2^p, p ∈ Z^+. A feature matrix X of size n × M² is constructed from the coefficients of the LFSs. Each row of X consists of the M² coefficients belonging to a particular LFS, representing that image's feature vector of dimension M².

Step 2: The dimension of the feature vectors representing the training images is reduced from M² to d (say), where d ≪ M², by applying PCA on X, following the criterion mentioned in Section 3.

Step 3: The set of reduced feature vectors, along with the class information, is used to train an LS-SVM classifier. Cross validation is used to improve the generalization capability of the system.

5.2. Online Phase

The online phase of the proposed system consists of the following steps:

Step 1: The user (doctor, radiologist, etc.) inputs the brain MR image of size N × N to be classified. RT is applied on the input image to obtain the LFS of size M × M.

Step 2: The dimension of the feature vector representing the input query image is reduced from [M.sup.2] to d by applying PCA.

Step 3: This reduced feature vector of dimension d is used as input to the previously trained LS-SVM classifier, which classifies the input query image as normal or abnormal.
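The offline and online phases above can be sketched end to end. This is a hedged illustration only: a simple block-average low-pass stands in for the RT decomposition of Step 1 so the pipeline is runnable, and all names and toy data are hypothetical, not the authors' code.

```python
import numpy as np

def lfs_placeholder(img, M=8):
    """Stand-in for RT decomposition: an M x M low-frequency summary
    of the image, flattened into a feature vector."""
    N = img.shape[0]
    s = N // M
    return img.reshape(M, s, M, s).mean(axis=(1, 3)).ravel()

def pca_fit(X, d):
    """Fit PCA on feature matrix X; return mean and top-d components."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:d]

# --- Offline phase: features -> PCA -> (train LS-SVM on train_feats) ---
rng = np.random.default_rng(3)
train_imgs = rng.normal(size=(20, 32, 32))          # toy 32x32 "MR images"
X = np.stack([lfs_placeholder(im) for im in train_imgs])   # n x M^2 matrix
mu, components = pca_fit(X, d=9)                    # keep 9 components
train_feats = (X - mu) @ components.T               # n x 9, classifier input

# --- Online phase: same transform + same projection for a query image ---
query = rng.normal(size=(32, 32))
q_feat = (lfs_placeholder(query) - mu) @ components.T      # 9-dim vector
```

The key design point is that the online phase reuses the mean and components fitted offline; the query image never influences the PCA basis.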


6. Experimental Results and Comparisons

Extensive experiments were carried out to evaluate the performance of the proposed system in brain MR image classification.

6.1. Experimental Setup

We implemented the proposed technique in MATLAB, and experiments were carried out on a PC with a 2.66 GHz CPU and 4 GB RAM. The decomposition parameter of RT was levels = [1, 2, 4, 4], and we used '9/7' and 'pkva' as the pyramid filter and orientation filter, respectively. Three different MR image datasets were used in the experiments. All the datasets consist of T2-weighted MR brain images in the axial plane with 256 × 256 in-plane resolution, downloaded from the website of Harvard Medical School. The first two datasets are benchmark datasets, widely used in the brain MR image classification problem, and consist of abnormal images from 7 types of diseases along with normal images. The abnormal brain MR images of the benchmark datasets cover the following diseases: glioma, meningioma, Alzheimer's disease, Alzheimer's disease plus visual agnosia, Pick's disease, sarcoma and Huntington's disease. The first benchmark dataset (Dataset-66) consists of 66 (18 normal and 48 abnormal) brain MR images. There are in total 160 (20 normal and 140 abnormal) brain MR images in the second benchmark dataset (Dataset-160). The third, new, larger dataset (Dataset-255) consists of 255 (35 normal and 220 abnormal) brain MR images. Abnormal brain MR images of the third dataset are from 11 types of diseases, 7 of which are the same as in the two benchmark datasets mentioned before. The third dataset also contains abnormal images of 4 new types of diseases: chronic subdural hematoma, cerebral toxoplasmosis, herpes encephalitis and multiple sclerosis. Each of the 11 disease types is represented by 20 abnormal brain MR images. Fig. 2 shows samples of the brain MR images used in the experiments.

To make the LS-SVM classifier more reliable and generalizable to independent datasets, 5 × 5-fold and 5 × 6-fold stratified cross validation (CV) were employed: 5 × 6-fold stratified CV was used for Dataset-66, and 5 × 5-fold stratified CV was carried out for the other two datasets. For training the LS-SVM, we used the radial basis function (RBF) kernel $K(x_i, x_j) = \exp(-\gamma \|x_i - x_j\|^2)$, $\gamma > 0$. There are two tunable parameters when using the RBF kernel in the LS-SVM classifier: C and γ. The kernel parameter γ controls the shape of the kernel, and the regularization parameter C controls the tradeoff between margin maximization and error minimization. It is not known beforehand which values of C and γ are best for the classification problem at hand. Hence, various pairs of (C, γ) were tried over the course of the CV procedure, and the pair with the lowest CV error rate was picked, where C ∈ [1, 10] and γ ∈ [1, 3]. After finding the best values of C and γ, these values were used to train the LS-SVM model, and the test set was used to measure the error rate of the classification system. Tables 1-3 show the settings of the training and validation images for the datasets used in the experiments.
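The model-selection loop described above can be sketched as follows. The grid values, toy data and helper names are illustrative assumptions, not the authors' implementation; the inner classifier is a compact RBF-kernel LS-SVM solved as a linear system.

```python
import numpy as np
from itertools import product

def rbf(X1, X2, g):
    return np.exp(-g * ((X1[:, None] - X2[None, :]) ** 2).sum(-1))

def fit_predict(Xtr, ytr, Xte, C, g):
    """Train an RBF LS-SVM via its KKT linear system; predict labels."""
    n = len(ytr)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:], A[1:, 0] = ytr, ytr
    A[1:, 1:] = rbf(Xtr, Xtr, g) * np.outer(ytr, ytr) + np.eye(n) / C
    sol = np.linalg.solve(A, np.r_[0.0, np.ones(n)])
    b, alpha = sol[0], sol[1:]
    return np.sign(rbf(Xte, Xtr, g) @ (alpha * ytr) + b)

def stratified_folds(y, k, seed=0):
    """Split indices into k folds preserving the class ratio."""
    rng = np.random.default_rng(seed)
    folds = [[] for _ in range(k)]
    for cls in np.unique(y):
        for i, j in enumerate(rng.permutation(np.flatnonzero(y == cls))):
            folds[i % k].append(j)
    return [np.array(f) for f in folds]

def grid_search(X, y, Cs, gammas, k=5):
    """Return (mean CV error, C, gamma) for the best grid point."""
    best, folds = None, stratified_folds(y, k)
    for C, g in product(Cs, gammas):
        errs = []
        for f in folds:
            tr = np.setdiff1d(np.arange(len(y)), f)
            errs.append((fit_predict(X[tr], y[tr], X[f], C, g) != y[f]).mean())
        if best is None or np.mean(errs) < best[0]:
            best = (float(np.mean(errs)), C, g)
    return best

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-1.5, 0.5, (30, 2)), rng.normal(1.5, 0.5, (30, 2))])
y = np.array([-1.0] * 30 + [1.0] * 30)
err, C_best, g_best = grid_search(X, y, Cs=[1, 5, 10], gammas=[1, 2, 3])
```

After this search, the winning (C, γ) pair would be used to retrain on the full training split, mirroring the procedure described above.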

To compare the performance of our proposed method with the state-of-the-art techniques, we implemented several of the existing schemes. Quantitative evaluation of the proposed system and its performance comparison with other state-of-the-art techniques were analyzed using the following statistical measures:

Sensitivity (true positive fraction) is the probability that a diagnostic test is positive, given that the person has the disease:

Sensitivity = TP / (TP + FN) (13)

Specificity (true negative fraction) is the probability that a diagnostic test is negative, given that the person does not have the disease:

Specificity = TN / (TN + FP) (14)

Accuracy is the probability that the diagnostic test result is correct:

Accuracy = (TP + TN) / (TP + TN + FP + FN) (15)

where:


TP (True Positive)--correctly classified positive cases,

TN (True Negative)--correctly classified negative cases,

FP (False Positive)--incorrectly classified negative cases, and

FN (False Negative)--incorrectly classified positive cases.
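Eqs. (13)-(15) can be computed directly from confusion-matrix counts. The counts in the sketch below are hypothetical, chosen only to exercise the formulas; `diagnostic_metrics` is an illustrative helper name.

```python
def diagnostic_metrics(TP, TN, FP, FN):
    """Compute sensitivity, specificity and accuracy from the four
    confusion-matrix counts, per Eqs. (13)-(15)."""
    sensitivity = TP / (TP + FN)                 # Eq. (13)
    specificity = TN / (TN + FP)                 # Eq. (14)
    accuracy = (TP + TN) / (TP + TN + FP + FN)   # Eq. (15)
    return sensitivity, specificity, accuracy

# Hypothetical counts for one validation fold
sens, spec, acc = diagnostic_metrics(TP=43, TN=7, FP=0, FN=1)
# sens = 43/44, spec = 7/7 = 1.0, acc = 50/51
```

Reporting all three measures matters here because the datasets are class-imbalanced (far more abnormal than normal images), so accuracy alone could mask poor specificity.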

6.2. Results and Discussions

Several experiments were carried out to evaluate the performance of the proposed system in terms of feature reduction efficiency, classification accuracy, comparison with other state-of-the-art schemes, and computational complexity.

The proposed system is based on the MGA of the LFS coefficients obtained by RT decomposition. With the above-mentioned RT decomposition configuration, the size of the LFS is 32 × 32. PCA is used to reduce the feature vector size to only 9, where these 9 features are the first 9 principal components, preserving ≈ 90% of the total variance of the RT-decomposed features. This reduced feature set is only 0.88% and 0.014% of the initial feature set considering the LFS and the original image, respectively. Therefore, due to the RT + PCA combination, we have achieved over 99% feature reduction. The systems proposed in [12,16-18] use 19 principal components as the image-representing feature vector. We have thus not only achieved a 47.39% feature reduction relative to the state-of-the-art brain MR image classification techniques, but also higher accuracy. To find the number of principal components giving the best result, the performance of the proposed system was evaluated with different numbers of principal components (1 to 19). Figs. 3, 4 and 5 show the performance of the proposed system in terms of sensitivity, specificity and classification accuracy, respectively, for the three datasets with different numbers of principal components. It is clear from the results in Figs. 3, 4 and 5 that our proposed system works efficiently for all three datasets using only 9 principal components for image representation. With only 9 principal components, we achieved the best results for all the statistical measures used to evaluate the proposed system: sensitivity (1.00, 1.00, 0.97), specificity (1.00, 1.00, 0.99) and classification accuracy (100%, 100%, 99.39%) for Dataset-66, Dataset-160 and Dataset-255, respectively. The classification accuracy of the proposed system was also evaluated through receiver operating characteristic (ROC) curves, shown in Fig. 6. The proposed technique correctly classified the MR images of Dataset-66 and Dataset-160 with an average area under the curve (AUC) of 100%, with 0% standard deviation. For Dataset-255, we achieved an AUC of 99.45% (± 0.0046%).

We compared the performance of the proposed system with 13 state-of-the-art brain MR image classification schemes. As mentioned earlier, we implemented all these methods for proper performance comparison, and the performance of each implemented method was evaluated on all three datasets. The comparison results are shown in Table 4, which also lists the feature vector dimension of each scheme. The schemes described in [13] give the worst results in terms of accuracy. Moreover, these methods have the highest feature dimension (4761 features/image), which results in high computational complexity. The dimension of the feature vector used in [15] (7) is less than that of our proposed method (9), but it is obvious from the results of Table 4 that the method described in [15] is less efficient and less general than the proposed scheme in terms of classification accuracy. The techniques described in [12,16-18] show improved results in brain MR image classification with a lower feature vector dimension (19). However, these schemes use various complex weight optimization techniques, which themselves incur high computational complexity. Our proposed system, in contrast, requires a feature vector of dimension only 9, with the highest classification accuracies.

For the time requirement analysis, we used all the images of all three datasets to compute the overall time requirement of the proposed scheme. The computation times (excluding LS-SVM training) of all the constituent stages (feature extraction, feature reduction and classification) of the proposed system were recorded, and the average values were taken as the time requirement. The average feature extraction, feature reduction and LS-SVM classification times for an MR image of size 256 × 256 are 0.026 s, 0.014 s and 0.002 s, respectively. The overall average computation time per 256 × 256 MR image is thus about 0.042 s. Although feature extraction through RT takes slightly longer than DWT, the use of LS-SVM with only 9 principal components results in a lower overall time requirement. Moreover, with a 9-dimensional feature vector per training image, the storage cost of the stored image feature database is also reduced. From the above results and discussion, it is clear that the proposed system not only performs best among all the mentioned state-of-the-art brain MR image classification techniques, but also works efficiently on datasets of different sizes and with various disease classes.


7. Conclusion

The manual process of diagnosing brain MR images has several problems. This necessitates the development of diagnostic tools that can automatically and accurately classify brain MR images as normal or abnormal. Although several advanced schemes exist to achieve this goal, most of the state-of-the-art techniques have various shortcomings: they are based on DWT, which cannot capture the subtle and intrinsic details of brain MR images; they have relatively high feature vector dimensionality; they use neural networks with computationally expensive weight optimization techniques; and they lack generalization capability. In this article, we propose to combine the benefits of the MGA of RT and a computationally less expensive SVM (LS-SVM) to build a fully automatic and accurate brain MR image classification system. With this combination, we not only achieve higher feature reduction, but also acquire superior performance over the state-of-the-art schemes. Extensive experiments and comparisons show the effectiveness of the proposed system. In the future, we will investigate the effectiveness of other transforms along with other supervised and unsupervised classification schemes for brain MR image classification.


References

[1.] Westbrook, C., Handbook of MRI Technique, John Wiley & Sons, 2008.

[2.] Scapaticci, R., L. Di Donato, I. Catapano, and L. Crocco, "A feasibility study on microwave imaging for brain stroke monitoring," Progress In Electromagnetics Research B, Vol. 40, 305-324, 2012.

[3.] Prasad, P. V., Magnetic Resonance Imaging: Methods and Biologic Applications (Methods in Molecular Medicine), Humana Press, 2005.

[4.] Asimakis, N. P., I. S. Karanasiou, P. K. Gkonis, and N. K. Uzunoglu, "Theoretical analysis of a passive acoustic brain monitoring system," Progress In Electromagnetics Research B, Vol. 23, 165-180, 2010.

[5.] Mohsin, S. A., N. M. Sheikh, and U. Saeed, "MRI induced heating of deep brain stimulation leads: Effect of the air-tissue interface," Progress In Electromagnetics Research, Vol. 83, 81-91, 2008.

[6.] Maji, P., M. K. Kundu, and B. Chanda, "Second order fuzzy measure and weighted co-occurrence matrix for segmentation of brain MR images," Fundamenta Informaticae, Vol. 88, Nos. 1-2, 161-176, 2008.

[7.] Golestanirad, L., A. P. Izquierdo, S. J. Graham, J. R. Mosig, and C. Pollo, "Effect of realistic modeling of deep brain stimulation on the prediction of volume of activated tissue," Progress In Electromagnetics Research, Vol. 126, 1-16, 2012.

[8.] Mohsin, S. A., "Concentration of the specific absorption rate around deep brain stimulation electrodes during MRI," Progress In Electromagnetics Research, Vol. 121, 469-484, 2011.

[9.] Rombouts, S. A., F. Barkhof, and P. Scheltens, Clinical Applications of Functional Brain MRI, Oxford University Press, 2007.

[10.] Oikonomou, A., I. S. Karanasiou, and N. K. Uzunoglu, "Phased array near field radiometry for brain intracranial applications," Progress In Electromagnetics Research, Vol. 109, 345-360, 2010.

[11.] Maji, P., B. Chanda, M. K. Kundu, and S. Dasgupta, "Deformation correction in brain MRI using mutual information and genetic algorithm," Proc. Int. Conf. Computing: Theory and Applications, 372-376, 2007.

[12.] Zhang, Y., L. Wu, and S. Wang, "Magnetic resonance brain image classification by an improved artificial bee colony algorithm," Progress In Electromagnetics Research, Vol. 116, 65-79, 2011.

[13.] Chaplot, S., L. M. Patnaik, and N. R. Jagannathan, "Classification of magnetic resonance brain images using wavelets as input to support vector machine and neural network," Biomedical Signal Processing and Control, Vol. 1, No. 1, 86-92, 2006.

[14.] Maitra, M. and A. Chatterjee, "A Slantlet transform based intelligent system for magnetic resonance brain image classification," Biomedical Signal Processing and Control, Vol. 1, No. 4, 299-306, 2006.

[15.] El-Dahshan, E.-S. A., T. Hosny, and A.-B. M. Salem, "Hybrid intelligent techniques for MRI brain images classification," Digital Signal Processing, Vol. 20, No. 2, 433-441, 2010.

[16.] Zhang, Y., S. Wang, and L. Wu, "A novel method for magnetic resonance brain image classification based on adaptive chaotic PSO," Progress In Electromagnetics Research, Vol. 109, 325-343, 2010.

[17.] Zhang, Y., Z. Dong, L. Wu, and S. Wang, "A hybrid method for MRI brain image classification," Expert Systems with Applications, Vol. 38, No. 8, 10049-10053, 2011.

[18.] Zhang, Y. and L. Wu, "An MR brain images classifier via principal component analysis and kernel support vector machine," Progress In Electromagnetics Research, Vol. 130, 369-388, 2012.

[19.] Xu, J., L. Yang, and D. Wu, "Ripplet: A new transform for image processing," Journal of Visual Communication and Image Representation, Vol. 21, No. 7, 627-639, 2010.

[20.] Candes, E. J. and D. L. Donoho, "Continuous curvelet transform: I. Resolution of the wavefront set," Applied and Computational Harmonic Analysis, Vol. 19, No. 2, 162-197, 2005.

[21.] Das, S., M. Chowdhury, and M. K. Kundu, "Medical image fusion based on ripplet transform type-I," Progress In Electromagnetics Research B, Vol. 30, 355-370, 2011.

[22.] Jolliffe, I. T., Principal Component Analysis, Springer, 2002.

[23.] Suykens, J. A. K. and J. Vandewalle, "Least squares support vector machine classifiers," Neural Processing Letters, Vol. 9, No. 3, 293-300, 1999.

Sudeb Das *, Manish Chowdhury, and Malay K. Kundu

Machine Intelligence Unit, Indian Statistical Institute, 203 B. T. Road, Kolkata-108, India

Received 1 January 2013, Accepted 4 February 2013, Scheduled 6 February 2013

* Corresponding author: Sudeb Das

Table 1. Setting of one pass of 6-fold stratified CV for Dataset-66.

Class        Total   Training (55)   Validation (11)
Normal          18        15                3
Abnormal        48        40                8

Table 2. Setting of one pass of 5-fold stratified CV for Dataset-160.

Class        Total   Training (128)   Validation (32)
Normal          20         16                4
Abnormal       140        112               28

Table 3. Setting of one pass of 5-fold stratified CV for Dataset-255.

Class        Total   Training (204)   Validation (51)
Normal          35         28                7
Abnormal       220        177               43

Table 4. Performance comparison using the three datasets.

                                          Accuracy (%)
Scheme                         Feature   Dataset-66   Dataset-160   Dataset-255
DWT + SOM [13]                  4761        94.00        93.17         91.65
DWT + SVM + LIN [13]            4761        96.15        95.38         94.05
DWT + SVM + POLY [13]           4761        98.00        97.15         96.37
DWT + SVM + RBF [13]            4761        98.00        97.33         96.18
DWT + PCA + FNN [15]               7        97.00        96.98         95.29
DWT + PCA + kNN [15]               7        98.00        97.54         96.79
DWT + PCA + FNN + ACPSO [16]      19       100.00        98.75         97.38
DWT + PCA + BPNN + SCG [17]       19       100.00        98.29         97.14
DWT + PCA + FNN + SCABC [12]      19       100.00        98.93         97.81
DWT + PCA + KSVM + LIN [18]       19        96.01        95.00         94.29
DWT + PCA + KSVM + HPOL [18]      19        98.34        96.88         95.61
DWT + PCA + KSVM + IPOL [18]      19       100.00        98.12         97.73
DWT + PCA + KSVM + GRB [18]       19       100.00        99.38         98.82
Proposed Scheme                    9       100.00       100.00         99.39
Publication: Progress In Electromagnetics Research, May 1, 2013

COPYRIGHT 2013 Electromagnetics Academy