Heart Disease Diagnosis Utilizing Hybrid Fuzzy Wavelet Neural Network and Teaching Learning Based Optimization Algorithm.
Heart disease is a term that refers to any disturbance that makes the heart function abnormally. When the coronary arteries are narrowed or blocked, the blood flow to the myocardium is decreased; this is the main cause of heart disease in humans. There are several risk factors for this disease, including diabetes, smoking, obesity, a family history of heart disease, high cholesterol, and high blood pressure [1-3].
The incidence rate of heart disease is on the rise. Every year about 720,000 Americans have a heart attack. Among them, 515,000 have their first heart attack and 205,000 have a second (or third, etc.) heart attack. Heart disease causes the death of about 600,000 people in the United States every year, making it responsible for one in every four deaths.
Due to the large number of patients with heart disease, it became necessary to have a powerful tool that works accurately and efficiently for diagnosing this disease and helping the physicians make decisions about their patients. This is because the process of diagnosis and decision making is a difficult task and needs a lot of experience and skill.
Recently, a lot of research has been published in the field of medical diagnosis of heart disease. In 2008, Kahramanli and Allahverdi designed a hybrid system combining a fuzzy neural network (FNN) and an artificial neural network (ANN), trained by a backpropagation (BP) algorithm; the results showed that the proposed method is comparable with other methods. Das et al. in 2009 proposed a system for diagnosing heart disease using the neural network ensemble model, which combines several individual neural networks trained on the same task; however, this method increased the complexity and therefore the execution time. In 2011, Khemphila and Boonjing presented a classification approach to diagnose heart disease using a multilayer perceptron (MLP) with a backpropagation learning algorithm, together with a feature selection algorithm that used 8 features instead of 13; the accuracy was enhanced by 1.1% on the training dataset and by 0.82% on the testing dataset. In 2013, Beheshti et al. applied centripetal accelerated particle swarm optimization (CAPSO) to evolve the learning of an artificial neural network (ANN) used to classify a heart disease dataset; the results showed that the diagnosis rate still needed improvement. In all of these studies, the classification accuracy did not reach a desirable level, even though their main objective was to make the diagnosis of heart disease more accurate and efficient.
Moreover, the fuzzy neural network (FNN) combines a neural network and fuzzy logic in one system that contains both the interpretability and inference ability of fuzzy logic, to handle uncertainty, and the self-learning ability of a neural network, to improve the approximation accuracy [8-10]. Despite these advantages, the FNN/NN still suffers from drawbacks such as slow training speed, high approximation error, and poor convergence.
In recent years, many researchers have emphasized the use of the wavelet neural network (WNN), which combines the wavelet function and a neural network. WNN integrates the learning capability of the NN with the decomposition capability [8, 11], orthogonality, and time-frequency localization properties [10, 12-14] of the wavelet function. The main advantages of the WNN are better generalization capability, faster learning [8, 15], and smaller approximation errors and network size than the NN. Therefore, it is able to overcome the obstacles of the FNN/NN, especially in highly nonlinear systems.
According to the mentioned properties of the WNN and the fuzzy system, the fuzzy wavelet neural network (FWNN) is considered in our work. FWNN has been presented in several application areas: function learning, chaotic time series identification, function approximation, identification of nonlinear dynamic plants, and prediction of chaotic time series. FWNN combines the main advantages of a fuzzy system, the wavelet function, and neural networks; it therefore brings the low-level learning and good computational ability of the WNN into a fuzzy system, and the humanlike reasoning of the fuzzy system into the WNN.
The training process of the FWNN is a crucial task and requires robust optimization techniques to make the performance of this network accurate and efficient. According to [18-21], which compare several nature-inspired optimization algorithms, such as particle swarm optimization (PSO), differential evolution (DE), teaching learning based optimization (TLBO), artificial bee colony (ABC), and the firefly algorithm (FA), the TLBO algorithm is accurate, effective, and efficient and shows superior performance in comparison to the others. An attractive property of the TLBO algorithm is its simple mathematical model, which makes it one of the most powerful tools for finding the optimal solution in a short computational time. In addition, TLBO has balanced exploration and exploitation abilities, so it does not get stuck in local minima.
In accordance with the above-mentioned advantages of both FWNN and TLBO, in this paper a new method (TLBO-FWNN) is proposed to increase the efficiency of the heart disease diagnosis process.
The rest of this paper is organized as follows: Section 2 presents background on the FWNN; Section 3 explains the TLBO algorithm; Section 4 illustrates the proposed TLBO-FWNN method; Section 5 describes the heart disease dataset; Section 6 demonstrates K-fold cross validation; and finally Section 7 presents and discusses the experimental results.
2. The Background of Fuzzy Wavelet Neural Network (FWNN)
One of the most commonly used methods for diagnosing heart disease is the artificial neural network (ANN) [6, 7]. ANN is considered an effective artificial intelligence method when enough training data is available, although describing a suitable ANN structure is a difficult task. Fuzzy logic has the ability to deal with cognitive uncertainties in a humanlike manner; it is used to improve the capability of the neural network and to increase its learning rate. Thus, the combination of a neural network and fuzzy logic in one system leads to another powerful computational tool, the fuzzy neural network (FNN), which combines the advantages of both approaches [30, 31].
Most fuzzy neural networks use the sigmoid function as the activation function in the hidden layer, but this type of function can lead the training algorithm to converge to local minima and generally decreases the convergence speed of the network. Therefore, the wavelet function is used as an alternative to the sigmoid function [33, 34]. A wavelet is a waveform of limited duration whose mean value is zero. It has two parameters: the dilation parameter and the translation parameter.
The combination of wavelet theory with a fuzzy system and a neural network generates the so-called fuzzy wavelet neural network (FWNN). The structure of the FWNN, which contains seven layers, is described in Figure 1 [17, 33, 34].
(i) In the first layer, the number of nodes is equal to the number of input variables.
(ii) In the second layer, each node represents one fuzzy set, in which the calculation for the membership value of the input variable to the fuzzy set is carried out.
(iii) In the third layer, each node represents one fuzzy rule. Therefore, the number of nodes is related to the number of fuzzy rules. The output of this layer can be calculated using the following:
$\mu_i = \prod_{j=1}^{q} A_j^i(x_j)$, (1)

where $\prod$ denotes the AND operation and $A_j^i$ represents the membership function used to calculate the membership degrees of the input variables. In this study, a Gaussian membership function is used, which can be calculated using

$A_j^i(x_j) = \exp\left[-\left(\dfrac{x_j - c_j^i}{\sigma_j^i}\right)^2\right]$. (2)

In (2), $c_j^i$ and $\sigma_j^i$ represent the center and the width of the membership function, respectively.
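As an illustration, the Gaussian membership function of (2) can be written in a few lines (a sketch; the helper name `gaussian_mf` and argument names are ours, not the paper's):

```python
import numpy as np

def gaussian_mf(x, c, sigma):
    """Membership degree of input x in a fuzzy set with center c
    and width sigma, as in (2)."""
    return np.exp(-((x - c) / sigma) ** 2)
```

At the center ($x = c$) the membership degree is exactly 1, and it decays toward 0 as $x$ moves away from $c$ at a rate controlled by $\sigma$.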
(iv) The fourth layer contains the wavelet functions $\psi_{ij}$ in its neurons, which represent the consequent part of the fuzzy rules. In FWNN, the fuzzy rules can be represented by the following equation:

$R^i$: if $x_1$ is $A_1^i$, $x_2$ is $A_2^i$, ..., $x_q$ is $A_q^i$, then $y_i = w_i \sum_{j=1}^{q} \psi_{ij}(x_j)$, (3)

where $R^i$ refers to the $i$th rule ($1 \le i \le c$, where $c$ is the number of rules), $x_j$ refers to the $j$th input ($1 \le j \le q$, where $q$ is the number of input parameters), and $y_i$ refers to the $i$th output of the wavelet neural network (WNN).
In the wavelet neural network, which is described in Figure 2, the output can be calculated using

$y_i = w_i \sum_{j=1}^{q} \psi_{ij}(x_j)$, (4)

where $w_i$ refers to the weight coefficients and $\psi_{ij}$ represents a set of wavelet functions, called a wavelet family, that can be defined using

$\psi_{ij}(x_j) = \psi\!\left(\dfrac{x_j - b_{ij}}{a_{ij}}\right)$, $a_{ij} \ne 0$, (5)

where $a_{ij}$ and $b_{ij}$ represent the dilation and translation parameters, respectively, and $\psi$ refers to the mother wavelet; here the Mexican hat wavelet function is used, given in

$\psi(x) = \dfrac{1}{\sqrt{|a|}}\,(1 - 2x^2)\exp\!\left(-\dfrac{x^2}{2}\right)$. (6)
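The mother wavelet of (6) can likewise be written directly from the formula (a sketch; the function name and the explicit `a` argument are our choices, following the paper's convention of keeping the $1/\sqrt{|a|}$ factor inside $\psi$):

```python
import numpy as np

def mexican_hat(x, a):
    """Mexican hat mother wavelet of (6); a is the dilation parameter
    of the corresponding wavelet node."""
    return (1.0 / np.sqrt(np.abs(a))) * (1.0 - 2.0 * x ** 2) * np.exp(-(x ** 2) / 2.0)
```

With $a = 1$ the function equals 1 at $x = 0$ and crosses zero at $x = \pm 1/\sqrt{2}$, where the factor $1 - 2x^2$ vanishes.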
(v) In the fifth layer, the outputs of the third layer are multiplied by the outputs of the fourth layer. The output of this layer ($L_5$) can be calculated using

$L_5(i) = \mu_i \cdot y_i$, (7)

where $1 \le i \le c$ and $c$ refers to the number of fuzzy rules and wavelet functions.
(vi) In the sixth layer, the output involves two parts. The first part ($L_6^a$) aggregates the outputs of the fifth layer, which can be represented using

$L_6^a = \sum_{i=1}^{c} \mu_i \, y_i$. (8)

The second part ($L_6^b$) aggregates the outputs of the third layer, which can be represented using

$L_6^b = \sum_{i=1}^{c} \mu_i$. (9)
(vii) The seventh layer performs the defuzzification process, which computes the overall output of the FWNN using

$u = \dfrac{\sum_{i=1}^{c} \mu_i \, y_i}{\sum_{i=1}^{c} \mu_i}$. (10)
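Putting layers one through seven together, a single forward pass of the FWNN can be sketched as follows (a minimal vectorized sketch under our own shape conventions; the paper does not prescribe an implementation):

```python
import numpy as np

def fwnn_forward(x, c, sigma, a, b, w):
    """Forward pass through the seven-layer FWNN.
    Assumed shapes: x -> (q,); c, sigma, a, b -> (rules, q); w -> (rules,)."""
    # Layers 1-3: Gaussian memberships and rule firing strengths mu_i, eq. (1)-(2)
    mu = np.prod(np.exp(-((x - c) / sigma) ** 2), axis=1)
    # Layer 4: wavelet consequents y_i of the fuzzy rules, eq. (3)-(6)
    z = (x - b) / a
    psi = (1.0 / np.sqrt(np.abs(a))) * (1.0 - 2.0 * z ** 2) * np.exp(-(z ** 2) / 2.0)
    y = w * psi.sum(axis=1)
    # Layers 5-7: weighting, aggregation, and defuzzification, eq. (7)-(10)
    return np.sum(mu * y) / np.sum(mu)
```

With a single rule whose centers and translations coincide with the input, the firing strength is 1 and the output reduces to that rule's wavelet consequent.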
3. Review of Teaching Learning Based Optimization (TLBO) Algorithm
The teaching learning based optimization (TLBO) algorithm is a new optimization algorithm proposed by Rao et al. in 2011. It is inspired by the way learners acquire knowledge in a class: either from the teacher, who is considered a highly learned person with a great influence on the output of the students (the teacher phase), or through interaction among the learners themselves (the learner phase) [20, 21, 24, 35-37].
TLBO is a population-based algorithm in which the population is viewed as a class of n learners. Each learner represents a candidate solution, and the dimensions of each solution, viewed as the different subjects offered to the learners, represent the parameters of the objective function of the given optimization problem. The fitness value of each solution corresponds to a student's grades, and the learner with the best fitness value is considered the teacher [36, 38-40].
The main characteristic of this algorithm is that it requires no algorithm-specific parameters; it uses common parameters only [27, 41]. The implementation of the TLBO algorithm proceeds through the following steps [24, 38].
(1) Define the optimization problem and initialize the common parameters: the population size (ps), which represents the number of learners (n), and the dimension of each learner (D), which represents the subjects offered to the learners. In addition, set the maximum number of iterations and the constraint variables (lb, ub), which denote the lower and upper boundaries, respectively.
(2) Generate the initial population randomly with n rows and m columns within [lb, ub] and calculate the objective function value of each solution using f(x), x = 1, 2, 3, ..., n. The results are then sorted in ascending order (ascending order is convenient for finding a minimum; a maximum can be obtained by multiplying the objective by -1 beforehand):
$A = \begin{bmatrix} A_1^1 & A_2^1 & \cdots & A_m^1 \\ A_1^2 & A_2^2 & \cdots & A_m^2 \\ \vdots & \vdots & & \vdots \\ A_1^n & A_2^n & \cdots & A_m^n \end{bmatrix}$, (11)

where $f(1) < f(2) < \cdots < f(n-1) < f(n)$. Therefore, the first learner $A^1 = (A_1^1\; A_2^1\; \cdots\; A_m^1)$ is considered to be the best solution (the teacher).
(3) In the teacher phase, calculate the mean of the population column-wise:

$A^{\mathrm{mean}} = \left(\dfrac{1}{n}\sum_{x=1}^{n} A_1^x,\; \dfrac{1}{n}\sum_{x=1}^{n} A_2^x,\; \ldots,\; \dfrac{1}{n}\sum_{x=1}^{n} A_m^x\right)$. (12)
(4) The teacher tries to improve the grade average of the students using
$A^{\mathrm{new},i} = A^i + r_i \left(A^1 - T_F\, A^{\mathrm{mean}}\right)$, $i = 1, 2, \ldots, n$, (13)

where $A^{\mathrm{new},i}$ represents the improved learner, $A^i$ the current learner, $r_i$ a random number in the interval [0, 1], $A^1$ the desired mean (the teacher), $A^{\mathrm{mean}}$ the current mean, and $T_F$ a teaching factor that decides how much of the mean is changed. $T_F$ is not a parameter of the TLBO algorithm; it is determined randomly using

$T_F = \mathrm{round}\,[1 + \mathrm{rand}(0,1)]$, (14)

so $T_F$ is either 1 or 2.
In $A^{\mathrm{new},i}$, if the value of any variable is less than lb or greater than ub, it is set to lb or ub, respectively. A greedy selection is then applied:

If $f(A^{\mathrm{new},i}) < f(A^i)$, then $A^i = A^{\mathrm{new},i}$; else $A^i$ is kept. (15)
(5) In the learner phase, a learner interacts randomly with other learners to enhance his or her knowledge.
(6) Randomly select two learners $A^i$ and $A^j$, where $i \ne j$:
$A^{\mathrm{new},i} = A^i + r_i\,(A^i - A^j)$ if $f(A^i) < f(A^j)$; $A^{\mathrm{new},i} = A^i + r_i\,(A^j - A^i)$ if $f(A^j) < f(A^i)$. (16)
In $A^{\mathrm{new},i}$, if the value of any variable is less than lb or greater than ub, it is set to lb or ub, respectively:

If $f(A^{\mathrm{new},i}) < f(A^i)$, then $A^i = A^{\mathrm{new},i}$; else $A^i$ is kept. (17)
(7) Duplicate solutions are modified in order to avoid becoming trapped in local optima, by applying a mutation process to randomly selected dimensions of the duplicate solutions before the next iteration.
(8) Sort the results in an ascending order corresponding to ps.
(9) Repeat steps (3) to (8) until the termination condition is satisfied.
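The steps above can be condensed into a short sketch for a minimization problem (our own minimal implementation; function names and default values are illustrative, and the sorting and duplicate-mutation steps are omitted for brevity):

```python
import numpy as np

def tlbo(f, lb, ub, n=20, dim=5, iters=100, seed=0):
    """Minimize f over [lb, ub]^dim with TLBO (teacher + learner phases)."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, (n, dim))            # step (2): random class
    fit = np.array([f(x) for x in pop])
    for _ in range(iters):
        for i in range(n):
            teacher = pop[np.argmin(fit)]          # current best learner
            mean = pop.mean(axis=0)                # column-wise mean, eq. (12)
            # Teacher phase, eq. (13)-(15)
            tf = rng.integers(1, 3)                # T_F = round(1 + rand) in {1, 2}
            new = np.clip(pop[i] + rng.random(dim) * (teacher - tf * mean), lb, ub)
            if f(new) < fit[i]:
                pop[i], fit[i] = new, f(new)
            # Learner phase, eq. (16)-(17): interact with a random classmate j != i
            j = rng.choice([k for k in range(n) if k != i])
            step = pop[i] - pop[j] if fit[i] < fit[j] else pop[j] - pop[i]
            new = np.clip(pop[i] + rng.random(dim) * step, lb, ub)
            if f(new) < fit[i]:
                pop[i], fit[i] = new, f(new)
    best = np.argmin(fit)
    return pop[best], fit[best]
```

On a smooth test function such as the sphere, `tlbo(lambda x: float(np.sum(x ** 2)), -5.0, 5.0)` drives the objective close to zero; the greedy selection in both phases guarantees that no learner ever gets worse.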
4. The Proposed Method of TLBO-FWNN
To increase the accuracy of the diagnosis process of heart disease using FWNN, one of the robust optimization algorithms should be used for FWNN training. Thus, the methodology of this study includes two important procedures. The first one is constructing the structure of FWNN for the heart dataset that will be used in both the training and testing phases. The second procedure represents the training process for the constructed FWNN by utilizing the TLBO algorithm. For conducting the training process, a sample of data related to heart disease, which is called training data, is used as the input variables to the FWNN. Then, the mean square error (MSE) is calculated, which represents the difference between the actual output and the desired output of the FWNN. MSE is computed using
$E_p = \dfrac{1}{L} \sum_{l=1}^{L} \left(u_l^d - u_l\right)^2$, (18)

where $l$ ($1 \le l \le L$) indexes the input patterns, $p$ ($1 \le p \le P$) is the iteration number, $u_l^d$ represents the desired output, and $u_l$ the actual output of the FWNN.
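The objective of (18) is the familiar mean squared error; a small helper (name ours) makes the fitness evaluation concrete:

```python
import numpy as np

def mse(desired, actual):
    """Training objective E_p of (18): mean squared error over the L patterns."""
    desired, actual = np.asarray(desired, float), np.asarray(actual, float)
    return float(np.mean((desired - actual) ** 2))
```

For example, for desired outputs [1, 0, 1] and actual outputs [1, 1, 1], one of three patterns is wrong by 1, so the MSE is 1/3.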
The MSE represents the objective function of the TLBO algorithm, which is used to calculate the objective function value of each individual in the population. The population in this algorithm is represented by a set of solutions; each solution refers to one learner and that solution has a number of values that indicate the number of parameters (subjects) to be updated in the FWNN. In this study, the parameters are linkage weights in the FWNN and the wavelet parameters (dilation and translation). The value of these parameters is initialized randomly. Then, these values are updated using the TLBO algorithm to obtain optimal values with the minimum error rate and the highest classification accuracy.
In the testing phase, the optimal values obtained from the training phase with the testing data will be used as the input variables to test the FWNN trained by the TLBO algorithm. The output of the FWNN is calculated and compared with the desired output to investigate the learning ability of the FWNN to classify the heart dataset.
The main steps of training the FWNN using the TLBO algorithm are as follows.
(1) Initialize randomly the values of each learner (i.e., weights, dilation parameters, and translation parameters) within the interval [-1, 1], which represents the lower and upper boundaries, respectively. Then, initialize the common parameters of the TLBO algorithm, which are population size, maximum iteration number, and the dimension.
(2) Set Cycle = 1.
(3) Evaluate each learner by calculating its objective function value based on the FWNN, which gives the error rate of each learner.
(4) Update the weight, dilation, and translation parameters using the TLBO algorithm.
(5) Keep the best learner, which represents the teacher (the best values of weights, dilation, and translation parameters).
(6) Set Cycle = Cycle + 1.
(7) Repeat steps (3) to (6) until the maximum iteration number is reached. Figure 3 represents the flowchart of the TLBO-FWNN method.
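A learner in this setting is simply a flat vector of FWNN parameters. With three rules and 13 input attributes (the configuration reported in Section 7), the layout below reproduces the stated dimension D = 81; the packing order is our assumption, only the counts come from the paper:

```python
import numpy as np

RULES, INPUTS = 3, 13                      # 3 rules/wavelets, 13 attributes

def unpack(theta):
    """Split one TLBO learner into the FWNN parameters it encodes:
    rule weights w, dilations a, and translations b."""
    w = theta[:RULES]
    a = theta[RULES:RULES + RULES * INPUTS].reshape(RULES, INPUTS)
    b = theta[RULES + RULES * INPUTS:].reshape(RULES, INPUTS)
    return w, a, b

D = RULES + 2 * RULES * INPUTS             # 3 + 39 + 39 = 81, as in Section 7
theta0 = np.random.uniform(-1.0, 1.0, D)   # step (1): initialize within [-1, 1]
```

Evaluating a learner then amounts to unpacking its vector, running the FWNN forward pass over the training patterns, and returning the MSE of (18) as its fitness.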
5. The Heart Disease Dataset
The Cleveland heart disease dataset was obtained from the Cleveland Clinic Foundation, where it was collected by Robert Detrano. This dataset is used to predict the presence or absence of heart disease. It includes 303 instances, but only 297 of them were used in this study because 6 instances have missing attribute values. The dataset contains 160 normal instances and 137 abnormal instances. Each instance has 76 attributes, but published experiments typically use only 14 of them. Tables 1 and 2 briefly describe these attributes.
6. K-Fold Cross Validation
During the training process of a neural network, the use of K-fold cross validation makes the results of the testing process more reliable because it guarantees that all data is used for both training and testing. In K-fold cross validation, the data is randomly divided into K roughly equal parts called folds. Of the K folds, one fold is selected for testing and the remaining K - 1 folds are used for training. This process is repeated K times, once per fold. Finally, all testing results are averaged to produce a single estimate [45, 46]. In this study, fivefold cross validation is used.
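A fold split of this kind is easy to reproduce (a sketch; the helper name and the fixed seed are our choices):

```python
import numpy as np

def kfold_indices(n_samples, k=5, seed=0):
    """Shuffle the sample indices and split them into k roughly equal folds."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    return np.array_split(idx, k)

folds = kfold_indices(297, k=5)            # the 297 usable Cleveland instances
for i, test_idx in enumerate(folds):
    train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
    # ... train on train_idx, evaluate on test_idx, then average the k results
```

Every instance lands in exactly one test fold, which is what makes the averaged estimate cover the whole dataset.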
7. Experimental Results and Discussion
In this study, the performance of the proposed system for diagnosing the presence (1) or absence (0) of heart disease is investigated on a common benchmark, the Cleveland heart disease dataset. We measured the error rate, the classification rate, and the time taken. The heart dataset is divided into 5 folds: one fold is used for testing and the other 4 folds for training. Each fold has approximately 60 instances, and each instance has 13 attributes, as shown in Table 1. This process is repeated 5 times. As mentioned previously, the TLBO algorithm has common parameters: the population size (ps) and the dimension of each solution (D). In this research, D is equal to 81, which covers the weight, dilation, and translation parameters. The value of ps, however, is varied, because the user of this algorithm does not generally know its most appropriate value in advance. The training process is therefore repeated in three separate experiments with ps equal to 50, 100, and 150. The maximum iteration number, which serves as the stopping condition, is set to 500. In the FWNN, the classification of the heart dataset is based on the number of fuzzy sets, fuzzy rules, and wavelet functions, each of which is equal to three, in addition to the value of each attribute.
The error rate, that is, the percentage of incorrect classifications when training the FWNN with the TLBO algorithm on the Cleveland heart disease dataset, is reported in Table 3. The results show that TLBO-FWNN reached the minimum error rate (0.0585) when the population size was 150.
Moreover, we notice from Table 4, which reports the correct classification percentage, that the best classification accuracy (94.1422) is obtained when the population size is 150. Increasing the population size, however, increases the training duration, as shown in Table 5: the average training time is 138.31 minutes when the population size is 150, versus 45.39 minutes when it is 50. Also, the classification accuracy and the error rate are very close to each other for population sizes 100 and 150, as shown in Tables 3 and 4.
Tables 6 and 7 illustrate the MSE and the classification accuracy, respectively, of testing the FWNN on unseen heart data using the optimal parameter values obtained from the TLBO algorithm. In Table 6, the minimum average error rate (0.0970) is obtained when the population size is 100. In addition, the highest average classification rate (90.2909) is also achieved at a population size of 100.
Moreover, the input variables of the FNN part are automatically normalized during the fuzzification process, so the input variables of the WNN part should be normalized as well, to be homogeneous with them. The data normalization is done by finding the maximum value of each column of input variables and then dividing each value in that column by this maximum value.
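The column-wise max normalization described above amounts to one line (a sketch; it assumes the attribute values are positive, as in the heart dataset):

```python
import numpy as np

def normalize_columns(X):
    """Divide each column of X by that column's maximum value."""
    X = np.asarray(X, float)
    return X / X.max(axis=0)
```

After this transformation every column's maximum is exactly 1, so all attributes fall on a comparable scale.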
The error rate of training the FWNN with the TLBO algorithm on the normalized heart disease dataset is given in Table 8. The results show that TLBO-FWNN reached the minimum error rate across all experiments (0.0489) when the population size was 100.
In addition, as shown in Table 9, which represents the correct classification percentage, the highest average classification accuracy (95.0964) is obtained when the population size equals 100.
The average time taken was only 88.498 minutes when the population size was 100, as shown in Table 10.
Tables 11 and 12 present the MSE and the classification accuracy, respectively, when testing the FWNN on normalized unseen heart data. In Table 11, the lowest average testing error rate (0.0997) is obtained when the population size is 50. In Table 12, the highest average classification rate (90.0213) is likewise obtained at a population size of 50.
In conclusion, comparing the maximum classification rates obtained when testing the FWNN on normalized and nonnormalized data, the classification accuracy on nonnormalized data (90.2909) is slightly better than that on normalized data (90.0213).
Also, to investigate the performance of the proposed TLBO-FWNN method, a comparison with eight recently proposed methods from the literature was carried out on the same dataset.
As shown in Table 13 and Figure 4, the proposed TLBO-FWNN method has the best performance for diagnosing heart disease in terms of classification accuracy (90.2909) compared to the other methods, while the GSA+MLP method performed worst among them.
8. Conclusion and Future Work
In this paper, a hybrid of a fuzzy wavelet neural network and the teaching learning based optimization algorithm was used to classify the presence or absence of heart disease. The teaching learning based optimization (TLBO) algorithm was proposed for training the fuzzy wavelet neural network (FWNN). The simulation results show that with a medium population size (100), TLBO-FWNN gives good results in a comparatively short time. In addition, these results demonstrate that the TLBO-FWNN method outperforms other published methods, giving the highest classification accuracy.
In addition, there are some suggestions that can be applied to enhance the performance of the TLBO-FWNN method in the future, such as using a TLBO algorithm to evolve the structure of the FWNN or utilizing another optimization algorithm to enhance the learning of the FWNN.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
 K. Polat, S. Gunes, and S. Tosun, "Diagnosis of heart disease using artificial immune recognition system and fuzzy weighted pre-processing," Pattern Recognition, vol. 39, no. 11, pp. 2186-2193, 2006.
 P. Gayathri and N. Jaisankar, "Comprehensive study of heart disease diagnosis using data mining and soft computing techniques," International Journal of Engineering & Technology, vol. 5, no. 3, pp. 2947-2958, 2013.
 R. Das, I. Turkoglu, and A. Sengur, "Effective diagnosis of heart disease through neural networks ensembles," Expert Systems with Applications, vol. 36, no. 4, pp. 7675-7680, 2009.
 Centers for Disease Control and Prevention, Heart disease facts, 2014, http://www.cdc.gov/heartdisease/facts.htm.
 H. Kahramanli and N. Allahverdi, "Design of a hybrid system for the diabetes and heart diseases," Expert Systems with Applications, vol. 35, no. 1-2, pp. 82-89, 2008.
 A. Khemphila and V. Boonjing, "Heart disease classification using neural network and feature selection," in Proceedings of the 21st International Conference on Systems Engineering (ICSEng '11), pp. 406-409, IEEE Computer Society, Las Vegas, Nev, USA, August 2011.
 Z. Beheshti, S. M. H. Shamsuddin, E. Beheshti, and S. S. Yuhaniz, "Enhancement of artificial neural network learning using centripetal accelerated particle swarm optimization for medical diseases diagnosis," Soft Computing, pp. 1-18, 2013.
 Y. Wang, T. Mai, and J. Mao, "Adaptive motion/force control strategy for non-holonomic mobile manipulator robot using recurrent fuzzy wavelet neural networks," Engineering Applications of Artificial Intelligence, vol. 34, pp. 137-153, 2014.
 E. Lughofer, "On-line assurance of interpretability criteria in evolving fuzzy systems--achievements, new concepts and open issues," Information Sciences, vol. 251, pp. 22-46, 2013.
 S. Tzeng, "Design of fuzzy wavelet neural networks using the GA approach for function approximation and system identification," Fuzzy Sets and Systems, vol. 161, no. 19, pp. 2585-2596, 2010.
 E. Karatepe and T. Hiyama, "Fuzzy wavelet network identification of optimum operating point of non-crystalline silicon solar cells," Computers and Mathematics with Applications, vol. 63, no. 1, pp. 68-82, 2012.
 V. S. Kodogiannis, M. Amina, and I. Petrounias, "A clustering-based fuzzy wavelet neural network model for short-term load forecasting," International Journal of Neural Systems, vol. 23, no. 5, Article ID 1350024, 2013.
 M. Shahriari Kahkeshi, F. Sheikholeslam, and M. Zekri, "Design of adaptive fuzzy wavelet neural sliding mode controller for uncertain nonlinear systems," ISA Transactions, vol. 52, no. 3, pp. 342-350, 2013.
 Y. Bodyanskiy and O. Vynokurova, "Hybrid adaptive wavelet-neuro-fuzzy system for chaotic time series identification," Information Sciences, vol. 220, pp. 170-179, 2013.
 C. M. Lin, A. B. Ting, C. F. Hsu, and C. M. Chung, "Adaptive control for mimo uncertain nonlinear systems using recurrent wavelet neural network," International Journal of Neural Systems, vol. 22, no. 1, pp. 37-50, 2012.
 D. W. C. Ho, P.-A. Zhang, and J. Xu, "Fuzzy wavelet networks for function learning," IEEE Transactions on Fuzzy Systems, vol. 9, no. 1, pp. 200-211, 2001.
 M. Davanipoor, M. Zekri, and F. Sheikholeslam, "Fuzzy wavelet neural network with an accelerated hybrid learning algorithm," IEEE Transactions on Fuzzy Systems, vol. 20, no. 3, pp. 463-470, 2012.
 S. C. Satapathy, A. Naik, and K. Parvathi, "Teaching learning based optimization for neural networks learning enhancement," in Swarm, Evolutionary, and Memetic Computing, pp. 761-769, Springer, New York, NY, USA, 2012.
 J. Salah Aldeen and R. A. Wahid, "A comparative study among some natural-inspired optimization algorithms," Journal of Education and Science. In press.
 R. Rao and G. Waghmare, "Solving composite test functions using teaching-learning-based optimization algorithm," in Proceedings of the International Conference on Frontiers of Intelligent Computing: Theory and Applications (FICTA '13), vol. 199, pp. 395-403, Springer, 2013.
 R. V. Rao, V. J. Savsani, and D. P. Vakharia, "Teaching-learning-based optimization: an optimization method for continuous non-linear large scale problems," Information Sciences, vol. 183, pp. 1-15, 2012.
 J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942-1948, Western Australia, Australia, December 1995.
 R. Storn and K. Price, "Differential evolution--a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341-359, 1997.
 R. V. Rao, V. J. Savsani, and D. P. Vakharia, "Teaching-learning-based optimization: a novel method for constrained mechanical design optimization problems," Computer Aided Design, vol. 43, no. 3, pp. 303-315, 2011.
 D. Karaboga, "An idea based on honey bee swarm for numerical optimization," Tech. Rep., Computer Engineering Department, Engineering Faculty, Erciyes University, Kayseri, Turkey, 2005.
 X.-S. Yang, "Firefly algorithms for multimodal optimization," in Stochastic Algorithms: Foundations and Applications, pp. 169-178, Springer, New York, NY, USA, 2009.
 E. Uzlu, M. Kanka, A. Akpinar, T. Dede, and A. Akpinar, "Estimates of energy consumption in Turkey using neural networks with the teaching-learning-based optimization algorithm," Energy, 2014.
 H. Malek, M. M. Ebadzadeh, and M. Rahmati, "Three new fuzzy neural networks learning algorithms based on clustering, training error and genetic algorithm," Applied Intelligence, vol. 37, no. 2, pp. 280-289, 2012.
 L. A. Zadeh, "Fuzzy sets," Information and Control, vol. 8, pp. 338-353, 1965.
 M. M. Gupta and D. H. Rao, "On the principles of fuzzy neural networks," Fuzzy Sets and Systems, vol. 61, no. 1, pp. 1-18, 1994.
 T. Hassanzadeh, K. Faez, and G. Seyfi, "A speech recognition system based on Structure Equivalent Fuzzy Neural Network trained by Firefly algorithm," in Proceedings of the International Conference on Biomedical Engineering (ICoBE '12), pp. 63-67, IEEE, Penang, Malaysia, February 2012.
 A. K. Alexandridis and A. D. Zapranis, "Wavelet neural networks: a practical guide," Neural Networks, vol. 42, pp. 1-27, 2013.
 R. H. Abiyev and O. Kaynak, "Fuzzy wavelet neural networks for identification and control of dynamic plants--a novel structure and a comparative study," IEEE Transactions on Industrial Electronics, vol. 55, no. 8, pp. 3133-3140, 2008.
 R. H. Abiyev and O. Kaynak, "Identification and control of dynamic plants using fuzzy wavelet neural networks," in Proceedings of the IEEE International Symposium on Intelligent Control (ISIC '08), pp. 1295-1301, September 2008.
 V. Togan, "Design of planar steel frames using teaching-learning based optimization," Engineering Structures, vol. 34, pp. 225-232, 2012.
 A. Baykasoglu, A. Hamzadayi, and S. Y. Kose, "Testing the performance of teaching-learning based optimization (TLBO) algorithm on combinatorial problems: flow shop and job shop scheduling cases," Information Sciences. An International Journal, vol. 276, pp. 204-218, 2014.
 R. V. Rao and G. Waghmare, "A comparative study of a teaching-learning-based optimization algorithm on multiobjective unconstrained and constrained functions," Journal of King Saud University--Computer and Information Sciences, 2013.
 W. Cheng, F. Liu, and L. Li, "Size and geometry optimization of Trusses using teaching-learning-based optimization," International Journal of Optimization in Civil Engineering, vol. 3, no. 3, pp. 431-444, 2013.
 R. V. Rao, V. J. Savsani, and J. Balic, "Teaching-learning-based optimization algorithm for unconstrained and constrained real-parameter optimization problems," Engineering Optimization, vol. 44, no. 12, pp. 1447-1462, 2012.
 R. V. Rao, V. D. Kalyankar, and G. Waghmare, "Parameters optimization of selected casting processes using teaching-learning-based optimization algorithm," Applied Mathematical Modelling, 2014.
 G. Waghmare, "Comments on 'A note on teaching-learning-based optimization algorithm'," Information Sciences, vol. 229, pp. 159-169, 2013.
 P. V. Babu, S. C. Satapathy, M. K. Samantula, P. K. Patra, and B. N. Biswal, "Teaching learning based optimized mathematical model for data classification problems," in Proceedings of the International Conference on Frontiers of Intelligent Computing: Theory and Applications (FICTA '13), pp. 487-496, Springer, 2013.
 R. Venkata Rao and V. D. Kalyankar, "Parameter optimization of modern machining processes using teaching-learning-based optimization algorithm," Engineering Applications of Artificial Intelligence, vol. 26, no. 1, pp. 524-531, 2013.
 R. Detrano, Heart disease data set, V.A. Medical Center, Long Beach, and Cleveland Clinic Foundation; donor: D. W. Aha, 1988.
 N. A. Baykan and N. Yilmaz, "A mineral classification system with multiple artificial neural network using k-fold cross validation," Mathematical and Computational Applications, vol. 16, no. 1, pp. 22-30, 2011.
 S. T. Ishikawa and V. C. Gulick, "An automated mineral classifier using Raman spectra," Computers and Geosciences, vol. 54, pp. 259-268, 2013.
Jamal Salahaldeen Majeed Alneamy and Rahma Abdulwahid Hameed Alnaish
Software Engineering Department, Computer and Mathematics Science College, University of Mosul, Mosul, Iraq
Correspondence should be addressed to Rahma Abdulwahid Hameed Alnaish; email@example.com
Received 16 May 2014; Revised 29 August 2014; Accepted 31 August 2014; Published 17 September 2014
Academic Editor: Chao-Ton Su
Caption: FIGURE 1: The general structure of the fuzzy wavelet neural network.
Caption: FIGURE 2: The structure of the wavelet neural network.
Caption: FIGURE 3: The flowchart of the TLBO-FWNN method.
Caption: FIGURE 4: The classification accuracy for the proposed and existing methods.
TABLE 1: Properties of the input heart dataset's attributes.

No.  Name of input variable                                  Range of value
1    Age                                                     29-77 (years)
2    Sex                                                     1 (male), 0 (female)
3    Chest pain type (cp)                                    1, 2, 3, or 4
4    Resting blood pressure (trestbps)                       94-200 (mm Hg)
5    Serum cholesterol (chol)                                126-564 (mg/dl)
6    Fasting blood sugar >120 mg/dl (fbs)                    1 (true), 0 (false)
7    Resting electrocardiographic results (restecg)          0: normal; 1: having ST-T wave abnormality; 2: showing probable or definite left ventricular hypertrophy
8    Maximum heart rate achieved (thalach)                   71-202
9    Exercise induced angina (exang)                         1 (yes), 0 (no)
10   ST depression induced by exercise relative to rest      0-6.2
     (oldpeak)
11   Slope of the peak exercise ST segment                   1: upsloping; 2: flat; 3: downsloping
12   Number of major vessels colored by fluoroscopy          0, 1, 2, or 3
13   Thal                                                    3: normal; 6: fixed defect; 7: reversible defect

TABLE 2: Properties of the output heart dataset's attributes.

Type of output (diagnosis of heart disease)            Value of output
Abnormal (probability of diameter narrowing >50%)      1
Normal (probability of diameter narrowing <50%)        0

TABLE 3: Error rate of training the FWNN using the TLBO on the heart dataset.

Population size   1st time   2nd time   3rd time   4th time   5th time   Mean
50                0.06101    0.0652     0.0632     0.0680     0.0633     0.0641
100               0.0569     0.0628     0.0547     0.0584     0.0623     0.0590
150               0.0567     0.0613     0.0558     0.0577     0.0611     0.0585

TABLE 4: Classification rate of training the FWNN using the TLBO on the heart dataset.

Population size   1st time   2nd time   3rd time   4th time   5th time   Mean
50                93.8987    93.4777    93.6709    93.1956    93.6664    93.5818
100               94.3064    93.7180    94.5262    94.1528    93.7668    94.0940
150               94.3238    93.8600    94.4169    94.2263    93.8843    94.1422

TABLE 5: Time taken for training the FWNN using the TLBO on the heart dataset.

Population size   1st time   2nd time   3rd time   4th time   5th time   Mean
50                47         43.75      45.95      47.82      42.43      45.39
100               99         87.96      87         91.41      85.30      90.13
150               131.84     140.77     152.34     134.54     132.09     138.31

TABLE 6: Error rate of testing the FWNN on the heart dataset.

Population size   1st time   2nd time   3rd time   4th time   5th time   Mean
50                0.1934     0.0617     0.0848     0.0769     0.0838     0.1004
100               0.184      0.0673     0.0899     0.0787     0.0654     0.0970
150               0.1853     0.0624     0.0778     0.0782     0.0858     0.0979

TABLE 7: Classification rate of testing the FWNN on the heart dataset.

Population size   1st time   2nd time   3rd time   4th time   5th time   Mean
50                80.6578    93.8314    91.5186    92.3071    91.6199    89.9869
100               81.5955    93.2665    91.0059    92.127     93.4597    90.2909
150               81.468     93.7626    92.2201    92.1787    91.4164    90.2091

TABLE 8: Error rate of training the FWNN using the TLBO on the normalized heart dataset.

Population size   1st time   2nd time   3rd time   4th time   5th time   Mean
50                0.0484     0.0522     0.0561     0.0485     0.0543     0.0519
100               0.0476     0.0519     0.0510     0.0481     0.0463     0.0489
150               0.0440     0.0526     0.0522     0.049      0.0513     0.0498

TABLE 9: Classification rate of training the FWNN using the TLBO on the normalized heart dataset.

Population size   1st time   2nd time   3rd time   4th time   5th time   Mean
50                95.1504    94.7733    94.3811    95.1457    94.5624    94.8025
100               95.2379    94.8006    94.8931    95.1891    95.3613    95.0964
150               95.5908    94.7341    94.7724    95.0860    94.8696    95.0105

TABLE 10: Duration time for training the FWNN using the TLBO on the normalized heart dataset.

Population size   1st time   2nd time   3rd time   4th time   5th time   Mean
50                43.36      48.21      45.03      44.29      43.72      44.922
100               92.17      87.16      88.27      87.65      87.24      88.498
150               144.40     148.67     140.18     132.80     130.37     139.284

TABLE 11: Error rate of testing the FWNN on the normalized heart dataset.

Population size   1st time   2nd time   3rd time   4th time   5th time   Mean
50                0.2101     0.0593     0.084      0.0677     0.0778     0.0997
100               0.1898     0.0656     0.0857     0.0764     0.0876     0.1010
150               0.208      0.068      0.071      0.0947     0.0778     0.1039

TABLE 12: Classification rate of testing the FWNN on the normalized heart dataset.

Population size   1st time   2nd time   3rd time   4th time   5th time   Mean
50                78.9873    94.0694    91.5996    93.2322    92.2183    90.0213
100               81.0236    93.4444    91.434     92.3587    91.2441    89.9009
150               79.195     93.2005    92.8955    90.5341    92.2201    89.6090

TABLE 13: A comparison of the proposed and existing methods based on classification accuracy.

Method                           Classification accuracy   Reference                    Year
Proposed method (TLBO + FWNN)    90.2909                   --                           --
Neural networks ensembles        89.01                     Das et al.                   2009
CAPSO + MLP                      87.04                     Beheshti et al.              2013
ANN + FNN + BP                   86.8                      Kahramanli and Allahverdi    2008
MLP + BP + GA                    80.99                     Khemphila and Boonjing       2011
MLP + BP                         80.17                     Khemphila and Boonjing       2011
PSO + MLP                        74.07                     Beheshti et al.              2013
ICA + MLP                        68.52                     Beheshti et al.              2013
GSA + MLP                        61.11                     Beheshti et al.              2013
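The normalized-dataset experiments in Tables 8-12 presuppose that the raw attribute ranges of Table 1 (e.g., serum cholesterol spanning 126-564 mg/dl next to binary flags) are first rescaled to a common interval. The paper does not state the exact scheme, so the sketch below assumes min-max scaling to [0, 1]; the function name `min_max_normalize` is ours, not the authors'.

```python
def min_max_normalize(column):
    """Rescale a list of numeric values to [0, 1] (assumed scheme).

    A constant column is mapped to all zeros to avoid division by zero.
    """
    lo, hi = min(column), max(column)
    if hi == lo:
        return [0.0 for _ in column]
    return [(x - lo) / (hi - lo) for x in column]


# Example: the 'age' attribute ranges over 29-77 years (Table 1),
# so 29 maps to 0.0 and 77 maps to 1.0.
ages = [29, 40, 54, 63, 77]
scaled = min_max_normalize(ages)
```

Applying the same transformation per attribute puts cholesterol, blood pressure, and the binary indicators on an equal footing, which is the usual motivation for the normalized runs reported in Tables 8-12.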
Research Article. Published in Advances in Artificial Neural Systems, January 2014.