
A proposal for cardiac arrhythmia classification using complexity measures.

I. INTRODUCTION

The increasing number of people suffering from cardiovascular disease is one of the major problems in medicine [1]. There are many methods that detect and/or classify arrhythmias, including linear/nonlinear methods based on Heart Rate Variability (HRV), spectral analysis, fractals, wavelets, support vector machines (SVM), neural networks, fuzzy clustering techniques, and decision trees [2]. Most of the algorithms are tested on the most common benchmark, the MIT-BIH Arrhythmia Database, but a significant number of papers propose algorithms tested on customized records that are often not accessible to other researchers.

Most commonly, arrhythmia (ARH) is defined as an irregular heartbeat or abnormal heart rhythm. In [1] the authors proposed a knowledge-based method using arrhythmic episode detection, in which six rhythm types are identified. The algorithm builds a set of rules, and detection and classification are performed by a finite automaton, with 98% accuracy for ARH classification and 94% accuracy for ARH detection and classification on a set from the MIT-BIH Arrhythmia Database.

In [3], higher-order spectral parameters (in particular the bispectrum) were proposed as a method to distinguish five classes of arrhythmia from the normal spectrum, with good results. Intensive research was performed in [4], where a CAD (Computer Aided Diagnosis) system classifies five heartbeat types: one normal and four abnormal, namely Premature Ventricular Contraction (PVC), Premature Atrial Contraction (APC), Left Bundle Branch Block (LBBB) and Right Bundle Branch Block (RBBB). A kernel is used to make the classes linearly separable for the proposed classifier (SVM, Support Vector Machine) [4]. SVM is also used in [5], where fifteen features are selected from HRV (Heart Rate Variability) analysis and, using GDA (Generalized Discriminant Analysis), the feature vector is reduced to five elements. Six different arrhythmia classes are classified with accuracy between 98.94% and 100%.

Systems based on fuzzy logic are often used in clustering and classification, offering the advantage of qualitative reasoning combined with quantitative results. A new fuzzy clustering neural network architecture is proposed in [6] to classify ten types of arrhythmia, and another complex combination of K-Nearest Neighbors, neural networks and a fuzzy system is described in [7] for four types of arrhythmia (plus one normal rhythm).

Neural networks (NN), most often in the form of the multilayer perceptron but also in more sophisticated architectures, are used in several papers ([8], [9]). Fifteen classes are handled by an NN with the LVQ (Learning Vector Quantization) algorithm in [8]. Heartbeat detection followed by a wavelet NN based on Morlet wavelets and a Probabilistic Neural Network is proposed in [10] to recognize seven common arrhythmias in the time domain.

Other proposals focused on less traditional methods such as Ant Colony Optimization [11] (even though this method is better suited to minimization problems on graphs), personalized decision trees [12] and optimum-path forests [13]. An interesting approach is proposed in [14], where a one-dimensional feature, the fractal dimension, is used to classify arrhythmias.

Very few papers propose using a complexity measure applied to time series to identify heartbeat patterns or other patterns belonging to the class of biomedical signals. In [15], the Lempel-Ziv algorithm was used to identify patterns (seven types) in traditional Chinese pulse diagnosis (TCPD). The classification algorithm is very complex, with many heuristic rules, and is difficult to implement. Its specificity and sensitivity can reach 100% [15].

The rest of the paper is organized as follows. Section 2 presents the ECG preprocessing. Three complexity measures to be applied to the RR intervals of the ECG are proposed in Section 3. Section 4 describes the cloud computing application, and Section 5 presents the experimental results, followed by conclusions and future work.

II. ECG PREPROCESSING

There are two main sources of possible errors in ARH detection/classification: outliers and ectopic beats. The ECG signals are usually filtered by hardware, and a basic notch filter at 50 Hz is used to remove power line interference.

The R peaks in the ECG records are detected using the well-known Pan-Tompkins algorithm [16]; the RR intervals are then extracted and all the records are concatenated.

In the preprocessing stage, the signal is cleaned of ectopic beats, which are often the primary source of atrial fibrillation (AF) events, although in some particular cases they can be an important and valuable source for medical diagnosis. Weighing these advantages and disadvantages, in our situation it is better to remove them. An ectopic rhythm (ectopic beats) is defined as small irregular changes in a normal heart rhythm caused by premature or extra heartbeats. Three of the most common ectopic beats are the premature ventricular contraction (PVC), the premature atrial contraction (PAC) and the extrasystole.

In order to filter the ectopic beats we used the method proposed in [17], applied to the RR records. Premature beats are recognized by a short-long sequence, that is, a premature interval followed by a compensatory pause. The ratio RR[i]/RR[i+1] is calculated (where i is the beat index) and percentile thresholds (Perc1, Perc25) are used to distinguish ectopic beats from small fluctuations in heart rhythm due to physiological variability [17]. The RR segments identified as ectopic beats are removed.
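
As an illustration only (the complete rule set and the exact percentile thresholds are those of [17]), a minimal Python sketch of the short-long test could look as follows; the percentile value used here is a placeholder:

```python
import numpy as np

def flag_ectopic_candidates(rr, ratio_percentile=1):
    """Illustrative short-long test: a premature beat produces a short RR[i]
    followed by a long compensatory RR[i+1], hence a low RR[i]/RR[i+1] ratio.
    Beats whose ratio falls below a low percentile of all ratios are flagged;
    the percentile used here is a placeholder, not the exact rule of [17]."""
    rr = np.asarray(rr, dtype=float)
    ratio = rr[:-1] / rr[1:]
    low = np.percentile(ratio, ratio_percentile)
    return np.where(ratio < low)[0]   # indices of candidate ectopic beats
```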

The second problem in RR recordings is the presence of outliers [18]: abnormal values that lie outside the normal RR range. There are two main handling methods: (a) the outliers are deleted from the sequence; (b) the outliers are replaced by a value computed from the nearest values to the left and right of the outlier. The second method is preferable, as it reduces the risk of introducing high frequencies into the reconstructed signal (aliasing). After detection (an RR value greater than an empirically chosen threshold), the outlier values are replaced by the average of the values bordering the outlier.
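
A minimal sketch of method (b), using the 1.75 x median threshold adopted later in Section IV (the handling of boundary samples is an implementation choice):

```python
import numpy as np

def replace_outliers(rr, factor=1.75):
    """Detect RR values above a threshold (1.75 x median, as in the
    experiments) and replace each one by the average of its left and
    right neighbouring values."""
    rr = np.asarray(rr, dtype=float)
    threshold = factor * np.median(rr)
    cleaned = rr.copy()
    for i in np.where(rr >= threshold)[0]:
        left = cleaned[i - 1] if i > 0 else rr[i + 1]
        right = rr[i + 1] if i < len(rr) - 1 else cleaned[i - 1]
        cleaned[i] = 0.5 * (left + right)
    return cleaned
```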

III. COMPLEXITY MEASURES

A. Sample entropy

Sample entropy (SampEn) is a measure of complexity; it differs from the more common approximate entropy (ApEn) in that self-matches are not counted. SampEn can be used as a measure of complexity for discrete-time physiological signals ([19], [20]).

Let us denote by $X = (x_1, x_2, \ldots, x_n)$ a discrete time series, by $X_{k,m} = (x_{k+1}, x_{k+2}, \ldots, x_{k+m})$ a template vector of length $m$ (a sub-vector of $X$), and by $r$ a tolerance value. Sample entropy is based on the conditional probability that two sub-sequences that are within a distance $r$ of each other over $m$ points remain within distance $r$ when extended to $m+1$ points. The matching is usually based on the Chebyshev distance $d_C$, but other metrics (Euclidean, Minkowski, etc.) can also be used.

SampEn is the negative natural logarithm of the ratio of two counts, $A$ (pairs of templates of length $m+1$) and $B$ (pairs of templates of length $m$) [20]:

$\mathrm{SampEn} = -\ln\dfrac{A}{B}$ (1)

$A = \#\{(i,j),\ i \neq j : d_C(X_{i,m+1}, X_{j,m+1}) < r\}$, $\quad B = \#\{(i,j),\ i \neq j : d_C(X_{i,m}, X_{j,m}) < r\}$ (2)

$d_C(X_{i,m}, X_{j,m}) = \max_{k=1,\ldots,m} \lvert x_{i+k} - x_{j+k} \rvert < r$ (3)

The main parameters passed to the routine that computes SampEn are the template (window) length m and the tolerance r. The usual settings are m = 2 and r = 0.2 x std (standard deviation); in our case r = 6. Depending on the threshold chosen for the outliers, this value can be adjusted (greater or less than 6).
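
A minimal NumPy sketch of Eqs. (1)-(3) with these settings (only ordered pairs i < j are counted, which cancels in the ratio A/B; the tolerance r is given in the units of the RR series):

```python
import numpy as np

def sample_entropy(x, m=2, r=6.0):
    """Sample entropy, Eq. (1): -ln(A/B), where B counts template pairs of
    length m and A counts template pairs of length m+1 whose Chebyshev
    distance is below r (Eqs. (2)-(3)). Self-matches are excluded."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def count_pairs(length):
        # the n - m overlapping templates of the given length
        templates = np.array([x[i:i + length] for i in range(n - m)])
        count = 0
        for i in range(len(templates) - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)  # Chebyshev
            count += int(np.sum(d < r))
        return count

    B = count_pairs(m)
    A = count_pairs(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf
```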

B. Lempel-Ziv algorithm

The randomness of a finite discrete time series can be evaluated by Lempel-Ziv complexity analysis ([21], [22]). The Lempel-Ziv algorithm calculates the number of distinct patterns in a given sequence of length n, named the complexity counter c(n).

As in all complexity algorithms, the first step of the preprocessing stage is the transformation of the numerical sequence (the whole series or a window of it) into a symbolic sequence over an alphabet of symbols [22]. The most popular alphabet is binary, '0' and '1', with a single level of discrimination given by a threshold $S_d$: if the signal value is lower than $S_d$, the corresponding symbol is '0'; otherwise it is '1'. A good choice for $S_d$ is the median (or average) value of the entire time series from which the sequences are extracted. The symbolic sequence obtained after coding is then parsed from left to right in order to count the distinct words. The approach is consistent with older studies published in [23].
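
A minimal sketch of this coding step, using the median of the series as the threshold $S_d$:

```python
import numpy as np

def binarize(series, threshold=None):
    """Code a numerical sequence into a '0'/'1' string using a single
    discrimination level S_d (the median of the series by default)."""
    series = np.asarray(series, dtype=float)
    s_d = np.median(series) if threshold is None else threshold
    return ''.join('0' if v < s_d else '1' for v in series)
```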

The implementation of the Lempel-Ziv algorithm is similar to the approach in [21]. Let us denote by $S = s_1 s_2 \ldots s_n$ a finite string of symbols, and by $S(i,j) = s_i s_{i+1} \ldots s_j \in S$, $i < j$, the substring that starts at position $i$ and ends at position $j$; the null string is denoted by $S(i,j) = \{\}$. Let $Q$ and $R$ be substrings of type $S(i,j)$, let $QR$ be their concatenation, and let $QRD$ denote the sequence obtained after the last character of $QR$ is deleted ($D$ is the common notation for "deleted"). In a scan procedure, the string $S$ is parsed from left to right in order to obtain the distinct words; let $B(S)$ denote the set of basic words. The substring $S(i,j)$ is compared with the words of $B(S)$ built from the prefix up to position $j-1$, that is, up to $S(i,j-1)$. If $S(i,j)$ is present, no new component is created: $B(S(i,j-1))$ is updated to $B(S(i,j))$, $S(i,j)$ is extended to $S(i,j+1)$, and the process is repeated until the end of the string. If $S(i,j)$ is not present, a new component is found and a dot is placed after $s_j$; the dots mark all the distinct words found during parsing. The process is repeated until $j = n$ ($n$ being the length of the string $S$), starting from $S(1,1)$, the first symbol of $S$.

The normalized Lempel-Ziv complexity, for a single threshold level and a sequence of length n, is computed as:

$C(n) = \dfrac{c(n)}{n / \log_2(n)}$ (4)

For example, the string 0111010100010 is parsed as 0 . 1 . 11 . 010 . 100 . 010, so the complexity counter is c(n) = 6 and, for n = 13, C(n) = 1.7079.
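
A minimal sketch of this parsing and of the normalization in (4); the word-boundary rule below (grow the current word while it still occurs in the already-parsed prefix) is the one implied by the worked example:

```python
import math

def lz_complexity(symbols):
    """Complexity counter c(n): parse the string from left to right, growing
    the current word while it is still a substring of the already-parsed
    prefix; each closed word increments the counter."""
    n, c, start = len(symbols), 0, 0
    while start < n:
        end = start + 1
        while end <= n and symbols[start:end] in symbols[:start]:
            end += 1
        c += 1          # a new word ends here
        start = end
    return c

def normalized_lz(symbols):
    """Normalized complexity C(n) = c(n) / (n / log2(n)), Eq. (4)."""
    n = len(symbols)
    return lz_complexity(symbols) * math.log2(n) / n

print(lz_complexity("0111010100010"))              # 6 -> words 0, 1, 11, 010, 100, 010
print(round(normalized_lz("0111010100010"), 4))    # 1.7079
```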

C. T-Code

The computation of T-complexity is based on T-codes, a family of self-synchronizing codes proposed by Titchener [24] and presented in detail in [25]. The use of T-codes as a randomness measure appears in only one paper [26], with a few variations of the content in another paper by the same authors.

An alphabet, i.e., a set of symbols, is used to construct the first level of augmentation (an essential term in T-code theory); this set of symbols constitutes the first level. The next levels of augmentation are built iteratively by removing a chosen T-prefix and appending it to the list of codewords [25]. T-decomposition is the inverse operation: it recovers the sequence of T-prefixes, from which the complexity is computed.

The T-code is applied to substrings by means of a window W of finite length. Let A be a finite alphabet, uw the concatenation of two strings u and w, and u^k the concatenation of k copies of the string u. The recursive formula [25]:

$S_j = \bigcup_{i=0}^{k_j} p_j^{\,i}\left(S_{j-1} \setminus \{p_j\}\right) \cup \{p_j^{\,k_j+1}\}$ (5)

can build the series of T-code sets, where $S_0 = A$, $p_j$ is a T-prefix (itself a string), $k_j$ is a natural number and $s$ is a string. In tree notation this is usually written $S^{(k_1, \ldots, k_n)}_{(p_1, \ldots, p_n)}$. For a binary alphabet, $A = \{0, 1\}$, an example of T-augmentation represented as a tree is shown in Figure 1. By means of the algorithm named T-decomposition, a given string $s$ is parsed as follows:

$s = p_n^{\,k_n}\, p_{n-1}^{\,k_{n-1}} \cdots p_1^{\,k_1}\, a, \quad a \in A$ (6)

$t_C = \sum_{j=1}^{n} \log_2(k_j + 1)$ (7)

The T-complexity of the string s is calculated by formula (7).
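
Once a T-decomposition routine has produced the exponents $k_j$, formula (7) is immediate; a minimal sketch (the decomposition step itself is not shown here):

```python
import math

def t_complexity(k_exponents):
    """T-complexity, Eq. (7), computed from the exponents k_j returned by a
    T-decomposition routine."""
    return sum(math.log2(k + 1) for k in k_exponents)
```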

D. Cloud computing PaaS

Cloud computing is an Internet-based paradigm for providing shared computing and data resources on demand. The cloud client accesses the cloud resources via the Internet.

The basic service models are:

* Software as a service (SaaS);

* Platform as a service (PaaS);

* Infrastructure as a service (IaaS).

A relatively new model is Mobile Backend as a Service (MBaaS), in which the consumer accesses the backend from a mobile device via application programming interfaces (APIs).

The computational effort required to calculate complexity values for large amounts of raw RR data in a short time demands increased computing power. Unlike the classic approach in which the cloud is used to store and retrieve data ([28], [29]), we propose to use the cloud for code development, deployment and execution of the application (including .dll modules).

We used a seamless solution: a .NET application written in C# with Visual Studio 2013 Community, Microsoft Azure for code deployment and the Azure Compute Emulator for debugging and testing the application. At this stage, the application sends alerts only to the physician (as an SMS to a mobile device), but a web page that allows the patient's ECG to be retrieved via the server is in progress (Figure 2).

IV. EXPERIMENTAL RESULTS

We used normal and arrhythmia records from the MIT-BIH Arrhythmia Database: in total 48 files and 109,236 annotated beats. In the preprocessing stage, 47 beats were deleted after ectopic beat identification, and 13 classes remained for arrhythmia classification: $C_{ARH}$ = {'!', '/', 'A', 'E', 'F', 'J', 'L', 'N', 'Q', 'R', 'S', 'V', 'a', 'e', 'f', 'j'}.

Three types of complexity are investigated: sample entropy (the best known), single- and multi-level Lempel-Ziv complexity, and a less known one, the T-code. The authors propose a novel usage of the T-code, as an index for arrhythmia detection on an ECG database.

Before extracting the RR intervals, the ECG signal is preprocessed. The ectopic beats were removed using a technique similar to the method detailed in [17], and the outliers were identified and removed using a modified version of the technique described in [18]. Based on statistical analysis (confidence = 99%), if an absolute RR value is greater than or equal to 1.75 $RR_{median}$ (where $RR_{median}$ is the median value over all the records), the value is treated as an outlier and removed (Figure 3).

The RR intervals are concatenated into a very long vector of values. Unlike [17], the ectopic beats are replaced by average values calculated from the left and right borders of each individual ectopic beat.

The coding is done using a threshold that divides the values of the input RR time series into a coded sequence of '0' and '1'. A sliding window W (number of samples) of variable length is used to extract the sequence on which the complexity is computed. The RR values lie in the interval [77, 2433] msec, and the candidate thresholds are equally spaced with a step of th = 124 msec. The window width is set to W = {16, 24, 32, 40, 48, 56, 64, 128, 196, 256, 360, 400} samples. The larger the window, the better the expected accuracy; however, when the window length is very large, the computational effort and the computation time increase significantly, and the efficiency of the proposed solution decreases. The main outcomes are presented in Table 1.
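
A minimal sketch of this feature extraction step, reusing the binarize() and normalized_lz() routines sketched above (the window is advanced sample by sample here; the actual stride is an assumption, and any of the three complexity measures can be plugged in):

```python
def window_complexities(rr, w=64, threshold=None, step=1):
    """Slide a window of w samples over the RR series, code each window with
    a single threshold and compute its normalized Lempel-Ziv complexity."""
    values = []
    for i in range(0, len(rr) - w + 1, step):
        coded = binarize(rr[i:i + w], threshold)
        values.append(normalized_lz(coded))
    return values
```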

We used a binary tree classifier with the complexity value as input (a single variable) and the classes $C_{ARH}$ as output. We define the best-performing tree (for a set of windows W) as the tree with the highest classification rate and the simplest structure (the minimum number of nodes). Figure 4 shows a tree that satisfies these performance criteria for the T-code complexity analysis. The trees for the other two methods (Lempel-Ziv and Sample Entropy) have different numbers of nodes, but their performance is lower than in the T-code case.
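
The classifier used here is a custom binary tree; purely as an illustration of the single-feature setup, a scikit-learn sketch with hypothetical stand-in data:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical stand-in data: one complexity value per analysed window and its
# arrhythmia class (13 classes); real inputs would come from window_complexities().
rng = np.random.default_rng(0)
X = rng.uniform(0.2, 1.8, size=(500, 1))   # single-feature input
y = rng.integers(0, 13, size=500)

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
print(tree.tree_.node_count, tree.score(X, y))   # tree size vs. training accuracy
```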

The computational time is an important characteristic to take into account; thus, for a practical application (using a microcontroller [35]), the use of the T-code is not a feasible choice at this moment. We propose to transfer the computational effort to a cloud computing platform, which performs all the calculations and provides the user with the decision whether an ECG recording is in a normal state (class N) or belongs to an arrhythmia class ($C_{ARH}$).

The use of a fast classifier [36] would improve the computation time, thus increasing the efficiency of the proposed solution. The selection of the signal processing parameters affects the performance of the system; we refer in particular to the window length W and to the threshold used for the binarization of the RR sequence, presented in Figures 5-6.

Modeling with fuzzy systems [37], [38] has proven very effective in various applications, being as effective as, and sometimes substantially better than, classical methods. It is therefore possible that fuzzy S-trees and signatures applied to this classification problem would increase the average classification rate of cardiac arrhythmias; these approaches will be investigated in the future.

Fuzzy entropy (FuzzyEn) [39] is another entropy measure that could be used as a complexity measure for time series analysis. Unlike SampEn, FuzzyEn is defined using a fuzzy similarity measure between two membership functions and the shapes of the vectors, practically a measure of vagueness between two vectors. Our experiments used a Gaussian-shaped membership function, dim = 2 (the embedding dimension), r = 0.2 (the width of the exponential fuzzy function) and n = 0.1 (the step of the exponential fuzzy function). The best result was a misclassification error of 0.65%, but these are only preliminary results. We expect to improve this result through extensive experiments aimed at finding more suitable parameters (dim, r, n) and a more precise definition of complexity based on FuzzyEn.
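
A minimal sketch of FuzzyEn in the spirit of [39]; the sketch assumes an exponential membership exp(-(d^n)/r) applied to mean-removed templates (a Gaussian shape, as used in the experiments above, could be substituted), with parameter defaults as quoted in the text:

```python
import numpy as np

def fuzzy_entropy(x, dim=2, r=0.2, n=0.1):
    """FuzzyEn sketch: Chebyshev distances between mean-removed templates are
    mapped to fuzzy similarities exp(-(d**n)/r), and
    FuzzyEn = ln(phi_dim) - ln(phi_dim+1)."""
    x = np.asarray(x, dtype=float)
    N = len(x)

    def phi(m):
        # overlapping templates of length m, each with its own mean removed
        templates = np.array([x[i:i + m] - np.mean(x[i:i + m])
                              for i in range(N - dim)])
        sims = []
        for i in range(len(templates)):
            d = np.max(np.abs(templates - templates[i]), axis=1)   # Chebyshev
            s = np.exp(-(d ** n) / r)
            s[i] = 0.0                                             # drop the self-match
            sims.append(np.sum(s) / (len(templates) - 1))
        return np.mean(sims)

    return np.log(phi(dim)) - np.log(phi(dim + 1))
```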

VI. CONCLUSION

We investigated the possibility of classifying cardiac arrhythmias using a single discriminant value given by the complexity analysis of the coded raw RR sequence. The results obtained with two of the methods (Sample Entropy and Lempel-Ziv) give acceptable percentages of correct classification for 13 types of arrhythmia, while the use of the T-code is by far the best variant. In order to reduce the number of nodes in the decision tree, an extended vector with other simple (easily computable) discriminants can be a solution; this aspect is the subject of future research.

Running the executable code in the cloud offers an important advantage for users: updates and modifications are made in a transparent manner. In practice, the end-user software is not modified, and any improvements and developments are made in the cloud. In a future development, a web interface will connect the physician to the patient's ECG records.

Digital Object Identifier: 10.4316/AECE.2017.03004

ACKNOWLEDGMENT

This work was supported by the project "Remote Monitoring of Physiological Parameters to Improve the Prediction of Fetal Outcome", financed by the Grigore T. Popa University of Medicine and Pharmacy, Iasi, Romania, contract No. 31592/23.12.2015.

REFERENCES

[1] S.S. Anand, S. Yusuf, "Stemming the global tsunami of cardiovascular disease", The Lancet, vol. 377, no. 9765, pp. 529-532, 2011. doi:10.1016/S0140-6736(10)62346-X

[2] A. Ebrahimzadeh, A. Khazaee, "An efficient technique for classification of electrocardiogram signals", Advances in Electrical and Computer Engineering, vol. 9, no. 3, pp. 89-93, 2009. doi:10.4316/AECE.2009.03016

[3] A. Lanata, G. Valenza, C. Mancuso, E.P. Scilingo, "Robust multiple cardiac arrhythmia detection through bispectrum analysis", Expert Systems with Applications, vol. 38, pp. 6798-6804, 2011. doi:10.1016/j.eswa.2010.12.066

[4] A.F. Khalaf, M.I. Owis, I.A. Yassine, "A novel technique for cardiac arrhythmia classification using spectral correlation and support vector machines", Expert Systems with Applications, vol. 42, pp. 8361-8368, 2015. doi:10.1016/j.eswa.2015.06.046

[5] B.M. Asl, S.K. Setarehdan, M. Mohebbi, "Support vector machine-based arrhythmia classification using reduced features of heart rate variability signal", Artificial Intelligence in Medicine, vol. 44, pp. 51-64, 2008. doi:10.1016/j.artmed.2008.04.007

[6] Y. Ozbay, R. Ceylan, B. Karlik, "Fuzzy clustering neural network architecture for classification of ECG arrhythmias", Computers in Biology and Medicine, vol. 36, pp. 376-388, 2006. doi:10.1016/j.compbiomed.2005.01.006

[7] O. Castillo, P. Melin, E. Ramirez, J. Soria, "Hybrid intelligent system for cardiac arrhythmia classification with Fuzzy K-Nearest Neighbors and neural networks combined with a fuzzy system", Expert Systems with Applications, vol. 39, pp. 2947-2955, 2012. doi:10.1016/j.eswa.2011.08.156

[8] P. Melin, J. Amezcua, F. Valdez, O. Castillo, "A new neural network model based on the LVQ algorithm for multi-class classification of arrhythmias", Information Sciences, vol. 279, pp. 483-497, 2014. doi:10.1016/j.ins.2014.04.003

[9] S. Osowski, T. Markiewicz, L.T. Hoai, "Recognition and classification system of arrhythmia using ensemble of neural networks", Measurement, vol. 41, pp. 610-617, 2008. doi:10.1016/j.measurement.2007.07.006

[10] C.-H. Lin, Y.-C. Du, T. Chen, "Adaptive wavelet network for multiple cardiac arrhythmias recognition", Expert Systems with Applications, vol. 34, pp. 2601-2611, 2008. doi:10.1016/j.eswa.2007.05.008

[11] M. Korurek, A. Nizam, "A new arrhythmia clustering technique based on Ant Colony Optimization", Journal of Biomedical Informatics, vol. 41, pp. 874-881, 2008. doi:10.1016/j.jbi.2008.01.014

[12] J. Park, K. Kang, "PcHD: Personalized classification of heartbeat types using a decision tree", Computers in Biology and Medicine, vol. 54, pp. 79-88, 2014. doi:10.1016/j.compbiomed.2014.08.013

[13] E.J. da S. Luz, T.M. Nunes, V.H.C. de Albuquerque, J.P. Papa, D. Menotti, "ECG arrhythmia classification based on optimum-path forest", Expert Systems with Applications, vol. 40, pp. 3561-3573, 2013. doi:10.1016/j.eswa.2012.12.063

[14] A.K. Mishra, S. Raghav, "Local fractal dimension based ECG arrhythmia classification", Biomedical Signal Processing and Control, vol. 5, pp. 114-123, 2010. doi:10.1016/j.bspc.2010.01.002

[15] L. Xu, D. Zhang, K. Wang, L. Wang, "Arrhythmic Pulses Detection Using Lempel-Ziv Complexity Analysis", EURASIP Journal on Applied Signal Processing, pp. 1-12, 2006. doi:10.1155/ASP/2006/18268

[16] J. Pan, W.J. Tompkins, "A real-time QRS detection algorithm", IEEE Trans Biomed Eng., vol. 32, no. 2, pp. 230-236, 1985. doi:10.1109/TBME.1985.325532

[17] S. Dash, K.H. Chon, S. Lu, E.A. Raeder, "Automatic Real Time Detection of Atrial Fibrillation", Annals of Biomedical Engineering, vol. 37, no. 9, pp. 1701-1709, 2009. doi:10.1007/s10439-009-9740-z

[18] R. Karlsson, R. Hornsten, A. Rydberg, U. Wiklund, "Automatic filtering of outliers in RR intervals before analysis of heart rate variability in Holter recordings: a comparison with carefully edited data", BioMedical Engineering OnLine, pp. 1-12, 2012. doi:10.1186/1475-925X-11-2

[19] J.S. Richman, J.R. Moorman, "Physiological time-series analysis using approximate entropy and sample entropy", American Journal of Physiology, Heart and Circulatory Physiology, vol. 278, no. 6, pp. H2039-H2049, 2000. http://ajpheart.physiology.org/content/278/6/H2039.full

[20] D. E. Lake, J.S. Richman, M.P. Griffin, J.R. Moorman, "Sample entropy analysis of neonatal heart rate variability", Am. J. Physiol. Regul. Integr. Comp. Physiol., vol. 283, no. 3, pp. R789-97, 2002. doi:10.1152/ajpregu.00069.2002

[21] A. Lempel, J. Ziv, "On the Complexity of Finite Sequences", IEEE Transactions on Information Theory, vol. IT-22, no. 1, pp. 75-81, 1976. doi:10.1109/TIT.1976.1055501

[22] J. Ziv, "Coding Theorems for Individual Sequences", IEEE Transactions on Information Theory, vol. IT-24, no. 4, pp. 405-412, 1978. doi:10.1109/TIT.1978.1055911

[23] A.N. Kolmogorov, "Three approaches to the quantitative definition of information", Problems of Information Transmission, vol. 1, pp. 1-7, 1965. doi:10.1080/00207166808803030

[24] M.R. Titchener, "Generalized T-Codes: An Extended Construction Algorithm for Self-synchronizing Codes", TAMAKI T-CODE PROJECT SERIES, vol. 1, no. 4, pp. 1-8, 1995. doi:10.1049/ipcom:19960551

[25] U. Gunter, "Data compression and serial communication with generalized T-codes", Journal of Universal Computer Science, vol. 2, no. 11, pp. 769-795, 1996. doi:10.3217/jucs-002-11-0769

[26] K. Hamano, H. Yamamoto, "A Randomness Test based on T-Complexity", IEICE Trans. Fundamentals, vol. E93, no. 7, pp. 1346-1354, 2010. doi:10.1109/ISITA.2008.4895570

[27] Y.-P. Huang, C.-Y. Huang, S.-I. Liu, "Hybrid intelligent methods for arrhythmia detection and geriatric depression diagnosis", Applied Soft Computing, vol. 14, pp. 38-46, 2014. doi:10.1016/j.asoc.2013.09.021

[28] H. Xia, I. Asif, X. Zhao, "Cloud-ECG for real time ECG monitoring and analysis", Computer Methods and Programs in Biomedicine, vol. 110, pp. 253-259, 2013. doi:10.1016/j.cmpb.2012.11.008

[29] Y.-P. Huang, C.-Y. Huang, S.-I. Liu, "Hybrid intelligent methods for arrhythmia detection and geriatric depression diagnosis", Applied Soft Computing, vol. 14, pp. 38-46, 2014. doi:10.1016/j.asoc.2013.09.021

[30] X.-S. Zhang, Y.-S Zhu, N.V. Thakor, Z.-Z. Wang, "Detecting Ventricular Tachycardia and Fibrillation by Complexity Measure", IEEE Transactions on Biomedical Engineering, vol. 46, no. 5, pp. 548-555, 1999. doi:10.1109/10.759055

[31] D. Ge, N. Srinivasan, S.M. Krishnan, "Cardiac arrhythmia classification using autoregressive modeling", BioMedical Engineering OnLine, pp. 1-5, 2002. doi:10.1186/1475-925X-1-5

[32] S. W. Chen, "Two-stage discrimination of cardiac arrhythmias using a total least squares-based prony modeling algorithm", IEEE Trans Biomed Eng, vol. 47, pp. 1317-1326, 2000. doi:10.1109/10.871404

[33] M.H. Song, J. Lee, S.P. Cho, K.J. Lee, S.K. Yoo, "Support Vector Machine Based Arrhythmia Classification Using Reduced Features", International Journal of Control, Automation, and Systems, vol. 3, no. 4, pp. 571-579, 2005. http://www.ijcas.com/admin/paper/files/IJCAS_v3_n4_pp.571579.pdf

[34] T.F.L. de Medeiros, et al. "Heart arrhythmia classification using the PPM algorithm", Biosignals and Biorobotics Conference ISSNIP, pp. 1-5, 2011. doi:10.1109/BRC.2011.5740670

[35] C. Rotariu, V. Manta, R. Ciobotariu, "Integrated system based on wireless sensors network for cardiac arrhythmia monitoring", Advances in Electrical and Computer Engineering, vol. 13, no. 1, pp. 95-100, 2013. doi:10.4316/AECE.2013.01016

[36] V. Purdila, Ş.G. Pentiuc, "Fast decision tree algorithm", Advances in Electrical and Computer Engineering, vol. 14, no. 1, pp. 65-68, 2014. doi:10.4316/AECE.2014.01010

[37] J. Nowakova, M. Prilepok, V. Snasel, "Medical Image Retrieval Using Vector Quantization and Fuzzy S-tree", Journal of Medical Systems, vol. 41, no. 2, pp. 1-16, 2017. doi:10.1007/s10916-016-0659-2

[38] C. Pozna, N. Minculete, R.-E. Precup, L.T. Kóczy, A. Ballagi, "Signatures: Definitions, operators and applications to fuzzy modelling", Fuzzy Sets and Systems, vol. 201, pp. 86-104, 2012. doi:10.1016/j.fss.2011.12.016

[39] W. Chen, Z.Wang, H. Xie, W. Yu, "Characterization of surface EMG signal based on fuzzy entropy", IEEE Trans. Neural Syst. Rehabil. Eng., vol. 15, no. 2, pp. 266-272, 2007. doi:10.1109/TNSRE.2007.897025

Dragos AROTARITEI (1), Hariton COSTIN (1,2), Alexandra PASARICA (3), Cristian ROTARIU (1)

(1) Grigore T. Popa University of Medicine and Pharmacy, Iasi, 700115, Romania

(2) Institute of Computer Science, Romanian Academy, Iasi, 700054, Romania

(3) Gheorghe Asachi Technical University, Iasi, 700506, Romania dragos.arotaritei@umfiasi.ro

Caption: Figure 1. Intermediary T-code, computed for $S^{(1,1)}_{(1,10)}$ = {0, 11, 1100, 1010, 1011}

Caption: Figure 2. The proposed architecture for cloud computing usage

Caption: Figure 3. Beat sequence for a window extracted from record 8219 of the MIT-BIH AFIB database. Ectopic beats are replaced by average values from their borders

Caption: Figure 4. A classification tree illustrating maximum performance (the best correct classification and the smallest number of nodes) for T-code algorithm, W=240 samples.

Caption: Figure 5. Accuracy of classification versus W for th=1.102 (threshold for binary coding)

Caption: Figure 6. Number of nodes in the classification tree versus W for th=1.102 (threshold for binary coding)

TABLE I. PERFORMANCE OF THE PROPOSED SOLUTION IN COMPARISON WITH OTHER ALGORITHMS

Algorithm                            Global performance (all        Number of
                                     recordings), recognition rate  arrhythmia classes

Lempel-Ziv algorithm [30]            100%                           3
Autoregressive modeling, GLM [31]    93.2% to 100%                  6
Prony algorithm [32]                 95.24% to 97.78%               3
SVM (Support Vector Machine) [33]    99.307% to 99.883%             6
PPM algorithm [34]                   91.74% to 99.37%               4
Sample Entropy (proposed)            76%, 1713 nodes                13
                                     52.6%, 11 nodes                13
Lempel-Ziv algorithm (proposed)      69%, 23 nodes                  13
T-code (proposed)                    100%, 103 nodes                13