
A Novel Approach for Activity Recognition with Down-Sampling 1D Local Binary Pattern Features.


In recent years, many studies have been carried out in the field of recognition, monitoring, and discrimination of human activities. Automatic recognition of physical activities is usually referred to as human activity recognition (HAR). There are two main approaches for performing activity recognition tasks: vision-based (computer-vision) and sensor-based approaches. The vision-based approach generally works well under laboratory conditions, but it can fail in real-world scenarios due to clutter and variable light intensity and contrast.

Sensor-based HAR systems aim to capture the condition of the user and his or her environment by using heterogeneous sensors attached to the person's body. Such systems allow continuous monitoring of many physiological signals reflecting the state of human actions. In recent years, advances in microelectronics and computer systems have made individual sensors and mobile devices widely available for mobile recognition applications. The low cost, small size, and low energy consumption of wearable sensors allow human activity recognition to be performed on everyday activities. HAR systems are used in many different areas, especially health, military, sports, music and security [1-11]. HAR is a rapidly growing field of research that can provide valuable information about people's health by monitoring health variables outside the hospital [12]. Patients whose treatment includes exercise can benefit from HAR systems [13-15]. Recognition of activities such as walking and running can provide feedback to the patient's doctor or relatives about the patient's behavior. In addition, patients can be monitored to detect abnormal activities and to prevent undesirable consequences [3-4].

HAR systems consist of two principal stages. The first stage is the extraction of features that distinguish activities from each other. The second stage is machine learning (classification). The success of the system relies on the useful features obtained in the first stage [3].

In HAR systems, it is necessary to determine features that can effectively distinguish the activities from each other [16]. Feature extraction plays an essential role in the pattern recognition process, as classification performance degrades if the features are not well chosen. In this study, the DS-1D-LBP method was developed for sensor signals. DS-1D-LBP extends the 1D-LBP method: it applies 1D-LBP to the sensor signals at different down-sampling levels. After applying DS-1D-LBP to the mobile signals, statistical features were obtained from the newly formed signals. The ELM method is used in the classification phase. ELM is an artificial neural network (ANN) model with randomly generated input weights and analytically computed output weights. It is much faster than classical ANN models.

The DS-1D-LBP method has some significant advantages. The first advantage is that this method uses the individual values of all signals for feature extraction. The second advantage is that the implementation of this model is easy and fast. Another advantage is that it can extract different feature groups depending on the window length (WL) and the related sampling parameters. To test the proposed DS-1D-LBP + ELM approach, a data set from the Kaggle repository was used. High performance was obtained for activity recognition using the DS-1D-LBP + ELM approach.


Sensor-based HAR systems have been an important research area in recent years. HAR usually employs body-mountable activity sensors such as EEGs, EMGs, accelerometers, gyroscopes, and magnetometers. Sensor technology is an essential factor in the success of activity recognition. In the context of activity recognition, four groups of quantities, namely environmental properties, acceleration, location information, and physiological properties, are measured using wearable sensors. Nowadays, mobile device sensors are used extensively for HAR.

HAR systems generally consist of two principal stages. In the first stage, features or information are extracted from the signals. Feature extraction is one of the most critical stages of HAR because the success of the HAR system depends on the extracted features.

In the second stage, the classification process is carried out using the features obtained in the first stage. Different machine learning methods are used for the classification process.

In Table I, we provide a comprehensive list of the activity recognition algorithms proposed in the literature. Also, an overview of these methods can be found in [24].


The Activity Sense Dataset [33]: the Smartphone Sensor dataset shared in the Kaggle repository was used to test the proposed methods in this study. This dataset contains time series data (position, gravity, rotation rate, and user acceleration) generated by accelerometer and gyroscope sensors. The data were collected with an iPhone 6s smartphone running SensingKit, which gathers information from the Core Motion framework on iOS devices; the phone was placed in the right front pocket of each person for the specified six activities. A total of 24 participants of varying gender, age, weight and height performed the six activities in 15 trials under the same settings and conditions. All subjects were asked to perform six different activities (downstairs, upstairs, walking, jogging, sitting and standing) wearing flat shoes [33]. These six activities are given in Table II.

The smartphone used to collect data includes a three-axis accelerometer and a three-axis gyroscope. Since there are two three-axis sensor units, six signals are recorded in total: the acceleration along the x, y and z axes (from the accelerometer) and the rotation rate around the x, y and z axes (from the gyroscope). The data are provided in three different sub-folders; the time-series data folder was used because it contains both the accelerometer and the gyroscope data.

Each experiment is a multivariate time series, so a time series of 12 features was obtained, as in Table III.

IV. METHOD

A. One (1) Dimensional Local Binary Pattern (1D-LBP)

In this study, the local binary pattern (LBP) method, which is commonly used in two-dimensional image processing, has been adapted as a feature extraction method for raw sensor signals by reducing it to one dimension [34-35]. For this purpose, the signals recorded from various sensor groups were analyzed, and the features in these signals were extracted using the resulting one-dimensional local binary pattern (1D-LBP) method. Fig. 1 systematically describes the stages of extracting features with the 1D-LBP method.

As shown in Fig. 1, each point on the signal is compared with its previous and next neighbors to form the 1D-LBP operator [36-37]. The binary string obtained from these comparisons is then converted to a decimal value, giving a 1D-LBP label for each point. To create the binary string, P neighbors are used: the P/2 points before and the P/2 points after the center point P_c. For P = 8, as shown in Fig. 1(b), four neighboring points are taken before (P_0, P_1, P_2, P_3) and four after (P_4, P_5, P_6, P_7) each center point P_c. All neighbors are compared with the center point: if the neighboring value P_i is greater than or equal to the center value, the corresponding bit is set to 1, otherwise to 0. The comparison is performed for all points on the signal using Equation (1). The formulation of 1D-LBP is given in Eq. (1-2) [38]:

t = P_i - P_c (1)

s(t) = { 1, t >= 0; 0, t < 0 } (2)

where P_i and P_c represent the neighboring and center points, respectively. After the binary string is constructed with Equation (1), the binary value is converted to a decimal value, and P_c is labeled with this 1D-LBP value. These stages are performed for all points along the signal. This procedure produces a new signal whose values range from 0 to 255; these values are called local binary patterns, and the frequency of each value represents a pattern. The histogram of the resulting 1D-LBP signal therefore has 256 different patterns, and statistical features are obtained from these histograms.
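The comparison and labeling steps above can be sketched in a few lines (a minimal illustration, assuming numeric sensor samples in a NumPy array; the function and variable names are ours, not the paper's):

```python
import numpy as np

def one_d_lbp(signal, p=8):
    """1D-LBP sketch: compare each center sample with its p/2 neighbors
    on each side; a neighbor >= center contributes bit 1."""
    half = p // 2
    codes = []
    for c in range(half, len(signal) - half):
        neighbors = np.concatenate([signal[c - half:c],
                                    signal[c + 1:c + 1 + half]])
        bits = (neighbors >= signal[c]).astype(int)
        # weight the bits by powers of two -> decimal label in 0..2**p - 1
        codes.append(int(np.dot(bits, 2 ** np.arange(p))))
    return np.array(codes)

sig = np.array([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8], dtype=float)
labels = one_d_lbp(sig)                    # one label per interior point
hist = np.bincount(labels, minlength=256)  # 256-bin pattern histogram
```

The 256-bin histogram is the quantity from which the statistical features described later are computed.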

B. Down Sampling 1D-LBP

In this method, a down-sampling procedure is proposed to generate a new signal from an existing signal. According to the defined window length (WL) parameter, new signals are created by taking sample values from the signal: for each window, only one value is kept. For example, if WL = 4, the window contains four signal values, and the mean, median, minimum or maximum of these four values is taken for the newly generated signal. The number of samples in the new signal is thus reduced by a factor of WL. For example, if this process is applied once, the sample size of the resulting new signal follows equation (3) below.

N_new = floor(N / WL) (3)

New signals can be created at different levels by repeating the same process on the newly formed signal. The graph for the implementation of the method is shown in Fig. 2.

Fig. 2(a) shows the original signals. Fig. 2(b) shows the result of applying the DS-Means method to these signals, and Fig. 2(c) the result of applying the DS-Means method again to the signals in (b). With each reapplication of the DS method, the length of the newly formed signal decreases.
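The windowed sampling described above can be sketched as follows (illustrative only; the helper name and the use of non-overlapping windows with an incomplete tail window dropped are our assumptions):

```python
import numpy as np

def down_sample(signal, wl=4, reducer=np.mean):
    """Reduce each non-overlapping window of length wl to one value
    (mean, median, min or max), shrinking the signal by a factor of wl."""
    n = (len(signal) // wl) * wl              # drop the incomplete tail window
    windows = np.asarray(signal[:n]).reshape(-1, wl)
    return reducer(windows, axis=1)

sig = np.arange(16, dtype=float)
level1 = down_sample(sig, wl=4)               # length 16 -> 4
level2 = down_sample(level1, wl=4)            # length 4 -> 1
```

Swapping `reducer` for `np.median`, `np.min` or `np.max` gives the other sampling parameters discussed in the text.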

The DS-1D-LBP method is the application of the 1D-LBP method to the signals at each level. By applying 1D-LBP at each level, more patterns are obtained, providing micro and macro patterns for separating the signals. The number of levels to which the DS method is applied is chosen by the user. The DS method has three important parameters. The first parameter is the number of levels. The second parameter is WL; with different WL values defined on the signals, different new signals are obtained. The last parameter specifies which statistic of the samples entering a window is carried to the next level: the minimum, maximum, median or mean of the values in the window.

Features are obtained by applying 1D-LBP to the signals at each level; with DS-1D-LBP, a larger feature space is thus formed. Although the computational cost appears to increase, providing patterns that improve the success rate is a significant advantage.
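Combining the two steps, a multi-level DS-1D-LBP pass might look like the sketch below (a simplified reconstruction under our assumptions about the 1D-LBP and down-sampling details; all names are illustrative):

```python
import numpy as np

def one_d_lbp(sig, p=8):
    """1D-LBP labels for interior points (p/2 neighbors each side)."""
    h = p // 2
    return np.array([int(np.dot(
        (np.r_[sig[c - h:c], sig[c + 1:c + 1 + h]] >= sig[c]).astype(int),
        2 ** np.arange(p)))
        for c in range(h, len(sig) - h)])

def down_sample(sig, wl=4, reducer=np.mean):
    """One value per non-overlapping window of length wl."""
    n = (len(sig) // wl) * wl
    return reducer(np.asarray(sig[:n]).reshape(-1, wl), axis=1)

def ds_1d_lbp_histograms(sig, levels=3, wl=4, reducer=np.mean):
    """Apply 1D-LBP at each down-sampling level; one 256-bin histogram
    per level captures micro-to-macro patterns."""
    hists, cur = [], np.asarray(sig, dtype=float)
    for _ in range(levels):
        hists.append(np.bincount(one_d_lbp(cur), minlength=256))
        cur = down_sample(cur, wl=wl, reducer=reducer)
    return hists

rng = np.random.default_rng(0)
hists = ds_1d_lbp_histograms(rng.standard_normal(256), levels=3)
```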

C. Statistical Features

In this study, let S be a signal of length N, and let X = {X_1, X_2, ..., X_N} be the values of the new signal formed after applying the 1D-LBP methods above; the statistical features obtained from this signal are given in Table IV.
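As an illustration of this stage, a sketch of some such statistics follows (the exact Table IV feature list is not reproduced in the text, so the selection below is an assumption based on the features named elsewhere in the paper):

```python
import numpy as np

def statistical_features(x):
    """A subset of plausible Table IV statistics computed from a signal
    (or histogram): min, max, mean, median, energy, skewness, kurtosis
    and coefficient of variation."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()
    z = (x - mu) / sigma
    return {
        "min": x.min(), "max": x.max(), "mean": mu,
        "median": np.median(x),
        "energy": np.sum(x ** 2),          # sum of squared values
        "skewness": np.mean(z ** 3),
        "kurtosis": np.mean(z ** 4) - 3.0,  # excess kurtosis
        "variation": sigma / mu if mu else 0.0,
    }

feats = statistical_features(np.array([1.0, 2.0, 2.0, 3.0, 4.0]))
```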

D. Proposed Method for Activity Recognition

The activity recognition system is shown in Fig. 3. The proposed system consists of six stages. The processes that occur at each stage are briefly summarized in Fig. 3.

Block 1-2: The dataset contains time series data, including position, gravity, rotation rate, and user acceleration, generated by the accelerometer and gyroscope sensors.

Block 3: At this stage, 1D-LBP and DS-1D-LBP transformations were applied to the obtained signals.

Block 4: At this stage, histograms of the newly formed 1D-LBP and DS-1D-LBP signals are generated.

Block 5: In this section, statistical features (Table IV) are obtained from histograms.

Block 6: Classification was performed with an extreme learning machine (ELM) on the statistical features, according to a 10-fold cross-validation test.

E. Extreme Learning Machine

The Extreme Learning Machine (ELM) is a single hidden layer feed-forward artificial neural network (ANN) model in which the input weights are randomly generated and the output weights are calculated analytically. In the hidden layer of ELM, besides activation functions such as sigmoidal, sine, Gaussian and hard-limit, non-differentiable or discontinuous activation functions can also be used, unlike in classical ANNs [39-40].

The performance of conventional feed-forward artificial neural networks depends on parameters such as momentum and learning rate. In such networks, parameters such as weights and threshold values need to be updated with gradient-based learning algorithms. However, the learning process takes a long time, and the error can become stuck at a local minimum before good performance is achieved. Changing the value of the momentum may prevent the error from becoming stuck at a local minimum, but it does not shorten the long learning process.

In ELM, input weights and threshold values are randomly generated, but output weights are obtained analytically [41]. The ELM network is a specialized version of a single hidden layer feed-forward ANN model. Fig. 4 shows the structure of a single hidden layer feed-forward ANN.

In Fig. 4, X = (x_1, x_2, x_3, ..., x_N) is the input and Y is the output. The network with M neurons in the hidden layer is expressed mathematically as in Eq. (4) [39].

sum_{i=1..M} beta_i g(W_i . x_k + b_i) = O_k,  k = 1, ..., N (4)

where W_i = (w_i1, w_i2, ..., w_in) are the weights of the input layer, beta_i = (beta_i1, beta_i2, ..., beta_im) are the weights of the output layer, b_i denotes the threshold value of the i-th hidden neuron, and O_k the output values. g(.) is the activation function [42]. In a network with N training samples, the purpose is to drive the error sum_{k=1..N} ||O_k - Y_k|| to zero, i.e. to minimize this error. Thus, the system may be represented as in Eq. (5) [41].

sum_{i=1..M} beta_i g(W_i . x_k + b_i) = Y_k,  k = 1, ..., N (5)

The above system of equations can be written compactly as

H beta = Y (6)

[41], where

H = [ g(W_1 . x_1 + b_1)  ...  g(W_M . x_1 + b_M)
      ...
      g(W_1 . x_N + b_1)  ...  g(W_M . x_N + b_M) ]  (N x M) (7)

beta = [beta_1^T; ...; beta_M^T]  (M x m),   Y = [y_1^T; ...; y_N^T]  (N x m) (8)

In Eq. (7), H is the hidden layer output matrix [42]. Whereas training a conventional feed-forward ANN means iteratively updating its weights, training an ELM network amounts to finding the least-squares solution of the linear system H beta = Y [43]. The ELM algorithm can be summarized in three stages as follows [44-45].

Stage 1: The input weights W_i = (w_i1, w_i2, ..., w_in) and the hidden layer threshold values b_i are randomly generated.

Stage 2: The hidden layer output matrix H is calculated.

Stage 3: The output weights beta are calculated as beta = H^+ Y, where H^+ is the Moore-Penrose generalized inverse of H and Y is the desired output.
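The three stages can be sketched with NumPy (a minimal reconstruction, not the authors' implementation; the toy data and all names are ours):

```python
import numpy as np

def train_elm(X, Y, m=50, seed=0):
    """ELM sketch: Stage 1 draws random input weights W and biases b;
    Stage 2 computes the hidden output H = g(XW + b); Stage 3 solves
    beta = pinv(H) @ Y analytically (least squares for H beta = Y)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], m))
    b = rng.standard_normal(m)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid activation g(.)
    beta = np.linalg.pinv(H) @ Y              # Moore-Penrose solution
    return W, b, beta

def predict_elm(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# toy two-class problem with one-hot targets
rng = np.random.default_rng(1)
X = rng.standard_normal((80, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
Y = np.eye(2)[y]
W, b, beta = train_elm(X, Y, m=40)
pred = predict_elm(X, W, b, beta).argmax(axis=1)
train_acc = (pred == y).mean()
```

Because the output weights are obtained in one linear-algebra step rather than by iterative gradient descent, training is fast, which matches the speed claim made for ELM in the text.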

F. Performance Measures

Accuracy, precision, recall, and f-score were used to demonstrate the performance of the methods proposed in the study. These success measures are calculated as in Eq. (9-12).

accuracy = (TP + TN) / (TP + TN + FP + FN) (9)

precision = TP /(TP + FP) (10)

recall = TP /(TP + FN) (11)

f-score = 2 * precision * recall / (precision + recall) (12)

In these equations, T, F, P, and N denote true, false, positive, and negative, respectively. For example, TP is the number of correctly classified positive samples; FN is the number of false negative samples.

Accuracy is the most popular and simplest measure of success. It is defined as the ratio of the number of correctly classified samples (TP+TN) to the total number of samples (TP+TN+FP+FN).

Precision: the degree of exactness of the classifier; the ratio of true positives (TP) to all samples labeled positive (TP+FP).

Recall: the ratio of true positives (TP) to the total number of actually positive samples (TP+FN).

F-score: calculated from the precision and recall metrics, balancing the two in a single measure.
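Eqs. (9)-(12) can be computed directly from the confusion-matrix counts, e.g.:

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall and F-score as in Eqs. (9)-(12)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f_score

# example counts (hypothetical, not from the paper's tables)
acc, p, r, f = classification_metrics(tp=40, fp=10, fn=5, tn=45)
```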


The data set used in the study consists of 360 samples for six activity types, downloaded from the data repository website. Firstly, the 1D-LBP and DS-1D-LBP transformations were applied to the signals. After this transformation, classification with ELM was performed using the statistical features obtained. ELM is a single hidden layer feed-forward ANN model in which the input weights and threshold values are randomly generated, but the output weights are computed analytically by least-squares regression. Classification was performed according to 10-fold cross-validation, using the window length (WL) and sampling (DS_minimum, DS_maximum, DS_mean, and DS_median) parameters. The purpose of these parameters is to capture different patterns and increase the success rate.

The success rates obtained for both training and test data are given in Table V and Table VI.

As seen in Table V and Table VI, ELM achieved significant success in activity recognition. The highest success rates are obtained with the features for WL = 4. As the value of WL increases, the success rate tends to decrease, because fewer samples are transferred to the upper level in the DS-1D-LBP method and the transformation causes a loss of information. Another critical point is how the samples moved to the higher level are chosen in the sampling process. Taking the average of the samples entering the defined window influenced the success rate favorably: the DS_mean parameter performed the sampling more successfully than the other parameters (DS_minimum, DS_maximum, DS_median). By sampling parameter, the success rates ranked in the order DS_mean, DS_maximum, DS_median, and DS_minimum. Whether these parameters are equally successful for other signal groups remains to be determined. When the test sets were evaluated (Table VI), the highest success rate was 96.8950% with DS-1D-LBP_{WL=4, DS-Means}. In general, both the training and the test phases gave high accuracy results with DS-1D-LBP + ELM.

Unlike the classical ANN model, non-differentiable or discontinuous activation functions can be used in the hidden layer of the ELM model, in addition to activation functions such as sigmoidal, sine, Gaussian and hard-limit. Table VII shows the success rates obtained by using different activation functions with the features produced by the DS-Means sampling method.

In Table VII, the most successful results were observed using the sigmoid activation function in the hidden layer of the ELM. With this activation function, high results were obtained for all values of WL (4, 5, 6, 7 and 8). The highest success rate was observed as 96.8950% with DS-1D-LBP_{WL=4, DS-Means} + ELM_Sigmoid. The second most successful function after the sigmoid was the tangent function.

In the DS-1D-LBP method, different feature groups can be created according to the sampling parameter. For all signals, the features from each level can be used individually, or the features extracted at all levels can be used together. The dataset used in the study consists of 12 channels, and 12 statistical features are obtained from each channel. Therefore, 144 features are obtained at each level.

According to Table VIII, when the features produced at each level are used separately, the most successful results are observed with the first-level features. This is because the signal at level 1 contains more samples (is longer), so more distinctive features could be extracted; the higher the level, the lower the success rate. The highest success rate when the features at each level were used independently was 88.4039%, observed with DS-1D-LBP_{level=1, DS-Means} + ELM_Sigmoid.

The success of the ELM model depends on the activation function used in the neurons and the number of neurons in the hidden layer. These parameters are determined by trial and error. In this study, experiments were performed with the number of neurons in the hidden layer ranging from 1 to 100. The success rates of ELM according to the number of hidden neurons are given in Table IX.

As seen in Table IX, the success rate rises as the number of neurons in the hidden layer of the ELM increases. Experiments were performed with up to 100 neurons in the hidden layer; too many neurons cause the ELM to memorize (overfit) the data. Surface graphs of the success rates observed with different numbers of hidden neurons for both the training and test sets are given in Fig. 5 and Fig. 6.

To test the success of the ELM, different machine learning methods were used to classify the same data set. Activity classification was performed according to a 10-fold cross-validation test with Random Forest (RF), Multi-Layer Perceptron (MLP), k-NN and SVM, using the DS-1D-LBP_{WL, DS-Means} features. The success rates observed with the different machine learning methods are given in Table X.

As seen in Table X, the most successful model was ELM, which was found to be much more successful than the other methods. The second most successful model after ELM was k-NN. The least successful models were SVM and MLP.

Accuracy, precision, recall, and f-score were used to demonstrate the performance of the ELM method. The performance measures observed with ELM using the DS-1D-LBP_{DS-Means} features are given in Table XI. For each activity, the accuracy and the other performance measures are given. As can be seen, successful results were obtained with DS-1D-LBP + ELM.

To test the effectiveness of the proposed feature extraction method for activity recognition, it was compared with feature groups obtained from the same signals in both the time and frequency domains. From the frequency and time domains, the features specified in Table IV (min, max, mean, median, energy, kurtosis, skewness, entropy, correlation and coefficient of variation) were obtained, and classification by ELM was performed using these features. The success rates are shown in Table XII.

From Table XII, it is observed that the statistical features obtained by the proposed feature extraction methods achieve higher success than those obtained from the time and frequency domains. It was also observed that, for activity recognition, the features obtained from the signals in the frequency domain are more successful than the same features obtained from the same signals in the time domain.

A. Dimension Reduction

The data set used in the study consists of 432 input features. Too many features increase the computational cost of the model; a large number of features slows the training of the model and makes it difficult to create a good one. Although ELM is a high-speed model, feature selection was performed on the data set in this section, with the aim of reducing the number of features without significantly affecting the success of the ELM. The probability-based Consistency Subset Eval method proposed by Liu and Setiono (1996) was applied [46]; the method is also available in the Weka program. The classification performance of ELM after feature selection is given in Table XIII.

As shown in Table XIII, after the feature selection process, the dimensions of the input feature vectors used in the classification process were reduced. In classifications with fewer features, ELM learned successfully in the training process. In the test phase, high results were observed for WL = 4 and WL = 7; with WL = 7, the selected sub-feature group achieved a high success rate of 92.7911%.

B. Discussion

In this study, a new feature extraction approach for HAR is proposed. The proposed DS-1D-LBP extracts distinctive features for HAR. Using the obtained features, a 96.87% success rate was observed with ELM. This success rate compares favorably with the success rates reported in the literature for mobile sensor signals.


In this study, a novel approach to activity recognition is proposed using the sensor signals of smartphones. In recent years, the sensors of smartphones have been widely used in activity recognition because smartphones are devices that people use in their daily life not only for communication but also for other purposes. Accelerometers, gyroscopes, barometers and light sensors on these devices provide significant advantages for activity recognition and similar applications. In this study, a novel feature extraction approach for activity recognition is proposed for smartphones using inertial sensors such as accelerometers and gyroscopes. To obtain effective features from the sensor signals, DS-1D-LBP has been proposed. Using the features obtained by this method, classification with ELM was performed; experiments were carried out with different activation functions and different numbers of neurons in the hidden layer of the ELM. As a result, the highest success rate, 96.87%, was obtained with DS-1D-LBP_{WL=4, DS-Means} + ELM_Sigmoid. The proposed DS-1D-LBP method is easily applied to signals, so it can be used in real-time activity recognition systems. It may also be successful in distinguishing more complex movements.


This work was supported by the Scientific Research Projects Coordination Unit of Siirt University as a project with the number 2018-SIUFEB-DR-009. The authors of this article thank Siirt University for their support.


[1] N. Gyorbiro, A. Fabian, G. Homanyi, "An activity recognition system for mobile phones," Mobile Networks and Applications, vol. 14, no. 1, pp. 82-91, February 2009. doi:10.1007/s11036-008-0112-y

[2] T. Choudhury, G. Borriello, J. A. Landay, L. LeGrand, J. Lester, A. Rahimi, B. Harrison, "The mobile sensing platform: An embedded activity recognition system," IEEE Pervasive Computing, vol. 7, no. 2, pp. 32-41, April 2008. doi:10.1109/MPRV.2008.39

[3] O. D. Lara, M. A. Labrador, "A survey on human activity recognition using wearable sensors," IEEE Communications Surveys and Tutorials, vol. 15, no. 3, pp. 1192-1209, October 2013. doi:10.1109/SURV.2012.110112.00192

[4] J. Yin, Q. Yang, J. J. Pan, "Sensor-based abnormal human-activity detection," IEEE Transactions on Knowledge & Data Engineering, vol. 20, no. 8, pp. 1082-1090, August 2007. doi:10.1109/TKDE.2007.1042

[5] J. R. Kwapisz, G. M. Weiss, S. A. Moore, "Activity recognition using cell phone accelerometers," ACM SIGKDD Explorations Newsletter, vol. 12, no. 2, pp. 74-82, December 2011. doi:10.1145/1964897.1964918

[6] M. M. Hassan, M. Z. Uddin, A. Mohamed, A. Almogren, "A robust human activity recognition system using smartphone sensors and deep learning," Future Generation Computer Systems, vol. 81, pp. 307-313, April 2018. doi:10.1016/j.future.2017.11.029

[7] H. F. Nweke, Y. W. Teh, M. A. Al-Garadi, U. R. Alo, "Deep Learning Algorithms for Human Activity Recognition using Mobile and Wearable Sensor Networks: State of the Art and Research Challenges," Expert Systems with Applications, vol. 105, pp. 233-261, September 2018. doi:10.1016/j.eswa.2018.03.056

[8] A. Tharwat, H. Mahdi, M. Elhoseny, A. E. Hassanien, "Recognizing human activity in mobile crowdsensing environment using the optimized k-NN algorithm," Expert Systems With Applications, vol. 107, pp. 32-44, October 2018. doi:10.1016/j.eswa.2018.04.017

[9] A. Jordao, L. A. B. Torres, W. R. Schwartz, "Novel approaches to human activity recognition based on accelerometer data," Signal, Image and Video Processing, vol. 12, no. 7, pp. 1-8, October 2018. doi:10.1007/s11760-018-1293-x

[10] R. San-Segundo, H. Blunck, J. Moreno-Pimentel, A. Stisen, M. Gil-Martin, "Robust Human Activity Recognition using smartwatches and smartphones," Engineering Applications of Artificial Intelligence, vol. 72, pp. 190-202, June 2018. doi:10.1016/j.engappai.2018.04.002

[11] A. Jain, V. Kanhangad, "Human Activity Classification in Smartphones Using Accelerometer and Gyroscope Sensors," IEEE Sensors Journal, vol. 18, no. 3, pp. 1169-1177, February 2018. doi:10.1109/JSEN.2017.2782492

[12] X. Wang, D. Rosenblum, Y. Wang, "Context-aware mobile music recommendation for daily activities," In Proceedings of the 20th ACM international conference on Multimedia ACM, pp. 99-108, October 2012. doi:10.1145/2393347.2393368

[13] A. Avci, S. Bosch, M. Marin-Perianu, R. Marin-Perianu, P. Havinga, "Activity Recognition Using Inertial Sensing for Healthcare, Wellbeing and Sports Applications: A Survey," In Proceedings of the 23rd International Conference on Architecture of Computing Systems (ARCS), Hannover, Germany, pp. 1-10, 22-23 February 2010.

[14] J. Sung, C. Ponce, B. Selman, A. Saxena, "Human activity detection from RGBD images," In Proceedings of the AAAI Workshop on Plan, Activity, and Intent Recognition, 2011.

[15] S. Chernbumroong, S. Cang, A. Atkins, H. Yu, "Elderly activities recognition and classification for applications in assisted living," Expert Systems with Applications, vol. 40, no. 5, pp. 1662-1674, April 2013. doi:10.1016/j.eswa.2012.09.004

[16] K. Tural, E. Akdogan, "Classification of Human Movements with Artificial Neural Networks using the data of smartphone detectors," Automatic Control Turkish National Conference, TOK2017, September 2017.

[17] P. Siirtola, J. Roning, "Recognizing human activities user-independently on smartphones based on accelerometer data," IJIMAI, vol. 1, no. 5, pp. 38-45, June 2012.

[18] F. Foerster, J. Fahrenberg, "Motion pattern and posture: Correctly assessed by calibrated accelerometers," Behavior Research Methods, Instruments, & Computers, vol. 32, no. 3, pp. 450-457, September 2000. doi:10.3758/BF03200815

[19] H. Ponce, M. L. Martinez-Villasenor, L. Miralles-Pechuan, "A Novel Wearable Sensor-Based Human Activity Recognition Approach Using Artificial Hydrocarbon Networks," Sensors, vol. 16, no. 7, July 2016. doi:10.3390/s16071033

[20] O. Tuncel, K. Altun, B. Barshan, "Jiroskop Sinyallerinin Islenmesiyle Bacak Hareketlerinin Siniflandirilmasi," Conference: IEEE 17th Conference on Signal Processing, Communications, and Applications (SIU 2009), 2009. doi:978-1-4244-4436-6/09/

[21] J. Mantyjarvi, J. Himberg, T. Seppanen, "Recognizing human motion with multiple acceleration sensors," In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, IEEE Press, pp. 747-52, 2001. doi:10.1109/ICSMC.2001.973004

[22] N. A. Capela, E. D. Lemaire, N. Baddour, "Feature Selection for Wearable Smartphone-Based Human Activity Recognition with Able-bodied, Elderly, and Stroke Patients," PLOS ONE, vol. 10, no. 4, April 2015. doi:10.1371/journal.pone.0124414

[23] J. Howcroft, J. Kofman, E.D. Lemaire, "Feature selection for elderly faller classification based on wearable sensors," Journal of Neuro-Engineering and Rehabilitation, vol. 14:47, May 2017. doi:10.1186/s12984-017-0255-9

[24] R. Damasevicius, M. Vasiljevas, J. Salkevicius, M. Wozniak, "Human Activity Recognition in AAL Environments Using Random Projections," Computational and Mathematical Methods in Medicine, May 2016. doi:10.1155/2016/4073584

[25] V. Elvira, A. Naazabal-Renteria, A. Artes-Rodrigues, "A novel feature extraction technique for human activity recognition," Statistical Signal Processing (SSP), IEEE, Gold Coast, VIC, Australia, August 2014. doi:10.1109/SSP.2014.6884604

[26] L. Atallah, B. Lo, R. King, G. Z. Yang, "Sensor positioning for activity recognition using wearable accelerometers," IEEE Transactions on Biomedical Circuits and Systems, vol. 5, no. 4, pp. 320-329, July 2011. doi:10.1109/TBCAS.2011.2160540

[27] A. Bayat, M. Pomplun, D. A. Tran, "A study on human activity recognition using accelerometer data from smartphones," Procedia Computer Science, vol. 34, pp. 450-457, August 2014.

[28] J. Parkka, M. Ermes, P. Korpipaa, J. Mantyjarvi, J. Peltola, I. Korhonen, "Activity classification using realistic data from wearable sensors," IEEE Transactions on Information Technology in Biomedicine, vol. 10, no. 1, pp. 119-128, January 2006. doi:10.1109/TITB.2005.856863

[29] U. Maurer, A. Smailagic, D. P. Siewiorek, M. Deisher, "Activity recognition and monitoring using multiple sensors on different body positions," In Wearable and Implantable Body Sensor Networks (BSN 06), IEEE, Cambridge, Mass, USA, pp. 113-116, April 2006. doi:10.1109/BSN.2006.6

[30] O. C. Kurban, "Classification of human activities with wearable sensors without feature extraction," Master Thesis, Yildiz Technical University, Institute of Science, Istanbul, Turkey, 2014.

[31] Y. Al Jeroudi, M. A. Ali, M. Latief, R. Akmeliawati, "Online Sequential Extreme Learning Machine Algorithm Based Human Activity Recognition Using Inertial Data," in Proc. 10th Asian Control Conference (ASCC), Kota Kinabalu, Malaysia, 2015. doi:10.1109/ASCC.2015.7244597

[32] A. Alvarez-Alvarez, J. M. Alonso, G. Trivino, "Human activity recognition in indoor environments through fusing information extracted from the intensity of WiFi signal and accelerations," Information Sciences, vol. 233, pp. 162-182, June 2013. doi:10.1016/j.ins.2013.01.029

[33] M. Malekzadeh, R. G. Clegg, A. Cavallaro, H. Haddadi, "Protecting sensory data against sensitive inferences," In Proceedings of the 1st Workshop on Privacy by Design in Distributed Systems ACM, April 2018. doi:10.1145/3195258.3195260

[34] Y. Kaya, M. Uyar, R. Tekin, S. Yildirim, "1D-local binary pattern based feature extraction for classification of epileptic EEG signals," Applied Mathematics and Computation, vol. 243, pp. 209-219, September 2014. doi:10.1016/j.amc.2014.05.128

[35] Y. Kaya, "Hidden pattern discovery on epileptic EEG with 1-D local binary patterns and epileptic seizures detection by grey relational analysis," Australasian Physical & Engineering Sciences in Medicine, vol. 38, no. 3, pp. 435-446, September 2015. doi:10.1007/s13246-015-0362-5

[36] O. F. Ertugrul, Y. Kaya, R. Tekin, M. N. Almali, "Detection of Parkinson's disease by shifted one-dimensional local binary patterns from gait," Expert Systems with Applications, vol. 56, pp. 156-163, September 2016. doi:10.1016/j.eswa.2016.03.018

[37] O. F. Ertugrul, Y. Kaya, R. Tekin, "A novel approach for SEMG signal classification with adaptive local binary patterns," Medical & biological engineering & computing, vol. 54, no. 7, pp. 1137-1146, July 2016. doi:10.1007/s11517-015-1443-z

[38] Y. Kaya, O. F. Ertugrul, "A novel approach for spam email detection based on shifted binary patterns," Security and Communication Networks, vol. 9, no. 10, pp. 1216-1225, January 2016. doi:10.1002/sec.1412

[39] S. Suresh, S. Saraswathi, N. Sundararajan, "Performance enhancement of extreme learning machine for multi-category sparse data classification problems," Engineering Applications of Artificial Intelligence, vol. 23, no. 7, pp. 1149-1157, October 2010. doi:10.1016/j.engappai.2010.06.009

[40] Y. Kaya, L. Kayci, R. Tekin, O. F. Ertugrul, "Evaluation of texture features for automatic detecting butterfly species using extreme learning machine," Journal of Experimental & Theoretical Artificial Intelligence, vol. 26, no. 2, pp. 267-281, January 2014. doi:10.1080/0952813X.2013.861875

[41] G. B. Huang, Q. Y. Zhu, C. K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1-3, pp. 489-501, December 2006. doi:10.1016/j.neucom.2005.12.126

[42] R. Hai-Jun, O. Yew-Soon, T. Ah-Hwee, Z. Zexuan, "A fast pruned-extreme learning machine for classification problem," Neurocomputing, vol. 72, no. 1-3, pp. 359-366, December 2008. doi:10.1016/j.neucom.2008.01.005

[43] G. B. Huang, Q. Y. Zhu, C. K. Siew, "Extreme learning machine: a new learning scheme of feedforward neural networks," In Neural Networks, Proceedings, IEEE International Joint Conference on, vol. 2, pp. 985-990, July 2004. doi:10.1109/IJCNN.2004.1380068

[44] S. D. Handoko, K. C. Keong, Y. S. Ong, G. L. Zhang, V. Brusic, "Extreme learning machine for predicting HLA-peptide binding," Lecture Notes in Computer Science, vol. 3973, pp. 716-721, May 2006. doi:10.1007/11760191_105

[45] Q. Yuan, Z. Weidong, L. Shufang, C. Dongmei, "Epileptic EEG classification based on Extreme learning machine and nonlinear features," Epilepsy Research, vol. 96, no. 1-2, pp. 29-38, September 2011. doi:10.1016/j.eplepsyres.2011.04.013

[46] H. Liu, R. Setiono, "A probabilistic approach to feature selection - A filter solution," In 13th International Conference on Machine Learning, vol. 96, pp. 319-327, 1996.

(1) Siirt University, Department of Computer Engineering, Siirt 56100, Turkey

(2) Siirt University, Department of Electrical and Electronics Engineering, Siirt 56100, Turkey

This work was supported by the Scientific Research Projects Coordination Unit of Siirt University as a project with the number 2018-SIUFEB-DR-009.

Authors                   Sensors                                 Model                                           Accuracy

Siirtola et al. [17]      Smartphone (accelerometer)              Decision tree, Knn/QDA                          95.00%
Foerster et al. [18]      Accelerometer                           Chi-square and Cramer's rule                    96.80%
Ponce et al. [19]         Accelerometer, gyroscope, magnetometer  Artificial hydrocarbon networks (AHN)           97%
Tuncel et al. [20]        Gyroscope                               Bayesian decision theory, Knn, ANN, SVM         80-96%
Mantyjarvi et al. [21]    Accelerometer                           PCA and ICA                                     83-90%
Capela et al. [22]        Smartphone (accelerometer, gyroscope)   Naive Bayes, SVM, J48 decision tree             90-97%
Howcroft et al. [23]      Accelerometer and pressure insole       Correlation-based feature selection, FCBF       95%
Damasevicius et al. [24]  Accelerometer, gyroscope                Jaccard distance                                95.6%
Elvira et al. [25]        APDM OPAL miniature gyroscope           Hidden Markov models (HMM)                      89%
Atallah et al. [26]       Accelerometer                           Relief and Simba feature selection, Bayes, Knn  90%
Bayat et al. [27]         Three-axis accelerometer                ANN, SVM, Random Forest                         81-91%
Parkka et al. [28]        GPS, ECG, accelerometer                 Decision trees                                  86%
Maurer et al. [29]        Multiple sensors                        Decision trees, Knn, Naive Bayes                92%
Kurban [30]               Accelerometer                           ANN, SVM, NB                                    83-98%
Al Jeroudi et al. [31]    Smartphone (accelerometer, gyroscope)   Online sequential extreme learning machine      82.05%
Alvarez et al. [32]       WiFi signal and accelerometer           Fuzzy modeling methodology (HILK)               83.70%


Activity Code  Activity Name

A1             Sitting activity
A2             Standing up activity
A3             Walking downstairs activity
A4             Walking upstairs activity
A5             Walking activity
A6             Jogging activity


Signal Code  Signal Name

S1           attitude.roll
S2           attitude.pitch
S3           attitude.yaw
S4           gravity.x
S5           gravity.y
S6           gravity.z
S7           rotationRate.x
S8           rotationRate.y
S9           rotationRate.z
S10          userAcceleration.x
S11          userAcceleration.y
S12          userAcceleration.z


Number  Feature

1       Mean
2       Standard Deviation
3       Energy
4       Entropy
5       Correlation
6       Sequential absolute differences
7       Kurtosis
8       Skewness
9       Median
10      Minimum
11      Maximum
12      Coefficient of variance

Number  Formula (standard definitions)

1       f_1 = (1/N) Σ_{i=1..N} x_i
2       f_2 = sqrt( (1/N) Σ_{i=1..N} (x_i - f_1)^2 )
3       f_3 = Σ_{i=1..N} x_i^2
4       f_4 = -Σ_j p_j log2(p_j), where p_j is the normalized amplitude histogram
5       f_5 = Σ_{i=1..N} (x_i - x̄)(y_i - ȳ) / (N s_x s_y)
6       f_6 = Σ_{i=1..N-1} |x_{i+1} - x_i|
7       f_7 = (1/N) Σ_{i=1..N} (x_i - f_1)^4 / f_2^4
8       f_8 = (1/N) Σ_{i=1..N} (x_i - f_1)^3 / f_2^3
9       f_9 = median{x_1, x_2, x_3, ..., x_N}
10      f_10 = min{x_1, x_2, x_3, ..., x_N}
11      f_11 = max{x_1, x_2, x_3, ..., x_N}
12      f_12 = f_2 / f_1
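
The time-domain features in the table above can be computed directly from one window of samples. The following Python sketch illustrates one plausible implementation; the 10-bin histogram used for entropy and the use of lag-1 autocorrelation as the "correlation" feature are assumptions, not details taken from the paper:

```python
import numpy as np

def extract_features(x):
    """Compute the 12 time-domain features listed above for one window.

    A sketch only: the entropy binning and the lag-1 autocorrelation
    standing in for the correlation feature are assumptions.
    """
    x = np.asarray(x, dtype=float)
    mean = x.mean()
    std = x.std()                           # population form (1/N), assumed
    energy = np.sum(x ** 2)
    hist, _ = np.histogram(x, bins=10)      # assumed 10-bin amplitude histogram
    p = hist / hist.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    corr = np.corrcoef(x[:-1], x[1:])[0, 1]         # lag-1 autocorrelation
    sad = np.sum(np.abs(np.diff(x)))                # sequential absolute diffs
    kurt = np.mean((x - mean) ** 4) / std ** 4
    skew = np.mean((x - mean) ** 3) / std ** 3
    return [mean, std, energy, entropy, corr, sad, kurt, skew,
            float(np.median(x)), x.min(), x.max(), std / mean]
```

Applied to each of the 12 signals, this yields 12 x 12 = 144 values, matching the size of the time-domain feature group reported below.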


WL    DS_minimum  DS_maximum  DS_mean  DS_median

WL=4  99.3211     99.3520     99.4135  99.4137
WL=5  99.1671     99.3212     99.1355  99.3521
WL=6  98.8892     98.8586     99.2596  98.8894
WL=7  98.7039     98.6724     98.8886  98.9505
WL=8  98.6112     99.1666     98.9503  99.2903


WL    DS_minimum  DS_maximum  DS_mean  DS_median

WL=4  94.9003     95.8537     96.8950  94.2021
WL=5  90.0054     93.3742     94.5108  92.2385
WL=6  89.3494     92.4895     95.0123  92.1697
WL=7  89.8951     93.2881     93.3328  90.5221
WL=8  89.7119     93.0385     90.1525  90.0071


Function          WL=4     WL=5     WL=6     WL=7     WL=8

Sigmoid           96.8950  94.5108  95.0123  93.2881  93.0385
Hard Limit        81.6389  78.0823  80.3972  79.4081  76.8603
Triangular Basis  94.4702  91.5904  92.7868  90.9132  86.0072
Radial Basis      91.4196  89.4101  90.8118  88.3591  84.7647
Sin               94.7819  93.8983  92.5275  91.4710  86.7897
Tanh              95.2882  92.5098  93.0693  93.2982  87.2104


Level           #Features  WL=4     WL=5     WL=6     WL=7     WL=8

DS Level 1      144        88.4039  87.1720  85.6288  85.6244  75.4647
DS Level 2      144        87.7539  75.8574  76.6913  71.4425  68.4587
DS Level 3      144        77.7181  62.6962  76.0299  72.1870  82.5840
DS Level 1-2    288        92.7996  92.2347  93.3671  90.8595  87.4835
DS Level 1-3    288        94.7280  90.2659  86.9406  91.0260  77.8372
DS Level 1-2-3  432        96.8950  94.5108  95.0123  93.2881  93.0385
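
As a rough illustration of the DS-1D-LBP idea behind these results: a 1D-LBP code compares each sample with its neighbours, and the code histograms of several down-sampled copies of the signal are concatenated. The sketch below assumes 4 neighbours on each side (8-bit codes, 256 bins) and that DS level k halves the sampling rate k-1 times; the paper's exact parameters may differ:

```python
import numpy as np

def lbp_1d(x, p=4):
    """1D local binary pattern: compare each sample with its p left and
    p right neighbours, form a 2p-bit code, and return the normalized
    histogram of codes. p=4 (8-bit codes, 256 bins) is an assumption."""
    x = np.asarray(x, dtype=float)
    weights = 2 ** np.arange(2 * p)
    codes = []
    for i in range(p, len(x) - p):
        neigh = np.concatenate([x[i - p:i], x[i + 1:i + p + 1]])
        codes.append(int(np.dot((neigh >= x[i]).astype(int), weights)))
    hist = np.bincount(codes, minlength=2 ** (2 * p))
    return hist / max(len(codes), 1)

def downsample(x, factor, fn=np.mean):
    """Down-sample by applying fn (np.min/np.max/np.mean/np.median)
    over consecutive blocks of `factor` samples."""
    x = np.asarray(x, dtype=float)
    n = (len(x) // factor) * factor
    return fn(x[:n].reshape(-1, factor), axis=1)

def ds_lbp_features(x, levels=(1, 2, 3), fn=np.mean):
    """Concatenate 1D-LBP histograms computed at each down-sampling
    level (level k assumed to halve the rate k-1 times)."""
    return np.concatenate([lbp_1d(downsample(x, 2 ** (k - 1), fn))
                           for k in levels])
```

Swapping `fn` between `np.min`, `np.max`, `np.mean`, and `np.median` corresponds to the four down-sampling functions compared in the tables above.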


#Neuron  WL=4     WL=5     WL=6     WL=7     WL=8

10       80.6243  75.2471  78.8988  74.1198  68.5632
20       88.5468  86.0491  87.8089  86.3738  78.0532
30       93.3109  89.9658  91.1148  90.8623  83.6281
40       95.2906  91.0861  92.2973  91.7096  86.1761
50       95.5529  92.2754  93.4080  92.9924  88.2023
60       96.1933  93.0545  94.1879  93.2987  85.5622
70       96.3793  92.2874  94.1654  92.8061  86.1216
80       96.3931  91.9582  94.8550  93.6267  86.5479
90       96.4176  91.7484  94.1624  93.0818  88.7497
100      96.8950  94.5108  95.0123  93.2881  93.0385


Model  WL=4     WL=5     WL=6     WL=7     WL=8

ELM    96.8950  94.5108  95.0123  93.2881  93.0385
RF     95.0000  93.6111  95.5556  93.3889  93.0111
MLP    77.7778  70.2778  74.7222  76.3889  74.3889
Knn    93.8889  92.5000  94.7222  93.8889  92.2222
SVM    78.6111  71.6667  76.1110  70.8333  72.6330
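
The ELM classifier compared above [41, 43] trains in closed form: input weights are random and fixed, and only the output weights are learned by least squares. A minimal sketch, assuming a sigmoid activation and one-hot class targets:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=100):
    """Train a single-hidden-layer ELM: input weights and biases are
    random and fixed; the output weights are solved in closed form via
    the pseudoinverse of the hidden-layer output matrix."""
    T = np.eye(int(y.max()) + 1)[y]              # one-hot target matrix
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))       # sigmoid hidden layer
    beta = np.linalg.pinv(H) @ T                 # least-squares solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return (H @ beta).argmax(axis=1)
```

With a sigmoid activation and 100 hidden neurons this mirrors the best-performing configuration in the tables above; the input scaling and encoding details are assumptions.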


Class               Precision  Recall  F-score  Accuracy

Walking Downstairs  0.921      0.972   0.946    97.22
Jogging             0.959      0.979   0.969    97.91
Sitting             0.920      0.958   0.939    95.83
Standing            0.920      0.958   0.939    95.83
Walking Upstairs    0.921      0.972   0.946    97.22
Walking             0.921      0.972   0.946    97.22
Averages            0.927      0.966   0.967    96.87


Feature Groups    #Features  Accuracy

  Time Domain     144        85.21
Frequency Domain  144        91.27
   1D-LBP         144        88.40
  DS-1D-LBP       432        96.87


WL    #Features  Train Accuracy  Test Accuracy

WL=4  69         97.9940         88.8219
WL=5  65         98.5803         81.3047
WL=6  75         96.5122         73.3844
WL=7  61         98.3948         92.7911
WL=8  54         98.8272         70.8539
COPYRIGHT 2019 Stefan cel Mare University of Suceava

Article Details
Author: Kuncan, Fatma; Kaya, Yilmaz; Kuncan, Melih
Publication: Advances in Electrical and Computer Engineering
Article Type: Case study
Date: Feb 1, 2019