
Music Emotion Detection Using Hierarchical Sparse Kernel Machines

1. Introduction

Listening to music plays an important role in daily life, and people usually gain much benefit from it. Beyond leisure, music listening has other application areas such as education, inspiration, therapy, and marketing [1]. Sometimes people listen to music in order to reach a particular emotional state. In such situations, they need to choose music that evokes that particular feeling, and they must listen to each song at least once to know its emotional content, which takes considerable time. If a computer can detect the emotional content of music, this problem is solved. Beyond this application, music emotion detection technology can also be applied to other areas, such as music research, music recommendation, and music retrieval. Because of this broad potential, many researchers focus on detecting emotion in music.

Many studies on music emotion detection have been proposed [2]. Existing methods can be divided into two main categories: the dimensional approach and the categorical approach. The dimensional approach defines an emotion plane and views it as a continuous emotion state space; each position on the plane represents an emotion state [3], and acoustical features can be mapped to a point on the plane [4]. The categorical approach works by categorizing emotions into a number of emotion classes, each representing an area on the emotion plane [3]. Unlike the dimensional approach, each emotion class is defined explicitly. In the training phase, acoustical features are used directly to train classifiers to recognize the corresponding emotion classes [5]. The method proposed in this paper belongs to the second type.

In previous music emotion detection studies, many machine learning algorithms have been applied. In [5], features were mapped into emotion categories on the emotion plane, and two support vector regressors were trained to predict arousal and valence values. In [6], a hierarchical framework was adopted to detect emotion from acoustic music data; the method has the advantage of emphasizing the proper features in different detection tasks. In [7], a support vector machine (SVM) was applied to detect emotional content in music. In [8], kernel-based class separability was used to weight features; after feature selection, principal component analysis and linear discriminant analysis were applied, followed by a k-nearest neighbor (KNN) classifier. In this paper, a music emotion detection system is proposed. The system establishes a hierarchical sparse kernel machine. In the first level, eight 2-class SVM models are trained, with the eight emotion classes as the target sides, respectively. It is noted that emotion perception is usually based not on a single acoustical feature but on a combination of acoustical features [4, 9]. This paper adopts an acoustical feature set comprising root mean square energy (RMS energy), tempo, chromagram, MFCCs, spectrum centroid, spectrum spread, and the ratio of a spectral flatness measure to a spectral center (RSS). Each feature is normalized. In the second level of the hierarchical sparse kernel machines, a 2-class relevance vector machine (RVM) model is trained with happiness as the target side and the other emotions as the background side. The first-level decision vector is used as the feature at this level.

The rest of this paper is organized as follows. The system overview is described in Section 2. The features and first-level decision vector extraction are described in Section 3. Principal component analysis is described in Section 4. SVM and RVM are introduced in Section 5. Section 6 shows our experimental results. The conclusion is given in Section 7.

2. System Overview

The block diagram of the proposed system is presented in Figure 1. The system mainly comprises two levels of sparse kernel machines. For the first-level SVMs, we use a set of acoustical features that includes RMS energy, tempo, chromagram, MFCCs, spectrum centroid, spectrum spread, and RSS. In Table 1, these acoustical features are classified into four main types: dynamic, rhythm, timbre, and tonality. Because each feature's scale is different, the whole feature set is normalized [10]. After normalization, eight SVM models are trained to transform the acoustical features into emotion profile features. Each of the eight SVM models is trained and tested using the probability product kernel. We use the first-level decision vectors generated from the angry, happy, sad, relaxed, pleased, bored, nervous, and peaceful emotion classes. For each emotion, to calculate the corresponding value in the emotion profile features, we construct its 2-class SVM with the calm emotion as the background side. For the RVM, a conventional radial basis function kernel is used, and the first-level decision vector extracted in the first level is utilized as the feature. To verify the happiness emotion, a 2-class RVM is constructed with happiness as the target side and the other emotions as the background side. For a tested music clip, the probability value obtained from this 2-class RVM is used to judge whether the clip belongs to the happiness emotion.

3. Extraction of Acoustical Feature and First Level Decision Value Vector Feature

In the two-level hierarchical sparse kernel machines, the first-level SVMs use acoustical features, while the second-level RVM adopts the first-level decision vector. For the acoustical features, the proposed system extracts RMS energy, tempo, chromagram, MFCCs, spectrum centroid, spectrum spread, and RSS. The extraction of these acoustical features, as well as of the first-level decision vector, is described in the following.

3.1. Extraction of Acoustical Feature

3.1.1. RMS Energy. Root mean square (RMS) energy computes the global energy of an input signal x [11]. The operation is defined as

$$x_{\mathrm{RMS}} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^{2}}, \qquad (1)$$

where n is the signal length (in hundredths of a second by default).
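As a minimal illustration, Eq. (1) can be computed directly with NumPy (a sketch; framing and windowing are left to the caller):

```python
import numpy as np

def rms_energy(x):
    """Root mean square energy of a signal, per Eq. (1)."""
    x = np.asarray(x, dtype=float)
    return float(np.sqrt(np.mean(x ** 2)))

# A square wave of amplitude 0.5 has RMS energy exactly 0.5.
print(rms_energy([0.5, -0.5, 0.5, -0.5]))  # 0.5
```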

3.1.2. Tempo. Many tempo estimation methods have been proposed. Tempo estimation is based on detecting periodicities within a range of beats per minute (BPM) [12]. First, significant onset events are detected in the frequency domain [11]. Then the events that best represent the tempo of the song are found, which means choosing the maximum periodicity score for each frame separately.

3.1.3. Chromagram. Chroma, which is also called the harmonic pitch class profile, has a strong relationship with the structure of music [13]. A chromagram is a joint distribution of signal strength over the variables of time and chroma. Chroma is a frame-based representation of audio, similar to the short-time Fourier transform. In music clips, frequency components belonging to the same pitch class are extracted by the chromagram and transformed into a 12-dimensional representation covering C, C#, D, D#, E, F, F#, G, G#, A, A#, and B. The chromagram presents the distribution of energy along the pitches or pitch classes [11, 14]. In [14], the chromagram is defined as a remapping of the time-frequency distribution. The chromagram is extracted by

$$C_t(k) = \frac{1}{\|Q_k\|} \sum_{n \in Q_k} X_t(n), \qquad (2)$$

where $X_t(n)$ is the logarithmic magnitude of the discrete Fourier transform of the t-th frame, and $\|Q_k\|$ is the number of elements in $Q_k$, a subset of the discrete frequency space for each pitch class k [15].

In Figure 2, the chromagram from a piece of music is exemplified.
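The folding of spectral energy into the 12 pitch classes can be sketched as follows; the frame length, sampling rate, and nearest-semitone bin assignment below are illustrative assumptions, not the exact mapping of [14]:

```python
import numpy as np

def chroma_frame(frame, sr, ref=440.0):
    """Fold the magnitude spectrum of one frame into 12 pitch classes.
    Each DFT bin is assigned to the pitch class of its nearest
    equal-tempered semitone (A4 = `ref` Hz by assumption)."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    chroma = np.zeros(12)
    for f, mag in zip(freqs[1:], spectrum[1:]):  # skip the DC bin
        semitone = int(round(12 * np.log2(f / ref)))  # distance from A4
        chroma[(semitone + 9) % 12] += mag  # rotate so index 0 is C
    return chroma

sr = 8000
t = np.arange(2048) / sr
a4 = np.sin(2 * np.pi * 440.0 * t)  # a pure A note
print(np.argmax(chroma_frame(a4, sr)))  # 9, the index of pitch class A
```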

3.1.4. Mel-Frequency Cepstral Coefficients (MFCCs). After a signal is digitized, much of its information is not needed yet costs plenty of storage space. The power spectrum is often adopted to encode the signal and address this problem [16]. It is noted that MFCC extraction behaves similarly to the human auditory perception system. The feature is adopted in various research topics, including speaker recognition, speech recognition, and music emotion recognition. For example, Cooper and Foote extracted MFCCs from music signals and found that MFCCs are close to music timbre expression [17]. In [18], MFCCs were also shown to perform well in music recommendation.

MFCC extraction is based on the spectrum, which can be obtained using the discrete Fourier transform:

$$X_w(f) = \sum_{n=0}^{N-1} x_w(n)\, e^{-j 2\pi f n / N}. \qquad (3)$$

After the power spectrum is extracted, subband energies are obtained using Mel filter banks, and the logarithm of each energy is then evaluated:

$$E(i) = \ln\!\left(\sum_{f=F_l}^{F_h} |X_w(f)|^{2}\, L(i, f)\right), \qquad (4)$$

where $F_h$ is the discrete frequency index corresponding to the high cutoff frequency, $F_l$ is the discrete frequency index corresponding to the low cutoff frequency, and $L(i, f)$ is the amplitude of the f-th discrete frequency index of the i-th Mel window. The number of Mel windows often ranges from 20 to 24. Finally, the MFCCs are obtained by performing a discrete cosine transform (DCT) on the log energies [19]. In Figure 3, the average MFCC values from a piece of music are exemplified.
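A compact sketch of the MFCC pipeline described above (power spectrum, triangular Mel filter bank, log energies, DCT); the filter count of 22 and the triangular filter shape are assumptions consistent with the 20-24 range quoted in the text:

```python
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_frame(frame, sr, n_filters=22, n_coeffs=13):
    """MFCCs of one frame: power spectrum -> triangular Mel filter
    bank (Eqs. (3)-(4)) -> log energies -> DCT."""
    power = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    # Filter centre frequencies are equally spaced on the Mel scale.
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_filters + 2)
    centres = mel_to_hz(mels)
    energies = np.empty(n_filters)
    for i in range(n_filters):
        lo, mid, hi = centres[i], centres[i + 1], centres[i + 2]
        rising = np.clip((freqs - lo) / (mid - lo), 0.0, None)
        falling = np.clip((hi - freqs) / (hi - mid), 0.0, None)
        weights = np.minimum(rising, falling)  # triangular window
        energies[i] = np.log(power @ weights + 1e-12)
    return dct(energies, type=2, norm='ortho')[:n_coeffs]

coeffs = mfcc_frame(np.random.default_rng(0).standard_normal(512), 16000)
print(coeffs.shape)  # (13,)
```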

3.1.5. Spectrum Centroid. Spectrum centroid is an economical description of the shape of the power spectrum [20-22]. Additionally, it is correlated with a major perceptual dimension of timbre, that is, sharpness. Figure 4 gives an example of a spectrum and its spectrum centroid obtained from a frame in a piece of music. The spectrum centroid value is 2638 Hz in this example.
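The spectral centroid is the magnitude-weighted mean frequency of a frame's spectrum; a minimal sketch:

```python
import numpy as np

def spectrum_centroid(frame, sr):
    """Magnitude-weighted mean frequency of one frame's spectrum."""
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    return float(np.sum(freqs * mag) / np.sum(mag))

sr = 8000
t = np.arange(2048) / sr
tone = np.sin(2 * np.pi * 1000.0 * t)  # exactly 256 cycles per frame
print(round(spectrum_centroid(tone, sr)))  # 1000
```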

3.1.6. Spectrum Spread. Spectrum spread is an economical descriptor of the shape of the power spectrum that indicates whether it is concentrated in the vicinity of its centroid or else spread out over the spectrum [20-22]. It allows differentiating between tone-like and noise-like sounds. In Figure 5, an example of spectrum spread from a piece of music is provided.
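Correspondingly, the spectral spread can be computed as the magnitude-weighted standard deviation of frequency around the centroid; tone-like frames give small values and noise-like frames large ones. A sketch:

```python
import numpy as np

def spectrum_spread(frame, sr):
    """Magnitude-weighted standard deviation of frequency around the
    spectral centroid of one frame."""
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    centroid = np.sum(freqs * mag) / np.sum(mag)
    return float(np.sqrt(np.sum(((freqs - centroid) ** 2) * mag) / np.sum(mag)))

sr = 8000
t = np.arange(2048) / sr
tone = np.sin(2 * np.pi * 1000.0 * t)
noise = np.random.default_rng(0).standard_normal(2048)
print(spectrum_spread(tone, sr) < spectrum_spread(noise, sr))  # True
```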

3.1.7. Ratio of a Spectral Flatness Measure to a Spectral Center (RSS). RSS was proposed for speaker-independent emotional speech recognition [23]. RSS is the ratio of the spectral flatness to the spectral centroid and is calculated by

$$\mathrm{RSS} = \frac{1000 \times \mathrm{SF}}{\mathrm{SC}}, \qquad (5)$$

where SF denotes the spectral flatness and SC denotes the spectral centroid.
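Eq. (5) can be sketched as below; the specific flatness definition (geometric over arithmetic mean of the power spectrum) is an assumption, since the text does not spell it out:

```python
import numpy as np

def rss(frame, sr, eps=1e-12):
    """RSS = 1000 * SF / SC, per Eq. (5). SF is taken here as the
    geometric-to-arithmetic mean ratio of the power spectrum
    (an assumed but common flatness definition)."""
    power = np.abs(np.fft.rfft(frame)) ** 2 + eps
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    sf = np.exp(np.mean(np.log(power))) / np.mean(power)
    sc = np.sum(freqs * power) / np.sum(power)
    return float(1000.0 * sf / sc)

sr = 8000
t = np.arange(2048) / sr
tone = np.sin(2 * np.pi * 1000.0 * t)
noise = np.random.default_rng(0).standard_normal(2048)
print(rss(noise, sr) > rss(tone, sr))  # True: noise is far flatter
```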

3.2. Extraction of First-Level Decision Vector. The acoustical feature set is utilized to generate the first-level decision vector, with each element being the significance value of an emotion. This approach can interpret emotional content by providing multiple probabilistic class labels rather than a single hard label [24]. For example, the happiness emotion not only contains happiness content but also properties similar to the content of peacefulness, and this similarity may cause a music clip to be recognized as an incorrect emotion class. In this example, the advantage of the first-level decision vector representation is its ability to convey the evidence of both the happiness and peaceful emotions. This paper uses the significance values of eight emotions (angry, happy, sad, relaxed, pleased, bored, nervous, and peaceful) to construct an emotion profile feature vector. To calculate the significance value of an emotion, we construct its 2-class SVM with the calm emotion as the background side.
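The construction of the eight emotion-versus-calm models can be sketched with scikit-learn as a stand-in; the synthetic data, the RBF kernel in place of the paper's probability product kernel, and all names below are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import SVC

EMOTIONS = ["angry", "happy", "sad", "relaxed",
            "pleased", "bored", "nervous", "peaceful"]

rng = np.random.default_rng(0)

def make_clips(centre, n=20, dim=30):
    """Stand-in 30-dimensional acoustical feature vectors."""
    return rng.normal(centre, 0.5, size=(n, dim))

calm = make_clips(0.0)  # background class shared by all eight models
models = {}
for k, emo in enumerate(EMOTIONS):
    target = make_clips(1.0 + 0.2 * k)  # hypothetical emotion cluster
    X = np.vstack([target, calm])
    y = np.array([1] * len(target) + [-1] * len(calm))
    # The RBF kernel stands in for the probability product kernel here.
    models[emo] = SVC(kernel="rbf").fit(X, y)

def decision_vector(clip):
    """First-level decision vector: one SVM decision value per emotion."""
    return np.array([models[e].decision_function(clip[None, :])[0]
                     for e in EMOTIONS])

print(decision_vector(np.ones(30)).shape)  # (8,)
```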

4. Principal Component Analysis

PCA is an important mathematical technique in feature extraction. In this paper, PCA is implemented to reduce the dimensionality of the extracted features. The first step of PCA is to calculate the d-dimensional mean vector u and the d x d covariance matrix [SIGMA] of the samples [25]. After that, the eigenvectors and eigenvalues of the covariance matrix are computed. Finally, the k eigenvectors with the largest eigenvalues are selected to form a d x k matrix M whose columns are these k eigenvectors; the discarded dimensions are regarded as noise. The PCA-transformed data take the form

$$x' = M^{T}(x - u). \qquad (6)$$
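A minimal NumPy sketch of the PCA projection of Eq. (6):

```python
import numpy as np

def pca_transform(X, k):
    """Project samples (rows of X) onto the k eigenvectors of the
    covariance matrix with the largest eigenvalues, per Eq. (6)."""
    u = X.mean(axis=0)
    cov = np.cov(X - u, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)          # ascending eigenvalues
    M = vecs[:, np.argsort(vals)[::-1][:k]]   # d x k, largest k
    return (X - u) @ M

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 30))
print(pca_transform(X, 5).shape)  # (50, 5)
```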

5. Emotion Classifier

The emotion classifier used in the proposed system adopts a two-level hierarchical structure of sparse kernel machines. The first-level SVMs use the probability product kernel, while the second-level RVM adopts a traditional radial basis function kernel with the first-level decision vector as its feature.

5.1. Support Vector Machine. SVM theory is an effective statistical technique and has drawn much attention in audio classification tasks [7]. An SVM is a binary classifier that creates an optimal hyperplane to separate input samples; this hyperplane linearly divides the two classes with the largest margin [23]. Denote by $T = \{(x_i, y_i),\ i = 1, 2, \ldots, N\}$ a training set for the SVM; each pair $(x_i, y_i)$ means that training sample $x_i$ belongs to class $y_i$, where $y_i \in \{+1, -1\}$. The fundamental concept is to choose a hyperplane that classifies T accurately while maximizing the distance between the two classes. This means finding a pair (w, b) such that

$$y_i(w \cdot x_i + b) > 0, \quad i = 1, \ldots, N, \qquad (7)$$

where $w \in \mathbb{R}^{N}$ is normalized by itself and $b \in \mathbb{R}$.

The pair (w, b) defines a separating hyperplane of equation

$$w \cdot x + b = 0. \qquad (8)$$

If there exists a hyperplane satisfying (7), the set T is said to be linearly separable, and we can rescale w and b so that

$$y_i(w \cdot x_i + b) \geq 1, \quad i = 1, \ldots, N. \qquad (9)$$

According to (9), we obtain a constrained objective function:

$$\min_{w, b}\ \|w\|^{2}$$
$$\text{subject to } y_i(w \cdot x_i + b) \geq 1, \quad i = 1, \ldots, N. \qquad (10)$$

Since $\|w\|^{2}$ is convex, we can solve (10) by the classical method of Lagrange multipliers:

$$\min_{w, b}\ \|w\|^{2} - \sum_{i=1}^{N} \mu_i \left[y_i(w \cdot x_i + b) - 1\right]. \qquad (11)$$

We denote by $U = (\mu_1, \mu_2, \ldots, \mu_N)$ the N nonnegative Lagrange multipliers associated with (10). After solving (11), the optimal hyperplane has the expansion

$$\bar{w} = \sum_{i=1}^{N} \mu_i y_i x_i. \qquad (12)$$

$\bar{b}$ can be determined from U and the Karush-Kuhn-Tucker conditions:

$$\mu_i \left(y_i(\bar{w} \cdot x_i + \bar{b}) - 1\right) = 0, \quad i = 1, 2, \ldots, N. \qquad (13)$$

According to (12), the expected hyperplane is a linear combination of the training samples. The training samples $(x_i, y_i)$ with nonzero Lagrange multipliers are called support vectors. Finally, the decision value for a new data point x can be written as

$$\mathrm{dec}(x) = \sum_{i=1}^{N} \mu_i y_i\, x_i \cdot x + \bar{b}. \qquad (14)$$

Functions that satisfy Mercer's theorem can be used as kernels. In this paper, the probability product kernel is adopted.

5.2. Probability Product Support Vector Machine. A function can serve as a kernel function if it satisfies Mercer's theorem. Using Mercer's theorem, we can introduce a mapping function $\phi(x)$ such that $k(x_j, x_i) = \phi(x_j) \cdot \phi(x_i)$. This provides the ability to handle nonlinear data by mapping the original input space $\mathbb{R}^{d}$ into another space.

In this paper, the probability product kernel is utilized. The probability product kernel measures the similarity between distributions and has the advantage of a simple and intuitively compelling conception [26]. It computes a generalized inner product between two probability distributions in a Hilbert space. A positive definite kernel $k: O \times O \to \mathbb{R}$ is defined on an input space O with examples $o_1, o_2, \ldots, o_m \in O$. First, the input data are mapped to probability distributions by fitting separate probabilistic models $p_1(x), p_2(x), \ldots, p_m(x)$ to $o_1, o_2, \ldots, o_m$. After that, a kernel $k^{\mathrm{prob}}(p_i, p_j)$ between probability distributions on O is defined. Finally, the kernel between examples is defined to equal $k^{\mathrm{prob}}$ between the corresponding distributions:

$$k(o_i, o_j) = k^{\mathrm{prob}}(p_i, p_j). \qquad (15)$$

This kernel is then applied to the SVM, which proceeds as usual. The probability product kernel between distributions $p_i$ and $p_j$ is defined as

$$k^{\mathrm{prob}}(p_i, p_j) = \int_{O} p_i(x)^{\rho}\, p_j(x)^{\rho}\, dx, \qquad (16)$$

where $p_i$ and $p_j$ are probability distributions on a space O, assuming $p_i^{\rho}, p_j^{\rho} \in L_2(O)$, where $L_2$ is a Hilbert space and $\rho$ is a positive constant. The probability product kernel allows us to introduce prior knowledge of the data. In this paper, we assume a d-dimensional Gaussian distribution for our data.
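For one-dimensional Gaussians, Eq. (16) can be checked numerically; with $\rho = 1/2$ (the Bhattacharyya kernel) two identical distributions yield k = 1, and the value decays as the means separate. A sketch:

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def prob_product_kernel(mu1, mu2, sigma=1.0, rho=0.5):
    """Numerically evaluate k(p_i, p_j) = integral of p_i^rho * p_j^rho
    over the input space (Eq. (16)) for two 1-D Gaussians."""
    x = np.linspace(min(mu1, mu2) - 8 * sigma, max(mu1, mu2) + 8 * sigma, 20001)
    dx = x[1] - x[0]
    integrand = gaussian_pdf(x, mu1, sigma) ** rho * gaussian_pdf(x, mu2, sigma) ** rho
    return float(np.sum(integrand) * dx)  # simple Riemann sum

print(round(prob_product_kernel(0.0, 0.0), 4))  # 1.0
print(prob_product_kernel(0.0, 3.0) < prob_product_kernel(0.0, 1.0))  # True
```

For equal variances the closed form is $\exp(-(\mu_1 - \mu_2)^2 / 8\sigma^2)$, which the numerical integral reproduces.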

5.3. First-Level Decision Vector Extraction. The first-level decision vector represents the perception probability of each of the eight emotion-specific decisions; it is extracted from the input data by collecting the decision value of each model. The decision value of an SVM represents the degree of similarity between the model and the testing data, and this similarity measure can be used to find the model that fits the data most accurately [24]. Using the first-level decision vector, the most probably perceived emotion in music can be detected.

5.4. Relevance Vector Machine. The RVM is a development of the SVM. Unlike the SVM, the RVM seeks a set of weights with the highest sparsity [27]. The model defines a conditional distribution for the target class $y \in \{0, 1\}$ given an input set $\{x_1, \ldots, x_n\}$ [28]. The model is a linear combination of weighted nonlinear basis functions $\phi_i(x)$, which is then transformed by a logistic sigmoid function:

$$f(x; w) = w^{T}\phi(x), \qquad (17)$$

where $w = (w_1, w_2, \ldots, w_G)^{T}$ denotes the weights and $\phi(x) = (\phi_1(x), \phi_2(x), \ldots, \phi_G(x))^{T}$ the basis functions. To make the weights sparse, a Bayesian probabilistic framework is implemented to find a distribution over the weights instead of a pointwise estimate; a separate hyperparameter $a_i$ is therefore introduced for each weight parameter $w_i$. According to Bayes' rule, the posterior probability of w is

$$p(w \mid y, a) = \frac{p(y \mid w, a)\, p(w \mid a)}{p(y \mid a)}, \qquad (18)$$

where $p(y \mid w, a)$ is the likelihood, $p(w \mid a)$ is the prior conditioned on the hyperparameters $a = [a_1, \ldots, a_n]^{T}$, and $p(y \mid a)$ denotes the evidence. Because y is a binary variable, the likelihood function is given by

$$p(y \mid w) = \prod_{n=1}^{N} \sigma\bigl(f(x_n; w)\bigr)^{y_n} \bigl[1 - \sigma\bigl(f(x_n; w)\bigr)\bigr]^{1 - y_n}, \qquad (19)$$

where $\sigma(f) = 1/(1 + e^{-f})$ is the logistic sigmoid link function. During training, a significant proportion of the hyperparameters tend to infinity, and the posterior distributions of the corresponding weight parameters concentrate at zero. The basis functions multiplied by these weights are therefore pruned from the model, and the resulting model is sparse.
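The Bernoulli likelihood above is straightforward to evaluate for a given weight vector; a minimal sketch (the basis matrix Phi and targets y are illustrative):

```python
import numpy as np

def sigmoid(f):
    """Logistic sigmoid link function."""
    return 1.0 / (1.0 + np.exp(-f))

def log_likelihood(w, Phi, y):
    """log p(y | w) for binary targets under f(x; w) = w^T phi(x)."""
    p = sigmoid(Phi @ w)
    return float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

# With all weights zero, every prediction is 0.5, so the
# log-likelihood of N targets is N * log(0.5).
Phi = np.ones((10, 3))
y = np.array([1, 0] * 5)
print(log_likelihood(np.zeros(3), Phi, y))  # 10 * log(0.5), about -6.931
```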

6. Experimental Results

In the experiments, we collected one hundred songs from two websites, All Music Guide [29] and [30], to construct a music emotion database. As mentioned before, music may contain multiple emotions; if we know which emotion class a song most likely belongs to, we know the main emotion of the song. Songs on the website of [30] are tagged by many people on the Internet, and we take the emotion tagged by the most people as the ground truth.

The database consists of nine emotion classes: happy, angry, sad, bored, nervous, relaxed, pleased, calm, and peaceful. Calm is taken as each model's opposite side during training. Each emotion class contains twenty songs. Each song is thirty seconds long and is divided into five-second clips. Half of the songs are used as training data and the others as testing data; in total, 240 music clips are tested. All songs are Western music encoded in 16 kHz WAV format. The acoustical feature set used is listed in Table 1; its total dimension is 30. The SVMs are based on the LIBSVM library [31], and the RVM is based on the PRT toolbox [32]. The system performance is evaluated in terms of the DET curve. Figure 6 depicts the DET curve of the proposed happiness verification system, which achieves a 13.33% equal error rate (EER). These results show that the system performs well on happiness emotion verification in music.

7. Conclusion

Detecting emotion in music has attracted many researchers in recent years. In this paper, we proposed a first-level decision-vector-based music happiness detection system. The proposed system adopts a hierarchical structure of sparse kernel machines. First, eight SVM models are trained on acoustical features with the probability product kernel. Eight decision values are then extracted to construct the first-level decision vector feature. These eight decision values serve as a new feature to train and test a 2-class RVM with happiness as the target side. The probability value of the RVM is used to verify happiness content in music. Experimental results show that the proposed system achieves a 13.33% equal error rate (EER).

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


[1] R. E. Milliman, "Using background music to affect the behavior of supermarket shoppers," Journal of Marketing, vol. 46, no. 3, pp. 86-91, 1982.

[2] C.-H. Yeh, H.-H. Lin, and H.-T. Chang, "An efficient emotion detection scheme for popular music," in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS '09), pp. 1799-1802, Taipei City, Taiwan, May 2009.

[3] Y.-H. Yang, Y.-C. Lin, Y.-F. Su, and H. H. Chen, "A regression approach to music emotion recognition," IEEE Transactions on Audio, Speech and Language Processing, vol. 16, no. 2, pp. 448-457, 2008.

[4] Y. H. Yang and H. H. Chen, "Prediction of the distribution of perceived music emotions using discrete samples," IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 7, pp. 2184-2196, 2011.

[5] B. Han, S. Rho, R. B. Dannenberg, and E. Hwang, "SMERS: music emotion recognition using support vector regression," in Proceedings of the International Conference on Music Information Retrieval, Kobe, Japan, 2009.

[6] L. Lu, D. Liu, and H.-J. Zhang, "Automatic mood detection and tracking of music audio signals," IEEE Transactions on Audio, Speech and Language Processing, vol. 14, no. 1, pp. 5-18, 2006.

[7] C.-Y. Chang, C.-Y. Lo, C.-J. Wang, and P.-C. Chung, "A music recommendation system with consideration of personal emotion," in Proceedings of the International Computer Symposium (ICS '10), pp. 18-23, Tainan City, Taiwan, December 2010.

[8] F. C. Hwang, J. S. Wang, P. C. Chung, and C. F. Yang, "Detecting emotional expression of music with feature selection approach," in Proceedings of the International Conference on Orange Technologies (ICOT '13), pp. 282-286, March 2013.

[9] K. Hevner, "Expression in music: a discussion of experimental studies and theories," Psychological Review, vol. 42, no. 2, pp. 186-204, 1935.

[10] M. Chouchane, S. Paris, F. Le Gland, C. Musso, and D.-T. Pham, "On the probability distribution of a moving target. Asymptotic and non-asymptotic results," in Proceedings of the 14th International Conference on Information Fusion (Fusion '11), pp. 1-8, July 2011.

[11] O. Lartillot and P. Toiviainen, "MIR in Matlab (II): a toolbox for musical feature extraction from audio," in Proceedings of the International Conference on Music Information Retrieval, pp. 127-130, 2007.

[12] C.-W. Chen, K. Lee, and H.-H. Wu, "Towards a class-based representation of perceptual tempo for music retrieval," in Proceedings of the 8th International Conference on Machine Learning and Applications (ICMLA '09), pp. 602-607, December 2009.

[13] W. Chai, "Semantic segmentation and summarization of music," IEEE Signal Processing Magazine, vol. 23, no. 2, pp. 124-132, 2006.

[14] M. A. Bartsch and G. H. Wakefield, "Audio thumbnailing of popular music using chroma-based representations," IEEE Transactions on Multimedia, vol. 7, no. 1, pp. 96-104, 2005.

[15] X. Yu, J. Zhang, J. Liu, W. Wan, and W. Yang, "An audio retrieval method based on chromagram and distance metrics," in Proceedings of the International Conference on Audio, Language and Image Processing (ICALIP '10), pp. 425-428, Shanghai, China, November 2010.

[16] J. O. Garcia and C. A. R. Garcia, "Mel-frequency cepstrum coefficients extraction from infant cry for classification of normal and pathological cry with feed-forward neural networks," in Proceedings of the International Joint Conference on Neural Networks, pp. 3140-3145, July 2003.

[17] C. Y. Lin and S. Cheng, "Multi-theme analysis of music emotion similarity for jukebox application," in Proceedings of the International Conference on Audio, Language and Image Processing (ICALIP '12), pp. 241-246, July 2012.

[18] B. Shao, M. Ogihara, D. Wang, and T. Li, "Music recommendation based on acoustic features and user access patterns," IEEE Transactions on Audio, Speech, and Language Processing, vol. 17, pp. 1602-1611, 2009.

[19] W.-Q. Zhang, D. Yang, J. Liu, and X. Bao, "Perturbation analysis of mel-frequency cepstrum coefficients," in Proceedings of the International Conference on Audio, Language and Image Processing (ICALIP '10), pp. 715-718, Shanghai, China, November 2010.

[20] H. G. Kim, N. Moreau, and T. Sikora, MPEG-7 Audio and Beyond: Audio Content Indexing and Retrieval, Wiley, New York, NY, USA, 2005.

[21] ISO-IEC/JTC1 SC29 WG11 Moving Pictures Expert Group, "Information technology--multimedia content description interface--part 4: Audio," Committee Draft 15938-4, ISO/IEC, 2000.

[22] M. Casey, "MPEG-7 sound-recognition tools," IEEE Transactions on Circuits and Systems for Video Technology, vol. 11, no. 6, pp. 737-747, 2001.

[23] V. Vapnik, Statistical Learning Theory, Wiley, New York, NY, USA, 1998.

[24] E. Mower, M. J. Mataric, and S. Narayanan, "A framework for automatic human emotion classification using emotion profiles," IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 5, pp. 1057-1070, 2011.

[25] R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, New York, NY, USA, 2nd edition, 2001.

[26] T. Jebara, R. Kondor, and A. Howard, "Probability product kernels," Journal of Machine Learning Research, vol. 5, pp. 819-844, 2004.

[27] F. A. Mianji and Y. Zhang, "Robust hyperspectral classification using relevance vector machine," IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 6, pp. 2100-2112, 2011.

[28] C. M. Bishop, Pattern Recognition and Machine Learning (Information Science and Statistics), Springer, New York, NY, USA, 2nd edition, 2007.

[29] "The All Music Guide,"

[30] ","

[31] C. C. Chang and C. J. Lin, "LIBSVM: a library for support vector machines," 2001,

[32] "Pattern Recognition Toolbox,"

Yu-Hao Chin, Chang-Hong Lin, Ernestasia Siahaan, and Jia-Ching Wang

Department of Computer Science and Information Engineering, National Central University, Taoyuan 32001, Taiwan

Correspondence should be addressed to Jia-Ching Wang;

Received 30 August 2013; Accepted 17 October 2013; Published 3 March 2014

Academic Editors: B.-W. Chen, S. Liou, and C.-H. Wu

TABLE 1: The proposed acoustical feature set.

Feature class   Feature name (dimension of feature)

Dynamic         RMS energy (1)
Rhythm          Tempo (1)
Timbre          MFCCs (13), spectrum centroid (1), spectrum spread (1), RSS (1)
Tonality        Chromagram (12)
COPYRIGHT 2014 Hindawi Limited

Title Annotation:Research Article
Author:Chin, Yu-Hao; Lin, Chang-Hong; Siahaan, Ernestasia; Wang, Jia-Ching
Publication:The Scientific World Journal
Article Type:Report
Date:Jan 1, 2014