
A Novel Entropy-Based Decoding Algorithm for a Generalized High-Order Discrete Hidden Markov Model.

1. Introduction

The state sequence of a Hidden Markov Model (HMM) is not directly observable, but the most likely state sequence can be tracked from the model parameters and a given observational sequence. The restored state sequence has many applications, especially when the hidden states have meaningful interpretations for making predictions. For example, Ciriza et al. [1] determined an optimal printing rate based on the HMM parameters and an optimal time-out based on the restored states. The classical Viterbi algorithm is the most common technique for tracking the state sequence from a given observational sequence [2]. However, it does not measure the uncertainty present in the solution. Proakis and Salehi [3] proposed a method for measuring the error of a single state, but this method is unable to measure the error of the entire state sequence. Hernando et al. [4] proposed the use of entropy for measuring the uncertainty of the state sequence of a first-order HMM tracked from a single observational sequence of length T. Their method is based on the forward recursion algorithm integrated with entropy for computing the optimal state sequence. Mann and McCallum [5] developed an algorithm for computing subsequence constrained entropy of an HMM, which is similar to the probabilistic model of conditional random fields (CRF). Ilic [6] developed an algorithm based on forward-backward recursion over the entropy semiring, namely, the Entropy Semiring Forward-Backward (ESRFB) algorithm, for a first-order HMM with a single observational sequence. ESRFB has a lower memory requirement than Mann and McCallum's algorithm for subsequence constrained entropy computation.

This paper is organized as follows. In Section 2, we define the generalized high-order HMM (HHMM) and extend the entropy-based algorithm of Hernando et al. [4] for computing the optimal state sequence from a first-order HMM to a generalized HHMM. In Section 3, we first review the high-order transformation algorithm proposed by Hadar and Messer [7] and then introduce EOTFA, an entropy-based order-transformation forward algorithm for computing the optimal state sequence of any generalized HHMM. In Section 4, we discuss future research on the entropy associated with the state sequence of a generalized high-order HMM.

2. Entropy-Based Decoding Algorithm with an Extended Approach

The uncertainty appearing in a HHMM can be quantified by entropy. This concept is applied here to quantify the uncertainty of the state sequence tracked from a single observational sequence and the model parameters. The entropy of the state sequence equals 0 if there is only one possible state sequence that could have generated the observation sequence, as there is no uncertainty in the solution. The higher this entropy, the higher the uncertainty involved in tracking the hidden state sequence. We extend the entropy-based Viterbi algorithm developed by Hernando et al. [4] for computing the optimal state sequence from a first-order HMM to a high-order HMM, that is, a kth-order HMM with k ≥ 2. The state entropy in a HHMM is computed recursively in order to reduce the computational complexity from O(N^{kT}) for the direct evaluation method to O(TN^{k+1}), where N is the number of states, T is the length of the observational sequence, and k is the order of the Hidden Markov Model. In terms of memory space, the entropy-based Viterbi algorithm is also more efficient, requiring O(N^{k+1}) compared with O(TN^{k+1}) for the classical Viterbi algorithm. The memory requirement of the classical Viterbi algorithm depends on the length of the observational sequence because of the "backtracking" step used to recover the optimal state sequence.

Before introducing the extended entropy-based Viterbi algorithm, we define a generalized high-order HMM, that is, a kth-order HMM with k ≥ 2. This is followed by the definitions of the forward and backward probability variables for a generalized high-order HMM. These variables are required for computing the optimal state sequence in our decoding algorithm.

2.1. Elements of HHMM. A HHMM involves two stochastic processes, namely, a hidden state process and an observation process. The hidden state process cannot be observed directly; it can only be observed through the observation process. The observational sequence is generated by the observation process coupled with the hidden state process. A discrete HHMM must satisfy the following conditions.

The hidden state process {q_t}_{t=2-k}^{T} is a kth-order Markov chain that satisfies

P(q_t | {q_l}_{l<t}) = P(q_t | {q_l}_{l=t-k}^{t-1}), (1)

where q_t denotes the hidden state at time t and q_t ∈ S, where S is the finite set of hidden states.

The observation process {o_t}_{t=1}^{T} is coupled with the hidden state process according to the state probability distribution and satisfies

P(o_t | {o_l}_{l<t}, {q_l}_{l≤t}) = P(o_t | {q_l}_{l=t-k+1}^{t}), (2)

where o_t denotes the observation at time t and o_t ∈ V, where V is the finite set of observation symbols.

The elements for the kth-order discrete HMM are as follows:

(i) Number of distinct hidden states, N

(ii) Number of distinct observed symbols, M

(iii) Length of observational sequence, T

(iv) Observational sequence, O = {o_t, t = 1, 2, ..., T}

(v) Hidden state sequence, Q = {q_t, t = 2 - k, ..., T}

(vi) Possible values for each state, S = {s_i, i = 1, 2, ..., N}

(vii) Possible symbols per observation, V = {v_w, w = 1, 2, ..., M}

(viii) Initial hidden state probability vector, [mathematical expression not reproducible]

where [mathematical expression not reproducible] is the probability that the model will transit from state [mathematical expression not reproducible],

[mathematical expression not reproducible] (3)

[mathematical expression not reproducible] is the probability that the model will transit from state [mathematical expression not reproducible] and state [mathematical expression not reproducible],

[mathematical expression not reproducible], (4)

[mathematical expression not reproducible] is the probability that the model will transit from state [mathematical expression not reproducible], state [mathematical expression not reproducible], ..., and state [mathematical expression not reproducible],

[mathematical expression not reproducible] (5)

(ix) State transition probability matrix, [mathematical expression not reproducible],

where [A.sub.j-1] is the j-dimensional state transition probability matrix and [mathematical expression not reproducible] is the probability of a transition to state [mathematical expression not reproducible] given that it has had a transition from state [mathematical expression not reproducible] to state [mathematical expression not reproducible] to ... and to state [mathematical expression not reproducible], where j = 2, ..., k + 1,

[mathematical expression not reproducible] (6)

(x) Emission probability matrix, [mathematical expression not reproducible],

where [B.sub.1] is the two-dimensional emission probability matrix and [mathematical expression not reproducible] is the probability of observing [v.sub.m] in state [mathematical expression not reproducible],

[mathematical expression not reproducible], (7)

where [B.sub.j] is the (j + 1)-dimensional emission probability matrix and [mathematical expression not reproducible] is the probability of observing [v.sub.m] in state [mathematical expression not reproducible] at time t - j + 1, [mathematical expression not reproducible] at time t - j + 2, ..., and [mathematical expression not reproducible] at time t, where j = 2, ..., k,

[mathematical expression not reproducible] (8)

For the kth-order discrete HMM, we summarize the parameters by using the components of [mathematical expression not reproducible].

Note that throughout this paper, we will use the following notations.

(i) [q.sub.1:t] denotes [q.sub.1], [q.sub.2], ..., [q.sub.t]

(ii) [o.sub.1:t] denotes [o.sub.1], [o.sub.2], ..., [o.sub.t]

2.2. Forward and Backward Probability. The entropy-based algorithm proposed by Hernando et al. [4] for computing the optimal state sequence of a first-order HMM is built on the forward recursion process. Recently, high-order HMMs have been widely used in a variety of applications such as speech recognition [8, 9] and longitudinal data analysis [10, 11]. For the HHMM, the Markov assumption is weakened since the next state depends not only on the current state but also on earlier states; the depth of this dependency is determined by the order of the HMM. Hence we modify the classical forward and backward probability variables for the HHMM, that is, the kth-order HMM with k ≥ 2, as follows.

Definition 1. The forward variable α_t(i_2, i_3, ..., i_{k+1}) in the kth-order HMM is the joint probability of the partial observation sequence o_1, o_2, ..., o_t and the hidden states s_{i_2} at time t - k + 1, s_{i_3} at time t - k + 2, ..., and s_{i_{k+1}} at time t, where 1 ≤ t ≤ T. It can be denoted as

α_t(i_2, i_3, ..., i_{k+1}) = P(o_1, o_2, ..., o_t, q_{t-k+1} = s_{i_2}, q_{t-k+2} = s_{i_3}, ..., q_t = s_{i_{k+1}} | λ). (9)

From (9), t = 1 and 1 [less than or equal to] [i.sub.2], [i.sub.3], ..., [i.sub.k+1] [less than or equal to] N, we obtain the initial forward variable as

[mathematical expression not reproducible]. (10)

From (9), (10), and 1 [less than or equal to] [i.sub.1], [i.sub.2], ..., [i.sub.k], [i.sub.k+1] [less than or equal to] N, we obtain the recursive forward variable for t = 2, ..., T,

[mathematical expression not reproducible]. (11)

Definition 2. The backward probability variable β_t(i_1, i_2, ..., i_k) in the kth-order HMM is the conditional probability of the partial observation sequence o_{t+1}, o_{t+2}, ..., o_T given the hidden states s_{i_1} at time t - k + 1, s_{i_2} at time t - k + 2, ..., and s_{i_k} at time t. It can be denoted as

β_t(i_1, i_2, ..., i_k) = P(o_{t+1}, o_{t+2}, ..., o_T | q_{t-k+1} = s_{i_1}, q_{t-k+2} = s_{i_2}, ..., q_t = s_{i_k}, λ), (12)

where 1 [less than or equal to] t [less than or equal to] T, 1 [less than or equal to] [i.sub.1], [i.sub.2], ..., [i.sub.k] [less than or equal to] N.

We obtain the initial backward probability variable as

β_T(i_1, i_2, ..., i_k) = 1. (13)

From (12) and (13), we obtain the recursive backward probability variable for t = 1, 2, ..., T - 1,

[mathematical expression not reproducible]. (14)

The probability of the observational sequence given the model parameter for the first-order HMM can be represented by using the classical forward probability and backward probability variables [2]. We extend it to HHMM by using our modified forward probability and backward probability variables. The proof is due to Rabiner [2].

Definition 3. Let α_t(i_1, i_2, ..., i_k) and β_t(i_1, i_2, ..., i_k) be the forward and backward probability variables, respectively; then P(O | λ) can be expressed in terms of the forward and backward probability variables as

P(O | λ) = P(o_1, ..., o_T | λ) = Σ_{i_1=1}^{N} Σ_{i_2=1}^{N} ... Σ_{i_k=1}^{N} α_t(i_1, i_2, ..., i_k) β_t(i_1, i_2, ..., i_k). (15)

Proof.

[mathematical expression not reproducible]. (16)

We now normalize both of the forward and backward probability variables. These normalized variables are required as the intermediate variables for the algorithm of state entropy computation.

Definition 4. The normalized forward probability variable α̂_t(i_2, i_3, ..., i_{k+1}) in the kth-order HMM is defined as the probability of the hidden states s_{i_2} at time t - k + 1, s_{i_3} at time t - k + 2, ..., and s_{i_{k+1}} at time t given the partial observation sequence o_1, o_2, ..., o_t, where 1 ≤ t ≤ T:

α̂_t(i_2, i_3, ..., i_{k+1}) = P(q_{t-k+1} = s_{i_2}, q_{t-k+2} = s_{i_3}, ..., q_t = s_{i_{k+1}} | o_1, o_2, ..., o_t). (17)

From (10), (17), t = 1, and 1 [less than or equal to] [i.sub.1], [i.sub.2], ..., [i.sub.k] [less than or equal to] N, we obtain the initial normalized forward probability variable as

[mathematical expression not reproducible], (18)

where

[mathematical expression not reproducible]. (19)

From (11), (17), (18), and t = 2, ..., T, 1 [less than or equal to] [i.sub.1], [i.sub.2], ..., [i.sub.k], [i.sub.k+1] [less than or equal to] N, we obtain the recursive normalized forward probability variable as

[mathematical expression not reproducible], (20)

where

[mathematical expression not reproducible]. (21)

Note that the normalization factor [r.sub.t] ensures that the probabilities sum to one and it also represents the conditional observational probability [2].

Definition 5. The normalized backward probability variable β̂_t(i_1, i_2, ..., i_k) in the kth-order HMM is defined as the quotient of the conditional probability of the partial observation sequence o_{t+1}, o_{t+2}, ..., o_T given the hidden states s_{i_1} at time t - k + 1, s_{i_2} at time t - k + 2, ..., and s_{i_k} at time t, and the conditional probability of the partial observation sequence o_{t+1}, o_{t+2}, ..., o_T given the partial observation sequence o_1, o_2, ..., o_t. It can be denoted as

[mathematical expression not reproducible], (22)

where 1 [less than or equal to] t [less than or equal to] T, 1 [less than or equal to] [i.sub.1], [i.sub.2], ..., [i.sub.k] [less than or equal to] N

From (14) and (22), we obtain the recursive normalized backward probability variable as

[mathematical expression not reproducible], (23)

where

[mathematical expression not reproducible]. (24)

Our extended algorithm uses the normalized forward recursion given by (18) and (20). For the kth-order HMM it requires O(TN^{k+1}) calculations, whether we use the normalized forward recursion given by (18) and (20) or the normalized backward recursion given by (13) and (23). The direct evaluation method, in comparison, requires O(N^{T+k-1}) calculations, where N is the number of states, T is the length of the observational sequence, and k is the order of the Hidden Markov Model.
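To make the recursion concrete, the following is a minimal Python sketch of one normalized forward step for the second-order case (k = 2). It is our own illustration, not code from the paper: the NumPy array layout, the function name, and the index conventions are assumptions we introduce, and the initialization step (18)-(19) is assumed to be done elsewhere.

```python
import numpy as np

def normalized_forward_step_k2(alpha_prev, A2, B2, o_t):
    """One normalized forward step for a second-order HMM (k = 2), in the
    spirit of (20)-(21).  alpha_prev[i, j] ~ P(q_{t-2} = s_i, q_{t-1} = s_j | o_{1:t-1});
    A2[i, j, l] = P(q_t = s_l | q_{t-2} = s_i, q_{t-1} = s_j);
    B2[j, l, m] = P(o_t = v_m | q_{t-1} = s_j, q_t = s_l);
    o_t is the index of the observation symbol at time t."""
    N = alpha_prev.shape[0]
    alpha = np.zeros((N, N))
    for j in range(N):
        for l in range(N):
            # Sum out the oldest state i: O(N) work for each of the N^2
            # pairs (j, l), i.e., O(N^3) = O(N^{k+1}) per time step.
            alpha[j, l] = (alpha_prev[:, j] * A2[:, j, l]).sum() * B2[j, l, o_t]
    r_t = alpha.sum()              # normalization factor, cf. (21)
    return alpha / r_t, r_t
```

Iterating this step over t = 2, ..., T gives the O(TN^{k+1}) operation count quoted above while storing only the current N x N array.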

2.3. The Algorithm by Hernando et al. Hernando et al. [4] pioneered the use of entropy for computing the optimal state sequence of a first-order HMM with a single observational sequence. Their algorithm is based on the first-order HMM normalized forward probability,

[[??].sub.t](j) = P([q.sub.t] = [s.sub.j] | [o.sub.1], [o.sub.2], ..., [o.sub.t]), (25)

auxiliary probability,

P([q.sub.t-1] = [s.sub.i] | [q.sub.t] = [s.sub.j], [o.sub.1:t]), (26)

and intermediate entropy,

[H.sub.t]([s.sub.j]) = H ([q.sub.1:t-1] | [q.sub.t] = [s.sub.j], [o.sub.1:t]). (27)

The entropy-based algorithm for computing the optimal state sequence of a first-order HMM is as follows [4].

(1) Initialization. For t = 1 and 1 [less than or equal to] j [less than or equal to] N,

[mathematical expression not reproducible]. (28)

(2) Recursion. For t = 2, ..., T - 1, and 1 [less than or equal to] j [less than or equal to] N,

[mathematical expression not reproducible]. (29)

(3) Termination

[mathematical expression not reproducible]. (30)

This algorithm performs the computation linearly with respect to the length of the observation sequence, with computational complexity O(TN^2). It requires O(N^2) memory space, which is independent of the length of the observational sequence.
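The three steps (28)-(30) translate directly into a short routine. The sketch below is ours, not the authors' implementation; the NumPy representation, the function name, and the handling of the convention 0 log 0 = 0 through a where-guard are assumptions introduced for illustration.

```python
import numpy as np

def entropy_forward_first_order(pi, A, B, obs):
    """Entropy-based forward recursion for a first-order HMM, in the spirit
    of Hernando et al.'s steps (28)-(30).  pi: (N,) initial probabilities;
    A[i, j] = P(q_t = s_j | q_{t-1} = s_i); B[j, m] = P(o_t = v_m | q_t = s_j);
    obs: list of observation symbol indices.  Returns H(q_{1:T} | o_{1:T}) in bits."""
    # (1) Initialization: normalized forward variable and zero entropies.
    alpha = pi * B[:, obs[0]]
    alpha = alpha / alpha.sum()
    H = np.zeros(len(pi))                      # H_1(j) = 0 for every state j
    for o in obs[1:]:
        # Auxiliary probabilities P(q_{t-1} = s_i | q_t = s_j, o_{1:t}), cf. (26).
        joint = alpha[:, None] * A             # joint[i, j] = alpha_{t-1}(i) a_{ij}
        col = joint.sum(axis=0)
        p_prev = joint / np.where(col > 0, col, 1.0)
        # (2) Recursion for the intermediate entropies H_t(j), cf. (27).
        log_p = np.log2(np.where(p_prev > 0, p_prev, 1.0))   # 0 log 0 := 0
        H = (p_prev * (H[:, None] - log_p)).sum(axis=0)
        # Normalized forward recursion, cf. (25).
        alpha = col * B[:, o]
        alpha = alpha / alpha.sum()
    # (3) Termination: entropy of the full state-sequence distribution.
    log_a = np.log2(np.where(alpha > 0, alpha, 1.0))
    return float((alpha * (H - log_a)).sum())
```

Only the current alpha and H vectors are kept between iterations, which matches the O(N^2) memory behaviour noted above (the N^2 factor comes from the joint array rebuilt at each step).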

2.4. The Computation of the Optimal State Sequence for a HHMM. The extended classical Viterbi algorithm is commonly used for computing the optimal state sequence for HHMM. This algorithm provides the solution along with its likelihood. This likelihood probability can be determined as follows.

P(q_1, q_2, ..., q_T | o_1, o_2, ..., o_T) = P(q_1, q_2, ..., q_T, o_1, o_2, ..., o_T) / P(o_1, o_2, ..., o_T). (31)

This probability can be used as a measure of quality of the solution. The higher the probability of our "solution," the better our "solution." Entropy can also be used for measuring the quality of the state sequence of the kth-order HMM. Hence, state entropy is proposed to be used for obtaining the optimal state sequence of a HHMM.

We define entropy of a discrete random variable as follows [12].

Definition 6. The entropy H(X) of a discrete random variable X with a probability mass function P(X = x) is defined as

H(X) = -[summation over (x[member of]X)]P(x) [log.sub.2]P(x). (32)

When the log has a base of 2, the unit of the entropy is bits. Note that 0 log 0 = 0.

From (32), the entropy of the distribution for all possible state sequences is as follows:

H(q_{1:T} | o_{1:T}) = -Σ_{q_{1:T}} P(q_{1:T} | o_{1:T}) log_2 P(q_{1:T} | o_{1:T}). (33)

For the first-order HMM, if all N^T possible state sequences are equally likely to generate a single observational sequence of length T, then the entropy equals T log_2 N. The entropy is kT log_2 N in the kth-order HMM if all N^{kT} possible state sequences are equally likely to produce the observational sequence.

For this extended algorithm, we require an intermediate state entropy variable, [mathematical expression not reproducible] that can be computed recursively using the previous variable, [mathematical expression not reproducible].

We define the state entropy variable for the kth-order HMM as follows.

Definition 7. The state entropy variable, [mathematical expression not reproducible], in the kth-order HMM is the entropy of all the state sequences that lead to state of [mathematical expression not reproducible] at time t - k + 1, [mathematical expression not reproducible] at time t - k + 2, ..., and [mathematical expression not reproducible] at time t, given the observation sequence [o.sub.1], [o.sub.2], ..., [o.sub.t]. It can be denoted as

[mathematical expression not reproducible]. (34)

We analyse the state entropy for the kth-order HMM in detail, shown as follows.

From (34) and t = 1, we obtain the initial state entropy variable as

[mathematical expression not reproducible]. (35)

From (34) and (35) we obtain the recursion on the entropy for t = 2, ..., T, and 1 [less than or equal to] [i.sub.1], [i.sub.2], ..., [i.sub.k+1] [less than or equal to] N,

[mathematical expression not reproducible], (36)

where

[mathematical expression not reproducible]. (37)

The auxiliary probability [mathematical expression not reproducible] is required for our extended entropy-based algorithm. It can be computed as follows:

[mathematical expression not reproducible]. (38)

For the final process of our extended algorithm, we are required to compute the conditional entropy H([q.sub.1:T] | [o.sub.1:T]) which can be expanded as follows:

[mathematical expression not reproducible]. (39)

The following basic properties of HMM and entropy are used for proving Lemma 8.

(i) According to the generalized high-order HMM, the states q_{t-k-j+1}, j ≥ 2, and q_t are statistically independent given q_{t-k}, q_{t-k+1}, q_{t-k+2}, ..., q_{t-1}. Similarly, q_{t-k-j+1}, j ≥ 2, and o_t are statistically independent given q_{t-k}, q_{t-k+1}, q_{t-k+2}, ..., q_{t-1}.

(ii) According to the basic property of entropy [12],

H(X | Y = y) = H(X)

if X and Y are independent. (40)

We now introduce the following lemma for the kth-order HMM. The proof follows Hernando et al. [4].

Lemma 8. For the kth-order HMM, the entropy of the state sequence up to time t - k - 1, given the states from time t - k to time t - 1 and the observations up to time t - 1, is conditionally independent of the state and observation at time t:

[mathematical expression not reproducible]. (41)

Proof.

[mathematical expression not reproducible]. (42)

Our extended entropy-based algorithm for computing the optimal state sequence is based on normalized forward recursion variable, state entropy recursion variable, and auxiliary probability. From (18), (20), (35), (36), (38), and (39), we construct the extended entropy-based decoding algorithm for the kth-order HMM as follows:

(1) Initialization. For t = 1 and 1 [less than or equal to] [i.sub.2], [i.sub.3], ..., [i.sub.k+1] [less than or equal to] N,

[mathematical expression not reproducible]. (43)

(2) Recursion. For t = 2, ..., T - 1, and 1 [less than or equal to] [i.sub.1], [i.sub.2], ..., [i.sub.k+1] [less than or equal to] N,

[mathematical expression not reproducible]. (44)

(3) Termination

[mathematical expression not reproducible]. (45)

This extended algorithm computes the optimal state sequence linearly with respect to the length of the observational sequence, requiring O(TN^{k+1}) calculations, and its memory space, O(N^{k+1}), is independent of the length of the observational sequence, since the intermediate variables need to be computed only once in the tth iteration and, having been used for the computation of the (t + 1)th iteration, can be deleted from storage.
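To illustrate the recursion, the following sketch specializes (43)-(45) to the second-order case (k = 2). It is our own illustration rather than the authors' implementation; the array conventions and the function name are assumptions, and the normalized initial forward variable of (18) is assumed to be supplied as alpha1.

```python
import numpy as np

def entropy_decoding_second_order(alpha1, A2, B2, obs):
    """Extended entropy-based decoding recursion for k = 2, mirroring (43)-(45).
    alpha1[i, j] is the normalized initial forward variable of (18);
    A2[i, j, l] = P(q_t = s_l | q_{t-2} = s_i, q_{t-1} = s_j);
    B2[j, l, m] = P(o_t = v_m | q_{t-1} = s_j, q_t = s_l);
    obs[1:] holds the observation indices for t = 2, ..., T."""
    N = alpha1.shape[0]
    alpha, H = alpha1.copy(), np.zeros((N, N))        # H_1(i, j) = 0, cf. (43)
    for o in obs[1:]:
        new_alpha, new_H = np.zeros((N, N)), np.zeros((N, N))
        for j in range(N):
            for l in range(N):
                w = alpha[:, j] * A2[:, j, l]         # terms of the auxiliary probability (38)
                s = w.sum()
                if s > 0:
                    p = w / s                         # P(q_{t-2} = s_i | q_{t-1} = s_j, q_t = s_l, o_{1:t})
                    log_p = np.log2(np.where(p > 0, p, 1.0))
                    new_H[j, l] = (p * (H[:, j] - log_p)).sum()
                new_alpha[j, l] = s * B2[j, l, o]
        alpha, H = new_alpha / new_alpha.sum(), new_H  # normalized forward variable, cf. (44)
    log_a = np.log2(np.where(alpha > 0, alpha, 1.0))
    return float((alpha * (H - log_a)).sum())          # termination, cf. (45)
```

The triple loop makes the O(TN^{k+1}) = O(TN^3) operation count visible for k = 2, and only the current alpha and H arrays are stored between iterations, matching the O(N^{k+1}) memory claim.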

2.5. Numerical Illustration for the Second-Order HMM. We consider a second-order HMM for illustrating our extended entropy-based algorithm in computing the optimal state sequence. Let us assume that this second-order HMM has the state space S = {s_1, s_2} and the observation symbol set V = {v_1, v_2, v_3}.

The graphical representation of the first-order HMM that is used for the numerical example in this section is given in Figure 1. The second-order HMM in Figure 2 is developed based on the first-order HMM in Figure 1, which has two states and three observational symbols. An HMM of any order has the parameters λ = (π, A, B), where π is the initial state probability vector, A is the state transition probability matrix, and B is the emission probability matrix. Note that the matrices A and B, whose components are indicated as [mathematical expression not reproducible] and [mathematical expression not reproducible] where 1 ≤ i_1, i_2, i_3 ≤ 2 and 1 ≤ m ≤ 3, can be obtained from Figures 1 and 2. However, the initial state probability vector is not shown in these graphical diagrams.

The initial state probability vectors for the first-order and second-order HMM are shown as follows:

[[pi].sub.1] = [0.5 0.5], [[pi].sub.2] = [0.5 0], [[pi].sub.3] = [0.5 0]. (46)

[mathematical expression not reproducible] is the initial state probability vector for the first-order HMM and [mathematical expression not reproducible] and [mathematical expression not reproducible] are the initial state probability vectors for the second-order HMM where [mathematical expression not reproducible], and 1 [less than or equal to] [i.sub.2] [less than or equal to] 2.

The state transition probability matrices for the first-order and second-order HMMs are shown as follows:

[mathematical expression not reproducible]. (47)

[mathematical expression not reproducible] is the state transition probability matrix for the first-order HMM and [mathematical expression not reproducible] and [mathematical expression not reproducible] are the state transition probability matrices for the second-order HMM where [mathematical expression not reproducible], and 1 [less than or equal to] [i.sub.1], [i.sub.2] [less than or equal to] 2

The emission probability matrices for the first-order and second-order HMMs are shown as follows:

[mathematical expression not reproducible]. (48)

[mathematical expression not reproducible] is the emission probability matrix for the first-order HMM and [mathematical expression not reproducible], and [mathematical expression not reproducible] are the emission probability matrices for the second-order HMM where [mathematical expression not reproducible], and [mathematical expression not reproducible].

The following is the observational sequence that we used for illustrating our extended algorithm:

[o.sub.1:6] = ([o.sub.1] = [v.sub.1], [o.sub.2] = [v.sub.1], [o.sub.3] = [v.sub.3], [o.sub.4] = [v.sub.2], [o.sub.5] = [v.sub.3], [o.sub.6] = [v.sub.1]). (49)

We applied our extended algorithm for computing the optimal state sequence based on state entropy. The computed value of the state entropy is shown in Figure 3.

The total entropy after each time step is displayed at the bottom of Figure 3. For example, after receiving the second observation, that is, [o.sub.1:2] = ([o.sub.1] = [v.sub.1], [o.sub.2] = [v.sub.1]), it has produced two state sequences which are [q.sub.1:2] = ([q.sub.1] = [s.sub.1], [q.sub.2] = [s.sub.1]) and [q.sub.1:2] = ([q.sub.1] = [s.sub.1], [q.sub.2] = [s.sub.2]) as shown by the bold arrows. Each possible state sequence has a probability of 0.5; that is, [[??].sub.2](1, 1) = [[??].sub.2](1, 2) = 0.5, and hence the total entropy is 1 bit.

However, after receiving the fourth observation, that is, [o.sub.1:4] = ([o.sub.1] = [v.sub.1], [o.sub.2] = [v.sub.1], [o.sub.3] = [v.sub.3], [o.sub.4] = [v.sub.2]), it has produced one state sequence which is [q.sub.1:4] = ([q.sub.1] = [s.sub.1], [q.sub.2] = [s.sub.2], [q.sub.3] = [s.sub.1], [q.sub.4] = [s.sub.2]) as shown by the dashed arrow. This possible state sequence has a probability of 1, that is, [[??].sub.4](1, 2) = 1, and hence the total entropy is 0 bit. After receiving the sixth observation, this second-order HMM has produced only one possible optimal state sequence; that is, [q.sub.1:6] = ([q.sub.1] = [s.sub.1], [q.sub.2] = [s.sub.2], [q.sub.3] = [s.sub.1], [q.sub.4] = [s.sub.2], [q.sub.5] = [s.sub.1], [q.sub.6] = [s.sub.2]) with the total entropy of 0 which indicates that there is no uncertainty.

3. Entropy-Based Decoding Algorithm with a Reduction Approach

The extended entropy-based Viterbi algorithm in Section 2 addresses only the issue of memory space; it does not reduce the computational complexity. In this section, we introduce an efficient entropy-based algorithm that uses a reduction approach, namely, the entropy-based order-transformation forward algorithm (EOTFA), to compute the optimal state sequence based on entropy for any generalized HHMM. This algorithm addresses both memory space and computational complexity.

3.1. Transforming a High-Order HMM with a Single Observational Sequence. The EOTFA algorithm involves transforming a generalized high-order HMM into an equivalent first-order HMM and developing an algorithm based on the equivalent first-order model. The algorithm performs its computation on the observational sequence and requires O(TÑ^2) calculations, where Ñ is the number of states in the equivalent first-order model and T is the length of the observational sequence.

The transformation of a generalized high-order HMM into an equivalent first-order HMM is based on Hadar and Messer's method [7].

Suppose q̃_t = (q_t, q_{t-1}, ..., q_{t-k+1}) for 1 ≤ t ≤ T; then the hidden state process {q̃_t}_{t=1}^{T} built from the kth-order Markov chain satisfies

[mathematical expression not reproducible], (50)

where q̃_t takes values in the set of hidden states S̃ = {s_i, i = 1, 2, ..., N}^k. Hence, the hidden state process {q̃_t}_{t=1}^{T} forms a first-order Markov process.

The observation process {o_t}_{t=1}^{T} satisfies

[mathematical expression not reproducible]. (51)

Hence, the hidden state process {q̃_t}_{t=1}^{T} and the observation process {o_t}_{t=1}^{T} together form a first-order HMM.

Remarks 9. (i)

[mathematical expression not reproducible], (52)

where [mathematical expression not reproducible] and [mathematical expression not reproducible]. (ii)

[mathematical expression not reproducible], (53)

where [mathematical expression not reproducible].

Note that we assume [mathematical expression not reproducible] and [mathematical expression not reproducible].

The elements for the transformation of a high-order into an equivalent first-order discrete HMM are as follows:

(i) Number of distinct hidden states, [??]

(ii) Number of distinct observed symbols, M

(iii) Length of observational sequence, T

(iv) Observational sequence, O = {o_t, t = 1, 2, ..., T}

(v) Hidden state sequence, Q̃ = {q̃_t, t = 1, 2, ..., T}

(vi) Possible values for each state, S̃ = {s_i, i = 1, 2, ..., N}^k

(vii) Possible symbols per observation, Ṽ = {v_w, w = 1, 2, ..., M}

(viii) Initial hidden state probability vector, π̃ = {π̃_i}, and π̃_i is the probability that the model will transit from state [mathematical expression not reproducible], where

[mathematical expression not reproducible] (54)

(ix) State transition probability matrix, [mathematical expression not reproducible] and [a.sub.ij] is the probability of a transition from state [mathematical expression not reproducible] at time t - 1 to state [mathematical expression not reproducible] at time t where

[mathematical expression not reproducible], (55)

where the first k - 1 entries of [[??].sub.i] are equal to the last k - 1 entries of [[??].sub.j]

(x) Emission probability matrix, [mathematical expression not reproducible], and [[??].sub.i]([v.sub.m]) is a probability of observing [v.sub.m] in state [mathematical expression not reproducible] at time t:

[mathematical expression not reproducible]. (56)
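A compact way to realize elements (viii)-(x) is to enumerate the N^k composite states and copy the top-order probabilities into an Ñ × Ñ transition matrix and an Ñ × M emission matrix. The sketch below is our own: the array layouts of A_k and B_k, the oldest-to-newest ordering of the composite state tuples, and the function name are assumptions, and the initial vector of (54) is not constructed here.

```python
import numpy as np
from itertools import product

def transform_to_first_order(N, M, k, A_k, B_k):
    """Order reduction in the spirit of (54)-(56): build the composite state
    space and the transition and emission matrices of the equivalent
    first-order HMM.  A_k has shape (N,)*(k+1), with
    A_k[i_1, ..., i_k, i_{k+1}] = P(q_t = s_{i_{k+1}} | q_{t-k} = s_{i_1}, ..., q_{t-1} = s_{i_k});
    B_k has shape (N,)*k + (M,), with
    B_k[i_1, ..., i_k, m] = P(o_t = v_m | q_{t-k+1} = s_{i_1}, ..., q_t = s_{i_k}).
    Composite states are ordered oldest-to-newest (our convention); the paper
    lists them newest-first."""
    states = list(product(range(N), repeat=k))        # the N^k composite states
    index = {s: c for c, s in enumerate(states)}
    A_tilde = np.zeros((len(states), len(states)))
    B_tilde = np.zeros((len(states), M))
    for u in states:                                  # u plays the role of the composite state at time t - 1
        for l in range(N):
            v = u[1:] + (l,)                          # overlap condition of (55)
            A_tilde[index[u], index[v]] = A_k[u + (l,)]
        B_tilde[index[u], :] = B_k[u]                 # emission of (56)
    return states, A_tilde, B_tilde
```

Only transitions satisfying the overlap condition of (55) receive nonzero probability, so each row of A_tilde has at most N nonzero entries.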

3.2. The Forward and Backward Probability Variables for the Transformed Model. In this subsection, we omit the derivations of the forward and backward probability variables since they are similar to those in Section 2.2.

The forward recursion variable for the transformed model at time t is as follows:

[mathematical expression not reproducible]. (57)

The backward recursion variable for the transformed model at time t is as follows:

[mathematical expression not reproducible]. (58)

The normalized forward variable at time t is as follows:

[mathematical expression not reproducible], (59)

where [mathematical expression not reproducible].

The normalized backward variables at time t is as follows:

[mathematical expression not reproducible], (60)

where [mathematical expression not reproducible].

3.3. The Computation of the Optimal State Sequence for a HHMM. For the EOTFA algorithm, we require the state entropy variable H_t(s̃_j), which can be computed recursively from the previous variable H_{t-1}(s̃_i).

We define the state entropy variable as follows.

Definition 10. The state entropy variable H_t(s̃_j) in an order-transformation HMM is the entropy of all the paths that lead to state s̃_j at time t, given the observations o_1, o_2, ..., o_t. It can be denoted as

[mathematical expression not reproducible]. (61)

From (61) at t = 1, we obtain the initial state entropy variable as

H_1(s̃_j) = 0. (62)

From (61) and (62), we obtain the recursion on the entropy for t = 2, ..., T - 1, and 1 [less than or equal to] i, j [less than or equal to] [??]

[mathematical expression not reproducible], (63)

where

[mathematical expression not reproducible]. (64)

The auxiliary probability [mathematical expression not reproducible] is required for our EOTFA algorithm. It can be computed as follows:

[mathematical expression not reproducible]. (65)

For the final process, we compute H([q.sub.1:T] | [o.sub.1:T]) which can be expanded as follows:

[mathematical expression not reproducible]. (66)

The basic entropy property in (40) and the following basic properties of the HMM are used for proving Lemma 11. According to the transformation of a high-order into an equivalent first-order HMM, the states q̃_{t-r}, r ≥ 2, and q̃_t are statistically independent given q̃_{t-1}. Similarly, q̃_{t-r}, r ≥ 2, and o_t are statistically independent given q̃_{t-1}.

The following proof is due to Hernando et al. [4].

Lemma 11. For the transformation of a high-order into an equivalent first-order HMM, the entropy of the state sequence up to time t - 2, given the state at time t - 1 and the observations up to time t - 1, is conditionally independent of the state and observation at time t:

[mathematical expression not reproducible]. (67)

Proof.

[mathematical expression not reproducible]. (68)

Our EOTFA algorithm for computing the optimal state sequence is based on the normalized forward recursion variable, state entropy recursion variable, and auxiliary probability. From (59), (60), (61), (62), (63), and (66), we construct our EOTFA algorithm as follows.

(1) Initialization. For t = 1 and 1 [less than or equal to] j [less than or equal to] [??],

[mathematical expression not reproducible]. (69)

(2) Recursion. For t = 2, ..., T and 1 [less than or equal to] j [less than or equal to] [??],

[mathematical expression not reproducible]. (70)

(3) Termination

[mathematical expression not reproducible]. (71)

The direct evaluation algorithm, Hernando et al.'s algorithm, and our extended algorithm all perform the computation of the state entropy exponentially with respect to the order of the HMM. Our EOTFA algorithm instead transforms the generalized high-order HMM into an equivalent first-order HMM and then computes the state entropy on the equivalent first-order model; hence it is the most efficient, requiring O(TÑ^2) calculations compared with O(N^{T+k-1}) calculations for the direct evaluation method and O(TN^{k+1}) calculations for the extended algorithm, where N is the number of states in the original model, Ñ is the number of states in the equivalent first-order model, T is the length of the observational sequence, and k is the order of the HMM.
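Under the same assumptions as the sketches in Sections 2.3 and 3.1, the EOTFA pipeline can then be expressed as the order reduction followed by the first-order entropy recursion applied to the transformed model. The snippet below is only a hypothetical usage example: pi_tilde (built from (54)), A2, B2, and obs are placeholders for an actual second-order model, not values given in this paper.

```python
# Hypothetical usage, reusing the sketches above.  pi_tilde must be built from
# (54); A2, B2, and obs are placeholders for a concrete second-order model.
states, A_tilde, B_tilde = transform_to_first_order(N=2, M=3, k=2, A_k=A2, B_k=B2)
H_total = entropy_forward_first_order(pi_tilde, A_tilde, B_tilde, obs)
# Each entry of `states` is a tuple of original states, so any decoded path of
# the equivalent first-order model maps directly back to a path of the HHMM.
```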

3.4. Numerical Illustration for an Equivalent First-Order HMM. We consider the second-order HMM from Section 2.5 for illustrating our EOTFA algorithm in computing the optimal state sequence. According to our proposed algorithm, we first transform the second-order HMM of Section 2.5 into the equivalent first-order HMM by using Hadar and Messer's method [7]. The equivalent first-order HMM has the model parameters [mathematical expression not reproducible], where π̃ is the initial state probability vector, Ã is the state transition probability matrix, and B̃ is the emission probability matrix.

[mathematical expression not reproducible]. (72)

Note that the above state transition probability and the emission probability matrices whose components are indicated as [mathematical expression not reproducible] and [mathematical expression not reproducible] where 1 [less than or equal to] [i.sub.1], [i.sub.2] [less than or equal to] 4 and 1 [less than or equal to] m [less than or equal to] 3 can be obtained from the graphical diagram in Figure 4.

The state space for the equivalent first-order HMM is [mathematical expression not reproducible], where [mathematical expression not reproducible], and s̃_4 = [s_2, s_2], and the possible symbols per observation are V = {v_1, v_2, v_3}. Note that [mathematical expression not reproducible], where [mathematical expression not reproducible], where [mathematical expression not reproducible], and [mathematical expression not reproducible], where [mathematical expression not reproducible].

The equivalent first-order HMM developed based on Hadar and Messer's method [7] is shown in Figure 4.

Secondly, the optimal state sequence is computed based on the equivalent first-order HMM by using our proposed algorithm. Finally, the optimal state sequence of the second-order HMM is inferred from the optimal state sequence from the equivalent first-order HMM.

The following is the observational sequence used for illustrating our algorithm:

[o.sub.1:6] = ([o.sub.1] = [v.sub.1], [o.sub.2] = [v.sub.1], [o.sub.3] = [v.sub.3], [o.sub.4] = [v.sub.2], [o.sub.5] = [v.sub.3], [o.sub.6] = [v.sub.1]). (73)

We applied our EOTFA algorithm for computing the optimal state sequence based on the state entropy. The computed values of the state entropy are shown in Figure 5.

The total entropy after each time step for the transformed model, that is, the second-order HMM transformed into the equivalent first-order HMM, is displayed at the bottom of Figure 5. For example, after receiving the fifth observation, the model has produced only one possible state sequence, [mathematical expression not reproducible], as shown by the bold arrow, with a probability of 1. The total entropy at t = 5 is therefore 0, which indicates that there is no uncertainty. After receiving the sixth observation, that is, o_{1:6} = (o_1 = v_1, o_2 = v_1, o_3 = v_3, o_4 = v_2, o_5 = v_3, o_6 = v_1), this equivalent first-order HMM has produced one possible optimal state sequence [mathematical expression not reproducible], which corresponds to the sequence q_{1:6} = (q_1 = s_1, q_2 = s_2, q_3 = s_1, q_4 = s_2, q_5 = s_1, q_6 = s_2) produced by the second-order HMM in Section 2.5, with a total entropy of 0, again indicating no uncertainty. As a result, the optimal state sequence of the high-order HMM is inferred from the optimal state sequence of the equivalent first-order HMM. Our proposed algorithm operates on the equivalent first-order HMM and requires only O(TÑ^2) calculations, and hence we conclude that the EOTFA algorithm is more efficient.

4. Conclusion and Future Work

We have introduced a novel algorithm for computing the optimal state sequence of a HHMM that requires O(TÑ^2) calculations and O(Ñ^2) memory space, where Ñ is the number of states in the equivalent first-order HMM and T is the length of the observational sequence. This algorithm is to be run alongside the Viterbi algorithm for tracking the optimal state sequence as well as the entropy of the distribution of the state sequence. We have developed this algorithm for the case of a generalized discrete high-order HMM. This research can also be extended to continuous high-order HMMs, which are widely used in speech recognition.

https://doi.org/10.1155/2018/8068196

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

[1] V. Ciriza, L. Donini, J. Durand, and S. Girard, "Optimal timeouts for power management under renewal or hidden Markov processes for requests," Tech. Rep., 2011, http://hal.inria.fr/hal00412509/en.

[2] L. R. Rabiner, "Tutorial on hidden Markov models and selected applications in speech recognition," Proceedings of the IEEE, vol. 77, no. 2, pp. 257-286, 1989.

[3] J. G. Proakis and M. Salehi, Communications System Engineering, Prentice-Hall, Upper Saddler River, NJ, USA, 2002.

[4] D. Hernando, V. Crespi, and G. Cybenko, "Efficient computation of the hidden Markov model entropy for a given observation sequence," IEEE Transactions on Information Theory, vol. 51, no. 7, pp. 2681-2685, 2005.

[5] G. S. Mann and A. McCallum, "Efficient computation of entropy gradient for semi-supervised conditional random fields," in Proceedings of the Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics (NAACL '07), pp. 109-112, Association for Computational Linguistics, Morristown, NJ, USA, April 2007.

[6] V. M. Ilic, "Entropy Semiring Forward-backward Algorithm for HMM Entropy Computation," 2011, https://arxiv.org/abs/1108.0347.

[7] U. Hadar and H. Messer, "High-order hidden Markov models--estimation and implementation," in Proceedings of the 15th IEEE/SP Workshop on Statistical Signal Processing (SSP '09), pp. 249-252, Wales, UK, September 2009.

[8] J. A. du Preez, "Efficient training of high-order hidden Markov models using first-order representations," Computer Speech and Language, vol. 12, no. 1, pp. 23-39, 1998.

[9] L. M. Lee and J. C. Lee, "A study on high-order hidden Markov Models and applications to speech recognition," in Proceedings of the 19th International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, pp. 682-690, 2006.

[10] R. M. Altman, "Mixed hidden Markov models: an extension of the hidden Markov model to the longitudinal data setting," Journal of the American Statistical Association, vol. 102, no. 477, pp. 201-210, 2007.

[11] A. Spagnoli, R. Henderson, R. J. Boys, and J. J. Houwing-Duistermaat, "A hidden Markov model for informative dropout in longitudinal response data with crisis states," Statistics & Probability Letters, vol. 81, no. 7, pp. 730-738, 2011.

[12] T. M. Cover and J. A. Thomas, Elements of Information Theory, Wiley Series in Telecommunications and Signal Processing, John Wiley & Sons, New York, NY, USA, 2006.

Jason Chin-Tiong Chan (1) and Hong Choon Ong (2)

(1) Ted Rogers School of Management, Ryerson University, 350 Victoria St., Toronto, ON, Canada M5B 2K3

(2) School of Mathematical Sciences, Universiti Sains Malaysia, 11800 Gelugor, Penang, Malaysia

Correspondence should be addressed to Jason Chin-Tiong Chan; chintiongjason.chan@ryerson.ca

Received 15 December 2017; Revised 12 February 2018; Accepted 27 February 2018; Published 2 May 2018

Academic Editor: Steve Su

Caption: Figure 1: The graphical diagram shows a first-order HMM with 2 states and 3 observational symbols.

Caption: Figure 2: The graphical diagram shows a second-order HMM with 2 states and 3 observational symbols.

Caption: Figure 3: The evolution of the trellis structure of the second-order HMM with the observation sequence [o.sub.1:6] = ([o.sub.1] = [v.sub.1], [o.sub.2] = [v.sub.1], [o.sub.3] = [v.sub.3], [o.sub.4] = [v.sub.2], [o.sub.5] = [v.sub.3], [o.sub.6] = [v.sub.1]).

Caption: Figure 4: The graphical diagram shows an equivalent first-order HMM.

Caption: Figure 5: The evolution of the trellis structure for a transformation of a second-order into an equivalent first-order HMM with the observation sequence [o.sub.1:6] = ([o.sub.1] = [v.sub.1], [o.sub.2] = [v.sub.1], [o.sub.3] = [v.sub.3], [o.sub.4] = [v.sub.2], [o.sub.5] = [v.sub.3], [o.sub.6] = [v.sub.1]).