
DETERMINATION OF VITERBI PATH FOR 3 HIDDEN AND 5 OBSERVABLE STATES USING HIDDEN MARKOV MODEL

T. Raja Jithendar, M. Tirumala Devi, and G. Saritha

Department of Mathematics, Kakatiya University, Warangal, Telangana, India-506009 [email protected], [email protected], [email protected]

Abstract

The hidden Markov model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process with unobservable (i.e., hidden) states. In an HMM, the state is not directly visible, but the output, which depends on the state, is visible. Each state has a probability distribution over the possible output tokens. The model is referred to as a hidden Markov model even if its parameters are known exactly. The Viterbi algorithm is one method of estimating the underlying state path in hidden Markov models. In this paper, the Viterbi path is derived using a hidden Markov model.

Keywords: Hidden Markov model, Viterbi algorithm, hidden states, observable states.

I. Introduction

The Viterbi algorithm is a dynamic programming algorithm for obtaining the maximum a posteriori probability estimate of the most likely sequence of hidden states; the resulting sequence of states is called the Viterbi path, especially in the context of hidden Markov models. Antibiotics are medicines that fight bacterial infections in people and animals. They work by killing the bacteria or by making it hard for the bacteria to grow and multiply. Antibiotics can be taken in different ways: orally (by mouth) as tablets, pills, capsules, or liquids; topically as a cream, ointment, eye drops, or ear drops; or by intramuscular or intravenous injection, which is usually used for more serious infections. Maria Luz Gamiz et al. [7] applied hidden Markov models in reliability and maintenance. Janani Kalyanam [3] discussed a probabilistic algorithm for list Viterbi decoding. Viterbi [10] derived error bounds for convolutional codes and an asymptotically optimum decoding algorithm. Shinghal and Toussaint [9] described a modification of the Viterbi algorithm formally and derived a measure of its complexity; the modified algorithm uses a heuristic to limit the search through a directed graph or trellis. Saritha et al. [2] discussed the reliability of 4-modular and 5-modular redundancy systems using the Markov technique.

II. Mathematical model

Assume there are m possible states to choose from; the state can be any one of {1, 2, 3, ..., m}. The transition probability from state i to state j is quantified as a_ij. Since a transition can occur from any one of the m states to any one of the m states, there are in total m × m possibilities. They can be arranged in the following matrix representation, known as the state transition matrix S.

S = [ a_11  a_12  ...  a_1m
      a_21  a_22  ...  a_2m
       .     .          .
      a_m1  a_m2  ...  a_mm ]

Here a_ij, i = 1, 2, 3, ..., m, j = 1, 2, 3, ..., m, are probability values between 0 and 1, each row sums to 1, and a_ij can be written as

a_ij = P[X_k = j | X_{k-1} = i]

Therefore,

π_k = π_{k-1} S, i.e., the matrix multiplication of the previous state distribution with the transition matrix,

where π_k = [P[X_k = 1], P[X_k = 2], ..., P[X_k = m]].
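As an illustration of this update, the sketch below (NumPy assumed) propagates the state distribution one step, using the numerical transition matrix and initial distribution given later in this paper.

```python
import numpy as np

# Transition matrix between the hidden states and the initial
# distribution, taken from the numerical model in Section II.
S = np.array([[0.5, 0.2, 0.3],
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])
pi0 = np.array([0.45, 0.30, 0.25])

# One propagation step: the next state distribution is the previous
# (row) distribution multiplied by the transition matrix.
pi1 = pi0 @ S
# pi1 ≈ [0.31, 0.32, 0.37]; it remains a probability distribution.
```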

The emission matrix B contains the probabilities of observing value j while in state i. The observation has k possible values {1, 2, ..., k}.

B = [ b_11  b_12  ...  b_1k
      b_21  b_22  ...  b_2k
       .     .          .
      b_m1  b_m2  ...  b_mk ]

Here b_ij represents the probability that state i emits observable j and can be written as

b_ij = P[Y_k = j | X_k = i]

Therefore,

O_k = π_k B, where O_k = [P[Y_k = 1], P[Y_k = 2], ..., P[Y_k = k]].
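Analogously, a state distribution multiplied by the emission matrix gives the distribution over observables; this minimal sketch (NumPy assumed) uses the numerical matrices given in Section II and the initial distribution.

```python
import numpy as np

# Emission matrix (rows: hidden states N, D, S; columns: the five
# observables) and initial state distribution from Section II.
B = np.array([[0.33, 0.22, 0.19, 0.14, 0.12],
              [0.35, 0.15, 0.15, 0.25, 0.10],
              [0.15, 0.35, 0.25, 0.15, 0.10]])
pi0 = np.array([0.45, 0.30, 0.25])

# Distribution over the five observables induced by pi0.
O = pi0 @ B
# O ≈ [0.291, 0.2315, 0.193, 0.1755, 0.109]; it sums to 1.
```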

The initial state probability distribution is denoted by π_0 and is given by π_0 = [P_1, P_2, ..., P_i, ..., P_m].

The transition matrix is a regular matrix whose elements are the probabilities of moving from one state to another. The probability of going from a hidden state to an observable state is called an emission probability, and the matrix of emission probabilities is called the emission matrix. Here the rows represent hidden states and the columns represent observable states.

In this paper the hidden state space is S = {Nausea, Diarrhea, Stomach pain} and the observable state space is B = {Amoxicillin + Potassium Clavulanate (ap), Cefixime (ce), Amoxicillin (am), Azithromycin (az), Ciprofloxacin (cp)}.

The initial probability distribution is π_0 = [0.45, 0.3, 0.25].

The transition probability matrix between hidden states is

         N     D     S
    N   0.5   0.2   0.3
    D   0.2   0.6   0.2
    S   0.1   0.2   0.7

The emission matrix B is

         ap     ce     am     az     cp
    N   0.33   0.22   0.19   0.14   0.12
    D   0.35   0.15   0.15   0.25   0.10
    S   0.15   0.35   0.25   0.15   0.10
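The model above can be written down and sanity-checked in a few lines; this is a minimal sketch (NumPy assumed) that verifies each row of the transition and emission matrices sums to 1.

```python
import numpy as np

hidden = ["N", "D", "S"]                      # Nausea, Diarrhea, Stomach pain
observables = ["ap", "ce", "am", "az", "cp"]  # the five antibiotics

pi0 = np.array([0.45, 0.30, 0.25])            # initial probabilities
S = np.array([[0.5, 0.2, 0.3],                # transition matrix
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])
B = np.array([[0.33, 0.22, 0.19, 0.14, 0.12], # emission matrix
              [0.35, 0.15, 0.15, 0.25, 0.10],
              [0.15, 0.35, 0.25, 0.15, 0.10]])

# Every row of a stochastic matrix must sum to 1, as must pi0.
assert np.allclose(S.sum(axis=1), 1.0)
assert np.allclose(B.sum(axis=1), 1.0)
assert np.isclose(pi0.sum(), 1.0)
```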

III. Viterbi Algorithm

For a large number of possibilities, the forward and backward algorithms cannot be used to obtain the maximum-probability state sequence; the Viterbi algorithm is used to obtain the maximum posterior probability estimate of the most likely sequence of hidden states. The total number of possible hidden-state sequences is m = n^t = 3^5 = 243, where n is the number of hidden states and t is the number of observations. Out of these possibilities, consider the observation sequence 2 4 1 3 5 (ce, az, ap, am, cp). Then
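For a problem this small, the count of 3^5 = 243 sequences can be checked by brute force. The sketch below (our own illustration; states and observables are 0-indexed, and the observation sequence ce, az, ap, am, cp is encoded as index list) enumerates every hidden sequence and takes the best joint probability.

```python
import itertools

pi0 = [0.45, 0.30, 0.25]
S = [[0.5, 0.2, 0.3],
     [0.2, 0.6, 0.2],
     [0.1, 0.2, 0.7]]
B = [[0.33, 0.22, 0.19, 0.14, 0.12],
     [0.35, 0.15, 0.15, 0.25, 0.10],
     [0.15, 0.35, 0.25, 0.15, 0.10]]
obs = [1, 3, 0, 2, 4]  # ce, az, ap, am, cp (0-indexed observables)

# Joint probability of one hidden sequence together with the observations.
def joint(path):
    p = pi0[path[0]] * B[path[0]][obs[0]]
    for t in range(1, len(obs)):
        p *= S[path[t - 1]][path[t]] * B[path[t]][obs[t]]
    return p

probs = [joint(path) for path in itertools.product(range(3), repeat=len(obs))]
print(len(probs))   # 243 sequences in total
print(max(probs))   # best joint probability, ~0.0000118174
```

The maximum found this way agrees with the final Viterbi probability derived below.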

P(2, N) = P(2/N)P(N) = 0.099, P(2, D) = P(2/D)P(D) = 0.045, P(2, S) = P(2/S)P(S) = 0.0875

and the Viterbi probabilities are

V_1(1) = 0.099, V_1(2) = 0.045, V_1(3) = 0.0875.

Similarly

From N: P(4/N)P(N/N) = 0.07, P(4/D)P(D/N) = 0.05, P(4/S)P(S/N) = 0.045
From D: P(4/N)P(N/D) = 0.028, P(4/D)P(D/D) = 0.15, P(4/S)P(S/D) = 0.03
From S: P(4/N)P(N/S) = 0.014, P(4/D)P(D/S) = 0.05, P(4/S)P(S/S) = 0.105

and the Viterbi probabilities are

V_2(1) = max{0.099 × 0.07, 0.045 × 0.028, 0.0875 × 0.014} = max{0.00693, 0.0012, 0.001225} = 0.00693,
V_2(2) = 0.00675, V_2(3) = 0.0091875.

Similarly,

From N: P(1/N)P(N/N) = 0.165, P(1/D)P(D/N) = 0.07, P(1/S)P(S/N) = 0.045
From D: P(1/N)P(N/D) = 0.066, P(1/D)P(D/D) = 0.21, P(1/S)P(S/D) = 0.03
From S: P(1/N)P(N/S) = 0.033, P(1/D)P(D/S) = 0.07, P(1/S)P(S/S) = 0.105

and the Viterbi probabilities are

V_3(1) = 0.00114345, V_3(2) = 0.0014175, V_3(3) = 0.0009646875.

Similarly,

From N: P(3/N)P(N/N) = 0.095, P(3/D)P(D/N) = 0.03, P(3/S)P(S/N) = 0.075
From D: P(3/N)P(N/D) = 0.038, P(3/D)P(D/D) = 0.09, P(3/S)P(S/D) = 0.05
From S: P(3/N)P(N/S) = 0.019, P(3/D)P(D/S) = 0.03, P(3/S)P(S/S) = 0.175

and the Viterbi probabilities are

V_4(1) = 0.0001086278, V_4(2) = 0.000127575, V_4(3) = 0.0001688203.

Similarly,

From N: P(5/N)P(N/N) = 0.06, P(5/D)P(D/N) = 0.02, P(5/S)P(S/N) = 0.03
From D: P(5/N)P(N/D) = 0.024, P(5/D)P(D/D) = 0.06, P(5/S)P(S/D) = 0.02
From S: P(5/N)P(N/S) = 0.012, P(5/D)P(D/S) = 0.02, P(5/S)P(S/S) = 0.07

And the viterbi probabilities are

V_5(1) = 0.0000065177, V_5(2) = 0.0000076545, V_5(3) = 0.0000118174.

Now,

Max{V_1(1), V_1(2), V_1(3)} = 0.099
Max{V_2(1), V_2(2), V_2(3)} = 0.0091875
Max{V_3(1), V_3(2), V_3(3)} = 0.0014175
Max{V_4(1), V_4(2), V_4(3)} = 0.0001688203
Max{V_5(1), V_5(2), V_5(3)} = 0.0000118174
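The stage-wise maxima can be reproduced with a compact dynamic-programming loop; the sketch below (NumPy assumed, states and observables 0-indexed as in Section II) carries out the Viterbi recursion for the observation sequence ce, az, ap, am, cp.

```python
import numpy as np

pi0 = np.array([0.45, 0.30, 0.25])
S = np.array([[0.5, 0.2, 0.3],
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])
B = np.array([[0.33, 0.22, 0.19, 0.14, 0.12],
              [0.35, 0.15, 0.15, 0.25, 0.10],
              [0.15, 0.35, 0.25, 0.15, 0.10]])
obs = [1, 3, 0, 2, 4]  # observation sequence ce, az, ap, am, cp

# Viterbi recursion: V[j] is the highest probability of any state
# sequence ending in state j that explains the observations so far.
V = pi0 * B[:, obs[0]]
stage_max = [V.max()]
for o in obs[1:]:
    V = (V[:, None] * S).max(axis=0) * B[:, o]
    stage_max.append(V.max())

# stage_max ≈ [0.099, 0.0091875, 0.0014175, 0.0001688203, 0.0000118174]
```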

The above probabilities are shown in the following diagrams, which give the final path.

Figure 1: The different state probabilities

Figure 2: Viterbi probabilities of hidden states

Figure 3: Maximum probabilities of the Viterbi path

From Figure 2, we observe that for the tablet Amoxicillin + Potassium Clavulanate (ap), the probabilities for Nausea, Diarrhea, and Stomach pain are the highest; the tablet Cefixime (ce) takes second place, and the remaining ones are almost equal.

From Figure 3, we observe the maximum probabilities of the Viterbi path for the different antibiotics: Amoxicillin + Potassium Clavulanate (ap) has the maximum, the second maximum probability is for Cefixime (ce), and the remaining ones are almost equal.

IV. Conclusion

In this paper, we used the hidden Markov model technique with 5 observable and 3 hidden states to find the Viterbi path. From the path, we observe that the most common side effect of these antibiotics is stomach pain. The states attaining the stage-wise maxima are Nausea, Stomach pain, Diarrhea, Stomach pain, Stomach pain, and the maximum probability is 0.0000118174.

Acknowledgement

The data is taken from the article "Availability, Prices and Affordability of Antibiotics Stocked by Informal Providers in Rural India" [8].

References

[1] E Balagurusamy [1984]. "Reliability Engineering". Tata McGraw-Hill Publishing Company Limited.


[2] G. Saritha, T. Sumathi Uma Maheswari, M. Tirumala Devi [2020]. "Reliability Model for 4-Modular and 5-Modular Redundancy System by using Markov Technique". Advances in Intelligent Systems and Computing, Vol. 979, pp. 329-338.

[3] Janani Kalyanam [2009]. "Probabilistic algorithm for list viterbi decoding[Thesis]". University of Wisconsin, Madison.

[4] Jeffrey W. Miller[2016]. "Hidden Markov Models". Lecture Notes on Advanced Stochastic Modeling. Duke University, Durham, NC.

[5] K. K. Aggarwal[1993]. "Reliability Engineering". Kluwer Academic Publishers.

[6] L. S. Srinath[2005]. "Reliability Engineering". Fourth Edition, Affiliated East-West Press Private, Limited, New Delhi.

[7] Maria Luz Gamiz, Nikolaos Limnios, Maria del Carmen Segovia-Garcia [2022]. "Hidden Markov models in reliability and maintenance". European Journal of Operational Research, Vol. 304, pp. 1242-1245.

[8] Meenakshi Gautham, Rosalind Miller, Sonia Rego and Catherine Goodman [2022]. "Availability, Prices and Affordability of Antibiotics Stocked by Informal Providers in Rural India: A Cross-Sectional Survey". https://www.mdpi.com/journal/antibiotics

[9] Shinghal, R. and Godfried T. Toussaint [1979]. "Experiments in text recognition with the modified Viterbi algorithm". IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-1, pp. 184-193.

[10] Viterbi A.J [1967]. "Error bounds for convolutional codes and asymptotically optimum decoding algorithm". IEEE Transactions on Information Theory. Vol.13, Iss 2.

[11] Z. Yang, Y. Li, W. Chen and Y. Zheng [2012] "Dynamic hand gesture recognition using hidden Markov models," 7th International Conference on Computer Science & Education (ICCSE). Melbourne, VIC, Australia, 2012, pp. 360-365.
