Background
Of the three classical problems of hidden Markov models (HMMs), the first is computing the probability of an observation sequence under a given model.
Brute-Force Solution
Suppose the parameters of an HMM are known: $\lambda = [\mathbf{A}, \mathbf{B}, \boldsymbol{\pi}]$, where $\mathbf{A}$ is the hidden-state transition probability matrix, $\mathbf{B}$ is the observation (emission) probability matrix, and $\boldsymbol{\pi}$ is the initial distribution over hidden states. Given an observation sequence $O = \{o_1, o_2, \cdots, o_t, \cdots, o_T\}$, we want to compute the conditional probability $P(O|\lambda)$ of this sequence under the model. Since we already know $\mathbf{B}$, the probabilities of the observed states given the hidden states, and $\mathbf{A}$, the transition probabilities between hidden states, we can compute $P(O|\lambda)$ directly by brute force. The procedure is as follows:
(1) The probability of any hidden state sequence $I = \{i_1, i_2, \cdots, i_T\}$ is:
$$P(I|\lambda) = \pi_{i_1} a_{i_1 i_2} a_{i_2 i_3} \cdots a_{i_{T-1} i_T}$$
(2) Given a hidden state sequence, the probability of the observation sequence is:
$$P(O|I,\lambda) = b_{i_1}(o_1) b_{i_2}(o_2) \cdots b_{i_T}(o_T)$$
(3) The joint probability of the hidden sequence and the observation sequence is:
$$P(O,I|\lambda) = P(I|\lambda) P(O|I,\lambda) = \pi_{i_1} b_{i_1}(o_1) a_{i_1 i_2} b_{i_2}(o_2) \cdots a_{i_{T-1} i_T} b_{i_T}(o_T)$$
(4) Marginalizing over all hidden sequences gives the probability of the observation sequence $O$ under the model $\lambda$:
$$P(O|\lambda) = \sum_I P(O,I|\lambda) = \sum_{i_1, i_2, \cdots, i_T} \pi_{i_1} b_{i_1}(o_1) a_{i_1 i_2} b_{i_2}(o_2) \cdots a_{i_{T-1} i_T} b_{i_T}(o_T)$$
The brute-force method is practical only for models with very few hidden states. Because the hidden sequence is unknown, every possible hidden state sequence must be considered: with $N$ hidden states there are $N^T$ candidate sequences, where $T$ is the length of the sequence, and evaluating each one takes $O(T)$ multiplications. The total time complexity is therefore $O(TN^T)$.
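As a concrete illustration, the enumeration above can be written in a few lines of Python. This is only a sketch: the function and variable names are illustrative, and the parameters are those of the three-box model worked through later in this article.

```python
from itertools import product

# Parameters of the three-box model used later in this article.
pi = [0.2, 0.4, 0.4]            # initial hidden-state distribution
A  = [[0.5, 0.2, 0.3],          # hidden-state transition matrix
      [0.3, 0.5, 0.2],
      [0.2, 0.3, 0.5]]
B  = [[0.5, 0.5],               # emission matrix: rows = boxes,
      [0.4, 0.6],               # columns = observations (0 = red, 1 = white)
      [0.7, 0.3]]

def brute_force(obs):
    """Sum P(O, I | lambda) over all N**T hidden-state sequences I."""
    N, T = len(pi), len(obs)
    total = 0.0
    for seq in product(range(N), repeat=T):    # all N**T hidden sequences
        p = pi[seq[0]] * B[seq[0]][obs[0]]     # pi_{i1} * b_{i1}(o1)
        for t in range(1, T):
            p *= A[seq[t - 1]][seq[t]] * B[seq[t]][obs[t]]
        total += p
    return total

print(brute_force([0, 1, 0]))   # P({red, white, red}), about 0.13022
```

Even for this tiny model the loop already visits $3^3 = 27$ sequences; the count grows exponentially with the sequence length, which is exactly why the forward algorithm below is needed.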
Computing the Observation Probability with the Forward Algorithm
The forward algorithm is essentially dynamic programming: the global solution is assembled from subproblems, which here are called local states. For the forward algorithm, the local state is the forward probability $\alpha_t(i)$: the probability of observing $o_1, o_2, \cdots, o_t$ and being in hidden state $i$ at time $t$. The forward probabilities at time $1$ are initialized as:
$$\alpha_1(i) = \pi_i b_i(o_1), \quad i = 1, 2, \cdots, N$$
Then the forward probabilities at times $t + 1 = 2, 3, \cdots, T$ are obtained by the recursion:
$$\alpha_{t+1}(i) = \left[ \sum_{j=1}^{N} \alpha_t(j) a_{ji} \right] b_i(o_{t+1}), \quad i = 1, 2, \cdots, N$$
Finally, the probability of the observation sequence under the model is:
$$P(O|\lambda) = \sum_{i=1}^{N} \alpha_T(i)$$
The recursion updates $N$ states at each of $T$ steps, each update summing over $N$ terms, so the forward algorithm runs in $O(TN^2)$ time instead of the brute-force $O(TN^T)$.
A Worked Example of the Forward Algorithm
Suppose there are three boxes, each containing balls of two colors, red and white. The number of balls in each box is:
Box          | Box 1 | Box 2 | Box 3
Red balls    |   5   |   4   |   7
White balls  |   5   |   6   |   3
Balls are drawn from the different boxes: the probability of drawing from box 1 is 0.2, from box 2 is 0.4, and from box 3 is 0.4, summing to 1. The initial state distribution is therefore:
$$\boldsymbol{\pi} = (0.2, 0.4, 0.4)^T$$
Next, the state transition probability matrix is:
$$\mathbf{A} = \begin{pmatrix} 0.5 & 0.2 & 0.3 \\ 0.3 & 0.5 & 0.2 \\ 0.2 & 0.3 & 0.5 \end{pmatrix}$$
The observation probability matrix (columns: red, white) is:
$$\mathbf{B} = \begin{pmatrix} 0.5 & 0.5 \\ 0.4 & 0.6 \\ 0.7 & 0.3 \end{pmatrix}$$
The problem: given the observation sequence {red, white, red}, what is its probability? The solution proceeds as follows:
(1) At time 1 the observation is a red ball. Compute the forward probability of each of the three states.
Forward probability of box 1:
$$\alpha_1(1) = \pi_1 b_1(o_1) = 0.2 \times 0.5 = 0.1$$
Forward probability of box 2:
$$\alpha_1(2) = \pi_2 b_2(o_1) = 0.4 \times 0.4 = 0.16$$
Forward probability of box 3:
$$\alpha_1(3) = \pi_3 b_3(o_1) = 0.4 \times 0.7 = 0.28$$
(2) Using the recursion, compute the forward probabilities at time 2, where the observation is a white ball.
Forward probability of box 1:
$$\alpha_2(1) = \left[ \sum_{i=1}^{3} \alpha_1(i) a_{i1} \right] b_1(o_2) = [0.1 \times 0.5 + 0.16 \times 0.3 + 0.28 \times 0.2] \times 0.5 = 0.077$$
Forward probability of box 2:
$$\alpha_2(2) = \left[ \sum_{i=1}^{3} \alpha_1(i) a_{i2} \right] b_2(o_2) = [0.1 \times 0.2 + 0.16 \times 0.5 + 0.28 \times 0.3] \times 0.6 = 0.1104$$
Forward probability of box 3:
$$\alpha_2(3) = \left[ \sum_{i=1}^{3} \alpha_1(i) a_{i3} \right] b_3(o_2) = [0.1 \times 0.3 + 0.16 \times 0.2 + 0.28 \times 0.5] \times 0.3 = 0.0606$$
(3) Using the recursion again, compute the forward probabilities at time 3, where the observation is a red ball.
Forward probability of box 1:
$$\alpha_3(1) = \left[ \sum_{i=1}^{3} \alpha_2(i) a_{i1} \right] b_1(o_3) = [0.077 \times 0.5 + 0.1104 \times 0.3 + 0.0606 \times 0.2] \times 0.5 = 0.04187$$
Forward probability of box 2:
$$\alpha_3(2) = \left[ \sum_{i=1}^{3} \alpha_2(i) a_{i2} \right] b_2(o_3) = [0.077 \times 0.2 + 0.1104 \times 0.5 + 0.0606 \times 0.3] \times 0.4 = 0.03551$$
Forward probability of box 3:
$$\alpha_3(3) = \left[ \sum_{i=1}^{3} \alpha_2(i) a_{i3} \right] b_3(o_3) = [0.077 \times 0.3 + 0.1104 \times 0.2 + 0.0606 \times 0.5] \times 0.7 = 0.05284$$
Finally, the probability of the observation sequence {red, white, red} is:
$$P(O|\lambda) = \sum_{i=1}^{3} \alpha_3(i) = 0.13022$$
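The whole calculation above can be reproduced with a short Python sketch of the forward algorithm (the function and variable names are illustrative, not code from the original article):

```python
def forward(pi, A, B, obs):
    """Forward algorithm: returns P(O | lambda) in O(T * N^2) time."""
    N, T = len(pi), len(obs)
    # Initialization: alpha_1(i) = pi_i * b_i(o_1)
    alpha = [pi[i] * B[i][obs[0]] for i in range(N)]
    # Recursion: alpha_{t+1}(i) = (sum_j alpha_t(j) * a_ji) * b_i(o_{t+1})
    for t in range(1, T):
        alpha = [sum(alpha[j] * A[j][i] for j in range(N)) * B[i][obs[t]]
                 for i in range(N)]
    # Termination: P(O | lambda) = sum_i alpha_T(i)
    return sum(alpha)

pi = [0.2, 0.4, 0.4]
A  = [[0.5, 0.2, 0.3], [0.3, 0.5, 0.2], [0.2, 0.3, 0.5]]
B  = [[0.5, 0.5], [0.4, 0.6], [0.7, 0.3]]   # columns: red, white
print(forward(pi, A, B, [0, 1, 0]))         # red, white, red -> about 0.13022
```

Each pass of the loop corresponds to one step of the recursion worked out by hand above, so the intermediate `alpha` lists match the values $\alpha_1, \alpha_2, \alpha_3$ computed in steps (1) to (3).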
Computing the Observation Probability with the Backward Algorithm
The backward algorithm computes the HMM observation sequence probability in much the same way as the forward algorithm; the main difference is that its dynamic-programming recursion runs in the opposite direction.
Define the backward probability $\beta_t(i)$ as the probability of observing $o_{t+1}, o_{t+2}, \cdots, o_T$ given that the hidden state at time $t$ is $i$. At the final time $T$ it is initialized as:
$$\beta_T(i) = 1, \quad i = 1, 2, \cdots, N$$
Then the backward probabilities of each hidden state at times $T - 1, T - 2, \cdots, 1$ are obtained by the recursion:
$$\beta_t(i) = \sum_{j=1}^{N} a_{ij} b_j(o_{t+1}) \beta_{t+1}(j), \quad i = 1, 2, \cdots, N$$
Finally, the probability of the observation sequence under the model is:
$$P(O|\lambda) = \sum_{i=1}^{N} \pi_i b_i(o_1) \beta_1(i)$$
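The backward recursion can be sketched in Python in the same style (illustrative names; applied to the three-box model above it should give the same probability as the forward pass, about 0.13022):

```python
def backward(pi, A, B, obs):
    """Backward algorithm: returns P(O | lambda)."""
    N, T = len(pi), len(obs)
    # Initialization: beta_T(i) = 1
    beta = [1.0] * N
    # Recursion: beta_t(i) = sum_j a_ij * b_j(o_{t+1}) * beta_{t+1}(j)
    for t in range(T - 2, -1, -1):          # t runs from T-1 down to 1
        beta = [sum(A[i][j] * B[j][obs[t + 1]] * beta[j] for j in range(N))
                for i in range(N)]
    # Termination: P(O | lambda) = sum_i pi_i * b_i(o_1) * beta_1(i)
    return sum(pi[i] * B[i][obs[0]] * beta[i] for i in range(N))

pi = [0.2, 0.4, 0.4]
A  = [[0.5, 0.2, 0.3], [0.3, 0.5, 0.2], [0.2, 0.3, 0.5]]
B  = [[0.5, 0.5], [0.4, 0.6], [0.7, 0.3]]   # columns: red, white
print(backward(pi, A, B, [0, 1, 0]))        # same result as the forward pass
```

That the forward and backward passes agree is a useful sanity check when implementing either algorithm; in practice the two sets of probabilities are combined when training an HMM with Baum-Welch.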
Copyright belongs to the original author, Starry memory.