By Z.H. Fu
https://fuzihaofzh.github.io/blog/
This post summarizes important problem types and concepts related to stochastic processes, covering Markov chains, countable Markov chains, continuous-time Markov chains, Martingales, and several common models. These notes should help in understanding and solving problems involving stochastic processes.
Within stochastic processes, the Markov chain is a key concept. We discuss how to find the stationary distribution from a given transition matrix, how to compute state-transition probabilities, and how to compute the probability of reaching a target state from a given starting state. We also cover related notions such as absorbing states, recurrent states, and the expected time until the first passage or the first change of state.
For countable Markov chains, we describe how to decide whether a chain is transient or positive recurrent, and how to find limiting probabilities. We also discuss classical models such as the Gambler's Ruin problem and the Alarm Clock problem, together with their solutions.
For continuous-time Markov chains, we study how to build the generator matrix, how to compute the transition matrix, and how to solve for the stationary distribution, as well as the expected time until the first change of state and the expected first passage time.
Finally, we introduce the Martingale concept and how to check whether a stochastic process satisfies the Martingale property.
Markov chains
Finding the stationary distribution from a given transition matrix
Solve $\pi P = \pi$ directly, or equivalently $\pi(I-P)=0$. Note that $\pi$ is a left eigenvector!
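As a quick numerical sketch (the 3-state matrix below is an assumed example, not from these notes), the stationary distribution can be computed in NumPy as the left eigenvector of $P$ for eigenvalue 1:

```python
import numpy as np

# Assumed example transition matrix (rows sum to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

# Left eigenvectors of P are the (right) eigenvectors of P^T.
w, v = np.linalg.eig(P.T)
k = np.argmin(np.abs(w - 1.0))        # pick the eigenvalue closest to 1
pi = np.real(v[:, k])
pi = pi / pi.sum()                    # normalize to a probability vector
print(pi, pi @ P)                     # pi and pi P should agree
```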
Finding hitting probabilities from given transition probabilities
Set up a difference equation and plug in the boundary conditions: $f_i = p f_{i+1} + q f_{i-1} \Rightarrow f_{i+1} - f_i = (q/p)(f_i - f_{i-1}), \ldots, f_N - f_{N-1} = (q/p)^{N-1} f_1$.
Given the transition matrix, find the expected number of steps from $j$ to $i$
Treat $i$ as an absorbing state. Delete row $i$ and column $i$ to obtain the remaining matrix $Q$; then the expected number of visits to state $j$ starting from any other state $k$ is given by $I + Q + Q^2 + \ldots + Q^\infty = (I-Q)^{-1} = M$. After computing $M$, summing row $j$ gives the expected number of steps spent in the other states before reaching $i$, which is exactly the expected number of steps to reach $i$ from $j$. If $i = j$, use the stationary distribution instead: the expected return time is $1/\pi(i)$.
Expected number of steps before absorption
If starting from node $i$, sum the $i$-th row of $M = (I-Q)^{-1}$.
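A minimal sketch of both computations, using an assumed 4-state matrix where state 3 is treated as absorbing; $M_{kj}$ gives the expected visits to $j$ starting from $k$, and row sums give the expected steps before absorption:

```python
import numpy as np

# Assumed example chain; state 3 is absorbing.
P = np.array([[0.1, 0.6, 0.2, 0.1],
              [0.4, 0.3, 0.2, 0.1],
              [0.2, 0.2, 0.3, 0.3],
              [0.0, 0.0, 0.0, 1.0]])

Q = P[:3, :3]                          # transition matrix restricted to transient states
M = np.linalg.inv(np.eye(3) - Q)       # fundamental matrix (I - Q)^{-1}
print(M)                               # expected visit counts M[k, j]
print(M.sum(axis=1))                   # expected steps to absorption from states 0, 1, 2
```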
Expected return time for a recurrent state
Starting from $A$, the expected number of steps to return to $A$ is $1/\pi(A)$.
Given the transition matrix, find the probability of reaching $i$ from $j$
Make the target states absorbing, so the transition matrix takes the block form $\begin{bmatrix} I & 0 \\ S & Q \end{bmatrix}$. Compute $A = (I-Q)^{-1} S = MS$; then $A_{ij}$ is the probability that, starting from $i$, the chain is absorbed at $j$.
Starting from $k$, probability of reaching $j$ before $i$
Make $i$ and $j$ absorbing, build the block matrix above, and compute $A = (I-Q)^{-1} S = MS$; then $A_{kj}$ is the desired probability.
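A small sketch of the $(I-Q)^{-1}S$ computation with an assumed 4-state chain in which states 0 and 3 have been made absorbing; row $k$ of $A$ gives the probabilities of being absorbed at 0 versus 3 starting from transient state $k$, i.e. the chance of hitting one target before the other:

```python
import numpy as np

# Assumed example: states 0 and 3 are absorbing, states 1 and 2 are transient.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.3, 0.2, 0.4, 0.1],
              [0.1, 0.5, 0.2, 0.2],
              [0.0, 0.0, 0.0, 1.0]])

transient, absorbing = [1, 2], [0, 3]
Q = P[np.ix_(transient, transient)]
S = P[np.ix_(transient, absorbing)]
A = np.linalg.solve(np.eye(len(transient)) - Q, S)   # = (I - Q)^{-1} S
print(A, A.sum(axis=1))                              # each row sums to 1
```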
Checking whether a sequence is Markov
$P(X_i=0)=0.5$, $P(X_i=1)=0.5$, and $Y_i = X_i + X_{i-1}$.
Not Markov: if $Y_{n-1}=1$, different values of $Y_{n-2},\ldots,Y_{1}$ can yield different $X_{n-1}$, and different $X_{n-1}$ lead to different $P(Y_n \mid Y_{n-1})$, so the process is not Markov.
For example, from $P(Y_3=2 \mid Y_2=1, Y_1=0)$ we deduce $X_0=0, X_1=0, X_2=1$, in which case $P(Y_3=2\mid\cdots)=1/2$. On the other hand, for $P(Y_3=2\mid Y_2=1, Y_1=2)$ we have $X_0=1, X_1=1, X_2=0$, in which case $P(Y_3=2\mid\cdots)=0$.
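A Monte Carlo sanity check of this counterexample (the simulation setup is mine, not from the notes): estimating the two conditional probabilities shows they differ, so the extra history matters.

```python
import numpy as np

# Estimate P(Y3=2 | Y2=1, Y1=0) and P(Y3=2 | Y2=1, Y1=2) by simulation.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1_000_000, 4))   # X_0, X_1, X_2, X_3 ~ Bernoulli(1/2)
Y = X[:, 1:] + X[:, :-1]                      # columns are Y_1, Y_2, Y_3

for y1 in (0, 2):
    mask = (Y[:, 0] == y1) & (Y[:, 1] == 1)
    p = np.mean(Y[mask, 2] == 2)
    print(f"P(Y3=2 | Y2=1, Y1={y1}) ~ {p:.3f}")
# Prints roughly 0.5 and 0.0: the conditional law depends on more than Y2.
```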
Countable Markov chains
Determining whether a Markov chain is transient
Pick any state $z$ (state 0 works); let $\alpha(x)$ denote the probability of reaching $z$ starting from $x$. The Markov chain is transient iff there is a solution satisfying (1) $0 \le \alpha(x) \le 1$, (2) $\alpha(z)=1$ and $\inf\{\alpha(x): x\in S\}=0$, (3) $\alpha(x)=\sum_{y\in S}p(x,y)\alpha(y)$ for $x\ne z$. Write (3) for each state $x$ to get a recursive equation, solve it for an expression for $\alpha(x)$, $x \in S$, and then substitute the remaining conditions to pin down the constants.
Determining whether a Markov chain is positive recurrent
Solve the stationarity equations $\pi(x)=\sum_{y\in S} \pi(y)p(y,x)$ to get $\pi(n)=x_n \pi(0)$. Since $\sum_n \pi(n)=1$, the series $\{x_n\}$ must be summable, and this convergence together with the normalization determines the coefficients in $\{x_n\}$.
Example: finding the limiting probability
$p(n,n+1)=2/3$, $p(n,0)=1/3$.
Solution: substituting into $\pi(x)=\sum_{y\in S} \pi(y)p(y,x)$ gives $\pi(n)=\tfrac{2}{3}\pi(n-1) \Rightarrow \pi(n)=(2/3)^n\pi(0)$; since $\sum_{n=0}^\infty \pi(n)=1$, we get $\pi(n)=\tfrac{1}{3}(2/3)^n$.
Example: proving recurrence
$p(n,n+1)=p$, $p(n,0)=1-p$; show that the chain is recurrent.
First show that state 0 is recurrent. Let $N_j=\min\{n>0:X_n=j\}$ be the time of the first return to $j$; recurrence means $P(N_0<\infty)=1$. Here $P(N_0=n\mid X_0=0)=p^{n-1}(1-p)$, so $f_0=\sum_k P(N_0=k\mid X_0=0)=P(N_0<\infty\mid X_0=0)=\sum_{n=1}^\infty p^{n-1}(1-p)=\frac{1-p}{1-p}=1$.
Method 2: use the definition of recurrence, $\sum_{n=1}^\infty P_{jj}^n=\infty$; see the 2D random walk.
Continuous-time Markov chains
Finding the generator matrix
$a(x,y)$ is the rate of transitions from $x$ to $y$ per unit time. The generator matrix $A$ has $A_{ij}=a(i,j)$ for $i\ne j$, and the diagonal entries are chosen so that each row sums to 0; $-A_{xx}$ is the total rate of leaving state $x$.
Finding the transition matrix
$P_t=e^{tA}$: diagonalize $A = PDP^{-1}$ (there is always a zero eigenvalue), then exponentiate: $P_t = P\,\mathrm{diag}\{e^{d_1t},\ldots,e^{d_nt}\}\,P^{-1}$.
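A small sketch using an assumed 2-state generator (exit rate 2 from state 0, rate 1 from state 1); `scipy.linalg.expm` computes $e^{tA}$ directly, and the eigendecomposition route gives the same matrix:

```python
import numpy as np
from scipy.linalg import expm

# Assumed generator: leave state 0 at rate 2, leave state 1 at rate 1.
A = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])

t = 0.5
P_t = expm(t * A)                       # transition matrix P_t = e^{tA}
print(P_t, P_t.sum(axis=1))             # rows sum to 1

# Same thing via the eigendecomposition A = V D V^{-1}
d, V = np.linalg.eig(A)
P_t_eig = V @ np.diag(np.exp(d * t)) @ np.linalg.inv(V)
print(np.allclose(P_t, P_t_eig))        # True
```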
Finding the stationary distribution
Either let $t\rightarrow \infty$ in $P_t$, or use the relation $p'(t)=p(t)A$ between the state distribution and the generator and solve $\pi A=0$ directly (together with $\sum_x \pi(x) = 1$).
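Continuing with the assumed 2-state generator from above, $\pi A = 0$ plus the normalization can be solved by replacing one equation with $\sum_x \pi(x) = 1$:

```python
import numpy as np

# Assumed 2-state generator (same as above).
A = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])
n = A.shape[0]

# pi A = 0  <=>  A^T pi^T = 0; replace the last equation by sum(pi) = 1.
M = A.T.copy()
M[-1, :] = 1.0
b = np.zeros(n)
b[-1] = 1.0
pi = np.linalg.solve(M, b)
print(pi)          # -> [1/3, 2/3]
```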
Expected time until the first change of state
Starting from state 1, find the expected time until the state first changes. The generator entry $A_{11}$ is minus the rate of leaving state 1, so the expected holding time is $1/|A_{11}|$ (e.g. $1/3$ when the total exit rate from state 1 is 3).
Expected first passage time
Starting from state 1, find the expected time to first reach state 4. If the only transition out of state 1 is the direct jump to state 4, at rate $A_{14}$, then the holding time in state 1 is exponential with that rate and the expected time is $1/A_{14}$; in general, first-passage times are obtained by solving a linear system over the non-target states.
A Poisson process and a discrete Markov chain form a continuous-time Markov chain
Discrete Markov chain $X_n$, Poisson process $N(t)$; show that $Z(t)=X_{N(t)}$ is a CTMC.
Proof: $P\{Z(s+t)=j\mid Z(s)=i,\, Z(u)=k_u,\, 0\le u< s\} = P\{X_{N(s+t)}=j\mid X_{N(s)}=i,\, X_{N(u)}=k_u,\, 0\le u< s\} = P\{X_{N(s+t)}=j\mid X_{N(s)}=i\}$ (since $N(s)\ge N(u)$; then by the Markov property of the DTMC) $= P\{Z(s+t)=j\mid Z(s)=i\}$.
Martingale
Checking whether a process is a Martingale
Compute $E[Y_{n+1}\mid X_1,\ldots,X_n]$ and check whether it equals $Y_n$. Remember to also check integrability, $E[|Y_n|]<\infty$.
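As an assumed example (not from the notes), here is a Monte Carlo check that the symmetric simple random walk $S_n$ satisfies $E[S_{n+1}\mid S_n=s]\approx s$; this is only a sanity check of the martingale property, not a proof:

```python
import numpy as np

# S_n = X_1 + ... + X_n with P(X_i = +1) = P(X_i = -1) = 1/2.
rng = np.random.default_rng(1)
steps = rng.choice([-1, 1], size=(500_000, 11))
S = steps.cumsum(axis=1)                # columns are S_1 ... S_11

n = 9                                   # condition on S_10 (column index 9)
for s in (-2, 0, 2):
    mask = S[:, n] == s
    print(s, S[mask, n + 1].mean())     # each average is close to s
```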
Common models
M/G/1 Queue
Interarrival times are exponential. Let $X_n$ denote the number of customers left behind by the $n$-th departure, and let $Y_n$ denote the number of customers arriving during the service of the $(n+1)$-st customer. Then $X_{n+1}=X_n-1+Y_n$ if $X_n>0$, and $X_{n+1}=Y_n$ if $X_n=0$. Also $P\{Y_n=j\}=\int_0^\infty e^{-\lambda x}(\lambda x)^j/j!\,dG(x)$, which gives $P_{0j}=\int_0^\infty e^{-\lambda x}(\lambda x)^j/j!\,dG(x)$, $P_{ij}=\int_0^\infty e^{-\lambda x}(\lambda x)^{j-i+1}/(j-i+1)!\,dG(x)$ for $j\ge i-1$, $i\ge 1$, and $P_{ij}=0$ otherwise.
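A minimal sketch under an assumed service distribution: if $G$ is a point mass at $d$ (deterministic service), the integral collapses to a Poisson probability and the $P_{ij}$ can be tabulated directly.

```python
from math import exp, factorial

# Assumed parameters: arrival rate lam, deterministic service time d, so
# P_ij = e^{-lam*d} (lam*d)^{j-i+1} / (j-i+1)!  for j >= i-1, i >= 1.
lam, d = 1.0, 0.8

def poisson_pmf(k: int, mean: float) -> float:
    return exp(-mean) * mean**k / factorial(k)

def P(i: int, j: int) -> float:
    if i == 0:
        return poisson_pmf(j, lam * d)
    if j >= i - 1:
        return poisson_pmf(j - i + 1, lam * d)
    return 0.0

row = [P(2, j) for j in range(10)]
print(row, sum(row))     # row sum approaches 1 as more j are included
```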
G/M/1 Queue
Let $X_n$ denote the number of customers in the system as seen by the $n$-th arrival. Then $P_{i,i+1-j}=\int_0^\infty e^{-\mu t}(\mu t)^j/j!\,dG(t)$ for $j=0,\cdots,i$, and $P_{i0}=\int_0^\infty \sum_{k=i+1}^\infty e^{-\mu t}(\mu t)^k/k!\,dG(t)$ for $i\ge 0$.
Gambler’s Ruin Problem
$P_{0,0}=P_{N,N}=1$, $P_{i,i+1}=p=1-P_{i,i-1}$. Solution: let $f_i$ denote the probability that, starting with fortune $i$, the fortune eventually reaches $N$. We obtain $f_i=pf_{i+1}+qf_{i-1} \Rightarrow f_{i+1}-f_i=(q/p)(f_i-f_{i-1}),\ldots, f_N-f_{N-1}=(q/p)^{N-1}f_1 \Rightarrow f_i-f_1=f_1[(q/p)+(q/p)^2+\cdots+(q/p)^{i-1}] \Rightarrow f_i=\frac{1-(q/p)^i}{1-(q/p)^N}$ if $p\ne 1/2$, and $f_i=i/N$ if $p=1/2$. The probability of ending at 0 (ruin) is $1-f_i$.
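A small numerical sketch with assumed parameters $N=10$, $p=0.45$: the closed form for $f_i$ agrees with directly solving the difference equation with its boundary conditions.

```python
import numpy as np

# Assumed parameters for illustration.
N, p = 10, 0.45
q = 1 - p

i = np.arange(N + 1)
f_closed = (1 - (q / p) ** i) / (1 - (q / p) ** N) if p != 0.5 else i / N

# Linear system f_i = p f_{i+1} + q f_{i-1}, with f_0 = 0 and f_N = 1.
A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
A[0, 0] = 1.0                      # f_0 = 0
A[N, N] = 1.0; b[N] = 1.0          # f_N = 1
for k in range(1, N):
    A[k, k] = 1.0
    A[k, k + 1] = -p
    A[k, k - 1] = -q
f_solved = np.linalg.solve(A, b)

print(np.allclose(f_closed, f_solved))   # True
```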
Alarm Clock Problem
There are $n$ alarm clocks with exponential ringing times of rates $b_1,\ldots,b_n$: $P(T_i>t)=\exp(-b_i t)$. Let $T=\min\{T_1,\ldots,T_n\}$; then $P(T\ge t)=P(T_1\ge t)P(T_2\ge t)\cdots P(T_n\ge t)=\exp(-(b_1+\cdots+b_n)t)$. Probability that clock 1 rings first: $P(T_1=T)=\int_0^\infty P(T_2>t,\cdots,T_n>t)f_{T_1}(t)\,dt=\int_0^\infty \exp(-(b_2+\cdots+b_n)t)\,b_1\exp(-b_1 t)\,dt=b_1/(b_1+\cdots+b_n)$.
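A Monte Carlo check with assumed rates $b=(0.5, 1.0, 2.5)$: the empirical chance that clock 1 rings first and the mean of the minimum match $b_1/\sum_i b_i$ and $1/\sum_i b_i$.

```python
import numpy as np

rng = np.random.default_rng(2)
b = np.array([0.5, 1.0, 2.5])                            # assumed rates
T = rng.exponential(1.0 / b, size=(1_000_000, len(b)))   # scale = 1 / rate

print((T.argmin(axis=1) == 0).mean(), b[0] / b.sum())    # both ~ 0.125
print(T.min(axis=1).mean(), 1.0 / b.sum())               # both ~ 0.25
```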
Absolute Value of Simple Random Walk
$S_n=\sum_{k=1}^n X_k$, $P\{X_i=1\}=p$, $P\{X_i=-1\}=1-p=q$. Let $j=\max\{k:0\le k\le n,\,i_k=0\}$ be the last time the walk was at 0. Conditioning on $|S_n|=i,\cdots,|S_1|=i_1$ is the same as conditioning on $|S_n|=i,\cdots,|S_j|=0$, and the two compatible sign choices have probabilities proportional to $P(S_n=i\mid\cdots)\propto p^{(n-j)/2+i/2}q^{(n-j)/2-i/2}$ and $P(S_n=-i\mid\cdots)\propto p^{(n-j)/2-i/2}q^{(n-j)/2+i/2}$.
Normalizing, $P(S_n=i\mid\cdots)=\frac{p^{(n-j)/2+i/2}q^{(n-j)/2-i/2}}{p^{(n-j)/2+i/2}q^{(n-j)/2-i/2}+p^{(n-j)/2-i/2}q^{(n-j)/2+i/2}}=\frac{p^i}{p^i+q^i}$. Therefore $P\{|S_{n+1}|=i+1\mid |S_n|=i,|S_{n-1}|,\cdots,|S_1|\}=P\{S_{n+1}=i+1\mid S_n=i\}\frac{p^i}{p^i+q^i}+P\{S_{n+1}=-(i+1)\mid S_n=-i\}\frac{q^i}{p^i+q^i}=\frac{p^{i+1}+q^{i+1}}{p^i+q^i}$, so $P_{i,i+1}=\frac{p^{i+1}+q^{i+1}}{p^i+q^i}=1-P_{i,i-1}$ for $i>0$, and $P_{01}=1$.
Simple Random Walk
$P_{i,i+1}=p=1-P_{i,i-1}$. First, a return to 0 cannot happen in an odd number of steps, i.e. $P_{00}^{2n+1}=0$, while $P_{00}^{2n}=\binom{2n}{n}p^n(1-p)^n=\frac{(2n)!}{n!\,n!}(p(1-p))^n$. By Stirling's formula, $P_{00}^{2n}\sim (4p(1-p))^n/\sqrt{\pi n}$. Therefore the chain is recurrent iff $\sum_{n=1}^\infty (4p(1-p))^n/\sqrt{\pi n}=\infty$, which happens only when $p=1/2$; otherwise it is transient.
Calculate steady state probabilities
Solve the two sets of equations $\pi=\pi P$ and $\sum_j\pi_j=1$ together. One equation in the first set is redundant; after elimination, the system can be written in the following form:
$$\begin{aligned}
\pi_0 &= 0.080\,\pi_0 + 0.632\,\pi_1 + 0.264\,\pi_2 + 0.080\,\pi_3,\\
\pi_1 &= 0.184\,\pi_0 + 0.368\,\pi_1 + 0.368\,\pi_2 + 0.184\,\pi_3,\\
\pi_2 &= 0.368\,\pi_0 + 0.368\,\pi_2 + 0.368\,\pi_3,\\
\pi_3 &= 0.368\,\pi_0 + 0.368\,\pi_3,\\
1 &= \pi_0 + \pi_1 + \pi_2 + \pi_3.
\end{aligned}$$
Then express the other $\pi_i$ in terms of $\pi_0$ one by one, and substitute into the last equation to solve.
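A numerical sketch of the same example: reading the transition matrix $P$ off the equations above ($p_{ij}$ appears as the coefficient of $\pi_i$ in the equation for $\pi_j$) and solving $\pi=\pi P$, $\sum_j\pi_j=1$ by least squares, which handles the redundant equation.

```python
import numpy as np

# Transition matrix reconstructed from the equations above (rows sum to 1).
P = np.array([[0.080, 0.184, 0.368, 0.368],
              [0.632, 0.368, 0.000, 0.000],
              [0.264, 0.368, 0.368, 0.000],
              [0.080, 0.184, 0.368, 0.368]])

A = np.vstack([P.T - np.eye(4), np.ones((1, 4))])   # stationarity + normalization
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi, pi.sum())                                 # stationary distribution
```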
Given $p_{ij}$, find the mean first passage times $\mu_{ij}$
Use $\mu_{ij}=1+\sum_{k\ne j}p_{ik}\mu_{kj}$.
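A small sketch reusing the 4-state $P$ above, with the target state $j=3$ chosen arbitrarily: for fixed $j$, the equations $\mu_{ij}=1+\sum_{k\ne j}p_{ik}\mu_{kj}$ are the linear system $(I-Q)\mu=\mathbf{1}$, where $Q$ is $P$ with row and column $j$ removed.

```python
import numpy as np

P = np.array([[0.080, 0.184, 0.368, 0.368],
              [0.632, 0.368, 0.000, 0.000],
              [0.264, 0.368, 0.368, 0.000],
              [0.080, 0.184, 0.368, 0.368]])
j = 3                                  # assumed target state
keep = [i for i in range(P.shape[0]) if i != j]
Q = P[np.ix_(keep, keep)]
mu = np.linalg.solve(np.eye(len(keep)) - Q, np.ones(len(keep)))
print(dict(zip(keep, mu)))             # mu_{0,3}, mu_{1,3}, mu_{2,3}
```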
Calculating absorption probabilities
Suppose the Markov chain has an absorbing state $k$; find the probability that, starting from some state, the chain is eventually absorbed at $k$. Denote by $f_{ik}$ the probability of starting from $i$ and ending in $k$. The recursive equation is $f_{ik}=\sum_j p_{ij}f_{jk}$ (the probability of going from $i$ to $k$ equals the probability of first stepping from $i$ to $j$ and then going from $j$ to $k$, summed over all $j$), with the boundary conditions $f_{kk}=1$ and $f_{jk}=0$ for any other absorbing state $j$.
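A minimal sketch of solving this recursion directly for an assumed 4-state chain with two absorbing states (2 and 3), targeting $k=2$: impose the boundary conditions and solve the linear system for the remaining $f_{ik}$.

```python
import numpy as np

# Assumed example: states 2 and 3 are absorbing; we want f_{i,2}.
P = np.array([[0.5, 0.2, 0.2, 0.1],
              [0.3, 0.4, 0.1, 0.2],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
k = 2

# f_{ik} = sum_j p_ij f_{jk} for transient i, with f_{kk} = 1 and f_{3k} = 0.
trans = [0, 1]
Q = P[np.ix_(trans, trans)]
b = P[trans, k]                        # contribution of jumping straight to k
f = np.linalg.solve(np.eye(len(trans)) - Q, b)
print(f)                               # f_{0,2}, f_{1,2}
```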