In the realm of stochastic processes and queueing theory, understanding various mathematical models and their applications is essential for tackling real-world problems efficiently. This comprehensive “Stochastic Process Cheat Sheet” provides a valuable resource for grasping the fundamentals of Poisson processes, Markov chains, continuous-time Markov chains, martingales, queueing theory, and more. This guide covers a wide range of topics, from basic definitions to intricate equations and practical examples.
Poisson Process
counting process independent increments: if the numbers of events that occur in disjoint time intervals are independent. $P(N(s+t) - N(s) \mid N(s)) = P(N(s+t) - N(s))$, $P(X_i \mid X_j) = P(X_i)$.
counting process stationary increments: if the distribution of the number of events that occur in any interval of time depends only on the length of the time interval.
Definition 1
The counting process $\{N(t), t \ge 0\}$ is said to be a Poisson process having rate $\lambda$, $\lambda > 0$, if
(i) $N(0) = 0$;
(ii) the process has independent increments;
(iii) the number of events in any interval of length $t$ is Poisson distributed with mean $\lambda t$. That is, for all $s, t \ge 0$, $P\{N(t+s) - N(s) = n\} = e^{-\lambda t}\frac{(\lambda t)^n}{n!}$, $n = 0, 1, \dots$
From (iii), we have $E[N(t)] = \lambda t$.
Definition 2
(i) $N(0) = 0$;
(ii) the process has stationary and independent increments;
(iii) $P\{N(h) = 1\} = \lambda h + o(h)$;
(iv) $P\{N(h) \ge 2\} = o(h)$.
Interarrival Times
Let $X_n$ denote the time between the $(n-1)$st and the $n$th event. $P\{X_1 > t\} = P\{N(t) = 0\} = e^{-\lambda t}$ and $P\{X_2 > t \mid X_1 = s\} = e^{-\lambda t}$: the $X_i$ are i.i.d. exponential r.v.s with mean $1/\lambda$. The arrival time of the $n$th event is $S_n = \sum_{i=1}^{n} X_i$, $n \ge 1$. $S_n$ has a gamma distribution with parameters $n$ and $\lambda$: $P\{S_n \le t\} = P\{N(t) \ge n\} = \sum_{j=n}^{\infty} e^{-\lambda t}\frac{(\lambda t)^j}{j!} \Rightarrow f(t) = \lambda e^{-\lambda t}\frac{(\lambda t)^{n-1}}{(n-1)!}$, $t \ge 0$.
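As a quick empirical check, here is a minimal Python sketch (the rate `lam`, horizon `t_end`, and number of paths are arbitrary illustrative choices, not from the text): it builds $N(t)$ from i.i.d. exponential interarrival times and compares the empirical mean count with $\lambda t$.

```python
# A minimal sketch: build N(t) by summing i.i.d. exponential interarrival
# times X_i with mean 1/lam, then compare empirical E[N(t)] with lam * t.
import numpy as np

rng = np.random.default_rng(0)
lam, t_end, n_paths = 2.0, 10.0, 5000   # arbitrary illustrative values

counts = []
for _ in range(n_paths):
    t, n = 0.0, 0
    while True:
        t += rng.exponential(1.0 / lam)  # X_i ~ Exponential(rate lam)
        if t > t_end:
            break
        n += 1
    counts.append(n)

print("empirical  E[N(t)]:", np.mean(counts))
print("theoretical lam*t :", lam * t_end)
```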
Conditional Distribution of the Arrival Times
Given that exactly one event has occurred by time $t$, the time at which it occurred is uniformly distributed over $(0, t)$: $P\{X_1 < s \mid N(t) = 1\} = s/t$.
Generalization: given that $N(t) = n$, the $n$ arrival times $S_1, \dots, S_n$ have the same distribution as the order statistics corresponding to $n$ independent random variables uniformly distributed on the interval $(0, t)$: $f(t_1, \dots, t_n) = P\{t_i \le S_i \le t_i + h_i,\ i = 1, \dots, n \mid N(t) = n\}/(h_1 h_2 \cdots h_n) = n!/t^n$.
Nonhomogeneous Poisson Process
(i) $N(0) = 0$;
(ii) $\{N(t), t \ge 0\}$ has independent increments;
(iii) $P\{N(t+h) - N(t) \ge 2\} = o(h)$;
(iv) $P\{N(t+h) - N(t) = 1\} = \lambda(t) h + o(h)$.
Let $m(t) = \int_0^t \lambda(s)\,ds$; then $P\{N(t+s) - N(t) = n\} = \exp\{-(m(t+s) - m(t))\}\,\frac{[m(t+s) - m(t)]^n}{n!}$. This is the homogeneous form with $\lambda t$ replaced by $m(t+s) - m(t)$. We can think of the nonhomogeneous process as a random sample from a homogeneous Poisson process of rate $\lambda \ge \lambda(t)$, where an event occurring at time $t$ is kept with probability $\lambda(t)/\lambda$.
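The sampling interpretation above is exactly the thinning algorithm. A minimal sketch, assuming a made-up bounded intensity $\lambda(t) = 1 + \sin t \le 2$ (an example choice, not from the text): sample a rate-2 homogeneous process and keep an event at time $t$ with probability $\lambda(t)/2$.

```python
# A minimal thinning sketch: candidates come from a homogeneous Poisson
# process with rate lam_max >= lambda(t); each candidate at time t is kept
# with probability lambda(t)/lam_max.
import numpy as np

rng = np.random.default_rng(1)

def intensity(t):
    return 1.0 + np.sin(t)                    # lambda(t), bounded by 2

def nonhomogeneous_poisson(t_end, lam_max=2.0):
    events, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)   # candidate event time
        if t > t_end:
            return np.array(events)
        if rng.uniform() < intensity(t) / lam_max:   # accept w.p. lambda(t)/lam_max
            events.append(t)

ev = nonhomogeneous_poisson(20.0)
# E[N(20)] = m(20) = integral of (1 + sin s) ds over [0, 20]
print(len(ev), 20 + (1 - np.cos(20.0)))
```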
Compound Poisson Random Variables
$W = \sum_{i=1}^{N} X_i$, $N \sim \text{Poisson}(\lambda)$.
The moment generating function of $W$ is $E[e^{tW}] = \sum_{n=0}^{\infty} E[e^{tW} \mid N = n]\,P\{N = n\} = \exp\{\lambda(\phi_X(t) - 1)\}$, where $\phi_X(t) = E[e^{t X_i}]$. Hence $E[W] = \lambda E[X]$, $\text{Var}(W) = \lambda E[X^2]$.
Property: $E[W h(W)] = \lambda E[X h(W+X)]$, from which $E[W] = \lambda E[X]$, $E[W^2] = \lambda E[X^2] + \lambda^2 (E[X])^2$, $E[W^3] = \lambda E[X^3] + 3\lambda^2 E[X] E[X^2] + \lambda^3 (E[X])^3$.
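A minimal simulation sketch of the first two moment formulas, taking $X_i \sim \text{Uniform}(0,1)$ as an arbitrary example (so $E[X] = 1/2$, $E[X^2] = 1/3$):

```python
# A minimal sketch checking E[W] = lam*E[X] and Var(W) = lam*E[X^2] for
# W = sum_{i=1}^N X_i, N ~ Poisson(lam); X_i ~ Uniform(0,1) is arbitrary.
import numpy as np

rng = np.random.default_rng(2)
lam, n_samples = 3.0, 100_000

N = rng.poisson(lam, size=n_samples)
W = np.array([rng.uniform(0.0, 1.0, size=n).sum() for n in N])

EX, EX2 = 0.5, 1.0 / 3.0                 # moments of Uniform(0,1)
print(W.mean(), lam * EX)                # E[W]   vs lam * E[X]
print(W.var(), lam * EX2)                # Var(W) vs lam * E[X^2]
```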
Markov Chains
$P_{ij}$ is the one-step transition probability from state $i$ to state $j$, with $\sum_{j=0}^{\infty} P_{ij} = 1$.
Chapman-Kolmogorov Equations
$P_{ij}^n = P\{X_{n+m} = j \mid X_m = i\}$, $n \ge 0$, $i, j \ge 0$. We have $P_{ij}^{n+m} = \sum_{k=0}^{\infty} P_{ik}^n P_{kj}^m$, or in matrix form $P^{(n+m)} = P^{(n)} P^{(m)}$. Proof: $P_{ij}^{n+m} = P\{X_{n+m} = j \mid X_0 = i\} = \sum_{k=0}^{\infty} P\{X_{n+m} = j, X_n = k \mid X_0 = i\} = \sum_{k=0}^{\infty} P\{X_{n+m} = j \mid X_n = k, X_0 = i\}\, P\{X_n = k \mid X_0 = i\} = \dots$
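The matrix form is easy to verify numerically; a minimal sketch on a toy two-state chain (the matrix entries are arbitrary):

```python
# A minimal sketch of the matrix form P^(n+m) = P^(n) P^(m).
import numpy as np

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])            # toy transition matrix

n, m = 3, 5
lhs = np.linalg.matrix_power(P, n + m)                          # P^(n+m)
rhs = np.linalg.matrix_power(P, n) @ np.linalg.matrix_power(P, m)
print(np.allclose(lhs, rhs))                                    # True
```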
accessible: state $j$ is accessible from state $i$ if $\exists n \ge 0$ s.t. $P_{ij}^n > 0$.
communicate: states $i$ and $j$ are accessible to each other. (i) $i \leftrightarrow i$; (ii) if $i \leftrightarrow j$, then $j \leftrightarrow i$; (iii) if $i \leftrightarrow j$ and $j \leftrightarrow k$, then $i \leftrightarrow k$ (Proof: $P_{ik}^{n+m} = \sum_{r=0}^{\infty} P_{ir}^n P_{rk}^m \ge P_{ij}^n P_{jk}^m > 0$).
class: two states that communicate are said to be in the same class.
irreducible: a Markov chain with only one class.
period: state $i$ is said to have period $d$ if $P_{ii}^n = 0$ whenever $n$ is not divisible by $d$, and $d$ is the greatest integer with this property. If $i \leftrightarrow j$, then $d(i) = d(j)$; period is a class property.
aperiodic: a state with period 1 is said to be aperiodic.
first passage time: $f_{ij}^n$ is the probability that, starting in $i$, the first transition into $j$ occurs at time $n$; $f_{ij} = \sum_{n=1}^{\infty} f_{ij}^n$.
In recursive form: $f_{ij}^n = \sum_{k \ne j} p_{ik} f_{kj}^{n-1}$.
If $\sum_{n=1}^{\infty} f_{ij}^n < 1$, there is positive probability of starting from $i$ and never reaching $j$. The expected first passage time from $i$ to $j$ is $\mu_{ij} = \infty$ if $\sum_{n=1}^{\infty} f_{ij}^n < 1$, and $\mu_{ij} = \sum_{n=1}^{\infty} n f_{ij}^n$ if $\sum_{n=1}^{\infty} f_{ij}^n = 1$.
Recursive form: $\mu_{ij} = 1 + \sum_{k \ne j} p_{ik} \mu_{kj}$, as the linear-system sketch below illustrates.
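A minimal sketch of solving this recursion for a fixed target state $j$ (the 3-state matrix is a toy example): zeroing column $j$ of $P$ turns $\mu_{ij} = 1 + \sum_{k \ne j} p_{ik} \mu_{kj}$ into the linear system $(I - \tilde{P})\mu = \mathbf{1}$.

```python
# A minimal sketch: expected first passage times to a fixed state j.
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])       # toy transition matrix
j = 2

A = P.copy()
A[:, j] = 0.0                         # drop transitions into j: sum over k != j
mu = np.linalg.solve(np.eye(3) - A, np.ones(3))
print(mu)                             # mu[i] = expected steps from i to j;
                                      # mu[j] itself is the mean return time mu_jj
```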
recurrent: $f_{jj} = 1$; transient otherwise. State $j$ is recurrent iff $\sum_{n=1}^{\infty} P_{jj}^n = \infty$; recurrence implies that the expected number of visits to the state is infinite: $E[\text{number of visits to } j \mid X_0 = j] = \infty$. In a finite-state Markov chain not all states can be transient. If $i \leftrightarrow j$ and $i$ is recurrent, then $j$ is recurrent; the same holds for transience.
If $i \leftrightarrow j$ and $j$ is recurrent, then $f_{ij} = 1$.
transient: state $i$ is transient iff there exists a state $j$ ($j \ne i$) that is accessible from $i$ but not vice versa, i.e., $i$ is not accessible from $j$. Upon entering a transient state, the process might never return to it. Transience is a class property.
recurrent: not transient; also a class property.
absorbing state: A state that is never left once the process enters that state.
Limit Theorems
Let $\mu_{jj}$ denote the expected number of transitions needed to return to state $j$: $\mu_{jj} = \infty$ if $j$ is transient, and $\mu_{jj} = \sum_{n=1}^{\infty} n f_{jj}^n$ if $j$ is recurrent. Define $\pi_j = \lim_{n \to \infty} P_{jj}^{n d(j)} / d(j)$; then $\mu_{ii} = 1/\pi_i$.
If $i$ and $j$ communicate, then
(i) $P\{\lim_{t \to \infty} N_j(t)/t = 1/\mu_{jj} \mid X_0 = i\} = 1$;
(ii) $\lim_{n \to \infty} \frac{1}{n}\sum_{k=1}^{n} P_{ij}^k = 1/\mu_{jj}$;
(iii) if $j$ is aperiodic, then $\lim_{n \to \infty} P_{ij}^n = 1/\mu_{jj}$;
(iv) if $j$ has period $d$, then $\lim_{n \to \infty} P_{ij}^{nd} = d/\mu_{jj}$.
positive recurrent: $\mu_{jj} < \infty$; a class property; $\pi_j > 0$.
null recurrent: $\mu_{jj} = \infty$; a class property; $\pi_j = 0$.
ergodic: positive recurrent states that are aperiodic are called ergodic states.
stationary: a probability distribution $\{P_j, j \ge 0\}$ is said to be stationary for the Markov chain if $P_j = \sum_{i=0}^{\infty} P_i P_{ij}$, $j \ge 0$. If the initial probability distribution is the stationary distribution, then $X_n$ has the same distribution for all $n$.
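A minimal sketch of computing a stationary distribution by solving $\pi = \pi P$ together with $\sum_j \pi_j = 1$ (the matrix is the same toy example as above):

```python
# A minimal sketch: stationary distribution via a linear solve.
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
n = P.shape[0]

# Replace one balance equation with the normalization sum_j pi_j = 1.
A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
b = np.zeros(n); b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi, pi @ P)                     # pi satisfies pi = pi P
```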
Continuous Time Markov Chains
Markovian property
a) independent of previous states: $P\{X(t+s) = j \mid X(s) = i, X(r) = x(r)\} = P\{X(t+s) = j \mid X(s) = i\}$ for all $r < s$, $t > 0$;
b) independent of the initial time: $P\{X(t+s) = j \mid X(s) = i\} = P\{X(t) = j \mid X(0) = i\}$.
If $P\{X(t+s) = j \mid X(s) = i\}$ is independent of $s$, then the continuous-time Markov chain is said to have stationary or homogeneous transition probabilities.
transition probability function: $p_{ij}(t) = P\{X(t) = j \mid X(0) = i\}$, with $\lim_{t \to 0} p_{ij}(t) = 1$ if $i = j$ and $0$ if $i \ne j$.
Time spent in a state: by the Markov property, $P\{T_i > t+s \mid T_i > s\} = P\{T_i > t\}$ $\Rightarrow$ $P\{T_i \le t\} = 1 - e^{-q_i t}$, where $q_i = 1/E[T_i]$ is the rate of leaving state $i$ per unit time. Denoting by $q_{ij}$ the rate of leaving $i$ to enter $j$, we have $q_i = \sum_{j \ne i} q_{ij}$.
Chapman-Kolmogorov: $p_{ij}(t) = \sum_{k=0}^{M} p_{ik}(s)\, p_{kj}(t-s)$.
steady state: $\lim_{t \to \infty} p_{ij}(t) = \pi_j$, with $\pi_j = \sum_{i=0}^{M} \pi_i p_{ij}(t)$. In practice one uses the simpler balance equations: $\pi_j q_j = \sum_{i \ne j} \pi_i q_{ij}$, $\sum_{j=0}^{M} \pi_j = 1$, solved in the sketch below.
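A minimal sketch of solving the balance equations given off-diagonal rates $q_{ij}$ (the rates are arbitrary): build the generator $G$ with $G_{ii} = -q_i$ and solve $\pi G = 0$ with normalization.

```python
# A minimal sketch: CTMC steady state from the balance equations
# pi_j q_j = sum_{i != j} pi_i q_ij, i.e. pi G = 0 with G the generator.
import numpy as np

Q = np.array([[0.0, 2.0, 1.0],        # toy off-diagonal rates q_ij
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])
G = Q - np.diag(Q.sum(axis=1))        # generator: G_ii = -q_i, G_ij = q_ij
n = G.shape[0]

A = np.vstack([G.T[:-1], np.ones(n)]) # pi G = 0 plus sum_j pi_j = 1
b = np.zeros(n); b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi)
```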
Martingales
Martingales: $\{Z_n, n \ge 1\}$ is a martingale if $E[|Z_n|] < \infty$ for all $n$ and $E[Z_{n+1} \mid Z_1, Z_2, \dots, Z_n] = Z_n$.
Taking expectations gives $E[Z_{n+1}] = E[Z_n]$, and hence $E[Z_n] = E[Z_1]$ for all $n$.
Conditional expectation: $E(aY_1 + bY_2 \mid \mathcal{F}_n) = a E(Y_1 \mid \mathcal{F}_n) + b E(Y_2 \mid \mathcal{F}_n)$. If $Y$ is a function of $X_1, \dots, X_n$, then $E[Y \mid \mathcal{F}_n] = Y$; if $Y$ is an arbitrary r.v. and $Z$ is measurable with respect to $X_1, \dots, X_n$, then $E[YZ \mid \mathcal{F}_n] = Z E[Y \mid \mathcal{F}_n]$.
Sums of independent zero-mean r.v.s are martingales: if the $X_i$ are independent with mean 0, then $Z_n = \sum_{i=1}^{n} X_i$ is a martingale, since $E[Z_{n+1} \mid Z_1, \dots, Z_n] = E[Z_n + X_{n+1} \mid Z_1, \dots, Z_n] = E[Z_n \mid Z_1, \dots, Z_n] + E[X_{n+1} \mid Z_1, \dots, Z_n] = Z_n + E[X_{n+1}] = Z_n$. Note that $E[Z_n \mid Z_1, \dots, Z_n] = Z_n$ because $Z_n$ is given.
Products of independent mean-one r.v.s are martingales: if the $X_i$ are independent with mean 1, then $Z_n = \prod_{i=1}^{n} X_i$ is a martingale, since $E[Z_{n+1} \mid Z_1, \dots, Z_n] = E[Z_n X_{n+1} \mid Z_1, \dots, Z_n] = Z_n E[X_{n+1} \mid Z_1, \dots, Z_n] = Z_n E[X_{n+1}] = Z_n$ (see the sketch below).
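A minimal simulation sketch of the product martingale, with $X_i \sim \text{Uniform}(0, 2)$ (mean 1) as an arbitrary example choice; the path average of $Z_n$ should stay near $E[Z_1] = 1$ for every $n$:

```python
# A minimal sketch: Z_n = prod_{i<=n} X_i with independent X_i, E[X_i] = 1.
import numpy as np

rng = np.random.default_rng(3)
n_steps, n_paths = 10, 100_000

X = rng.uniform(0.0, 2.0, size=(n_paths, n_steps))
Z = X.cumprod(axis=1)                 # Z_n along each simulated path
print(Z.mean(axis=0))                 # each entry should be close to 1
```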
Queueing Theory
A queue is specified by: distribution of interarrival times / distribution of service times / number of servers.
Examples: M/M/s, M/G/1. M = exponential distribution (Markovian); D = degenerate distribution (constant times); $E_k$ = Erlang distribution (shape parameter $k$); G = general distribution (any arbitrary distribution allowed).
Stochastic Process Models
M/G/1 Queue
Interarrival times are exponential. Let $X_n$ denote the number of customers left behind by the $n$th departure, and let $Y_n$ denote the number of customers arriving during the service period of the $(n+1)$st customer. Then $X_{n+1} = X_n - 1 + Y_n$ if $X_n > 0$, and $X_{n+1} = Y_n$ if $X_n = 0$. Since $P\{Y_n = j\} = \int_0^{\infty} e^{-\lambda x}\frac{(\lambda x)^j}{j!}\,dG(x)$, we get $P_{0j} = \int_0^{\infty} e^{-\lambda x}\frac{(\lambda x)^j}{j!}\,dG(x)$, $P_{ij} = \int_0^{\infty} e^{-\lambda x}\frac{(\lambda x)^{j-i+1}}{(j-i+1)!}\,dG(x)$ for $j \ge i-1$, $i \ge 1$, and $P_{ij} = 0$ otherwise. A sketch of this embedded chain follows.
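A minimal sketch for the special case of deterministic service times (M/D/1), where $G$ is a point mass at $d$ and the integral collapses to the Poisson($\lambda d$) pmf; `lam`, `d`, and the truncation `K` are arbitrary choices.

```python
# A minimal sketch of the embedded M/G/1 chain when service time is the
# constant d (M/D/1): a_j = P{Y_n = j} is then the Poisson(lam*d) pmf.
import math
import numpy as np

lam, d, K = 0.8, 1.0, 20                # arrival rate, service time, truncation

a = np.array([math.exp(-lam * d) * (lam * d) ** j / math.factorial(j)
              for j in range(K)])       # a_j = P{Y_n = j}

P = np.zeros((K, K))
P[0, :] = a                             # P_0j = a_j
for i in range(1, K):
    for j in range(i - 1, K):
        if j - i + 1 < K:
            P[i, j] = a[j - i + 1]      # P_ij = a_{j-i+1} for j >= i-1
print(P[:4, :4].round(3))               # rows sum to ~1 up to truncation error
```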
G/M/1 Queue
Let $X_n$ denote the number of customers in the system as seen by the $n$th arrival. $P_{i,i+1-j} = \int_0^{\infty} e^{-\mu t}\frac{(\mu t)^j}{j!}\,dG(t)$, $j = 0, \dots, i$; $P_{i0} = \int_0^{\infty} \sum_{k=i+1}^{\infty} e^{-\mu t}\frac{(\mu t)^k}{k!}\,dG(t)$, $i \ge 0$.
Gambler’s Ruin Problem
$p_{0,0} = p_{N,N} = 1$, $p_{i,i+1} = p = 1 - p_{i,i-1}$.
Solution: let $f_i$ denote the probability that, starting with fortune $i$, the fortune eventually reaches $N$. We obtain $f_i = p f_{i+1} + q f_{i-1}$ $\Rightarrow$ $f_{i+1} - f_i = \frac{q}{p}(f_i - f_{i-1})$, $\dots$, $f_N - f_{N-1} = (q/p)^{N-1} f_1$ $\Rightarrow$ $f_i - f_1 = f_1\left[\frac{q}{p} + \left(\frac{q}{p}\right)^2 + \dots + \left(\frac{q}{p}\right)^{i-1}\right]$ $\Rightarrow$ $f_i = \frac{1 - (q/p)^i}{1 - (q/p)^N}$ if $p \ne 1/2$, and $f_i = i/N$ if $p = 1/2$. The probability of stopping at 0 is $1 - f_i$.
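A minimal Monte Carlo sketch checking the closed form (the values of $p$, $N$, and the starting fortune are arbitrary):

```python
# A minimal sketch: simulate the gambler's ruin chain and compare the
# empirical win probability with f_i = (1 - (q/p)^i) / (1 - (q/p)^N).
import numpy as np

rng = np.random.default_rng(4)
p, N, i0, n_games = 0.55, 10, 4, 20_000
q = 1 - p

wins = 0
for _ in range(n_games):
    i = i0
    while 0 < i < N:                  # play until absorption at 0 or N
        i += 1 if rng.uniform() < p else -1
    wins += (i == N)

f = (1 - (q / p) ** i0) / (1 - (q / p) ** N)
print(wins / n_games, f)
```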
Alarm Clock Problem
$n$ clocks ring at exponentially distributed times with parameters $b_1, \dots, b_n$: $P(T_i > t) = \exp(-b_i t)$. The r.v. $T = \min\{T_1, \dots, T_n\}$ satisfies $P(T \ge t) = P(T_1 \ge t) P(T_2 \ge t) \cdots P(T_n \ge t) = \exp(-(b_1 + \dots + b_n) t)$.
Probability that $T_1$ rings first: $P(T_1 = T) = \int_0^{\infty} P(T_2 > t, \dots, T_n > t)\, f_{T_1}(t)\,dt = \int_0^{\infty} \exp(-(b_2 + \dots + b_n) t)\, b_1 \exp(-b_1 t)\,dt = \frac{b_1}{b_1 + \dots + b_n}$.
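A minimal simulation sketch of this result (the rates $b_i$ are arbitrary): sample the clocks and check how often clock 1 rings first.

```python
# A minimal sketch checking P(T_1 rings first) = b_1 / (b_1 + ... + b_n).
import numpy as np

rng = np.random.default_rng(5)
b = np.array([1.0, 2.0, 3.0])                          # arbitrary rates
T = rng.exponential(1.0 / b, size=(100_000, len(b)))   # T_i ~ Exp(rate b_i)

print((T.argmin(axis=1) == 0).mean(), b[0] / b.sum())
```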
Absolute Value of Simple Random Walk
$S_n = \sum_{i=1}^{n} X_i$, $P\{X_i = 1\} = p$, $P\{X_i = -1\} = 1 - p = q$. Let $j = \max\{k : 0 \le k \le n, i_k = 0\}$; then $P\{S_n = i \mid |S_n| = i, \dots, |S_1| = i_1\} = P\{S_n = i \mid |S_n| = i, \dots, |S_j| = 0\}$. Ending at $+i$ after time $j$ requires $(n-j)/2 + i/2$ up-steps and $(n-j)/2 - i/2$ down-steps, so $P(S_n = i \mid \cdot) \propto p^{(n-j)/2 + i/2} q^{(n-j)/2 - i/2}$ and $P(S_n = -i \mid \cdot) \propto p^{(n-j)/2 - i/2} q^{(n-j)/2 + i/2}$. Normalizing, $P(S_n = i \mid \cdot) = \frac{p^i}{p^i + q^i}$. Therefore $P\{|S_{n+1}| = i+1 \mid |S_n| = i, \dots, |S_1|\} = P\{S_{n+1} = i+1 \mid S_n = i\}\frac{p^i}{p^i + q^i} + P\{S_{n+1} = -(i+1) \mid S_n = -i\}\frac{q^i}{p^i + q^i} = \frac{p^{i+1} + q^{i+1}}{p^i + q^i}$, i.e. $P_{i,i+1} = \frac{p^{i+1} + q^{i+1}}{p^i + q^i} = 1 - P_{i,i-1}$ for $i > 0$, and $P_{01} = 1$.
Simple Random Walk
$P_{i,i+1} = p = 1 - P_{i,i-1}$. First, the number of steps back to 0 cannot be odd, i.e. $P_{00}^{2n+1} = 0$. Next, $P_{00}^{2n} = \binom{2n}{n} p^n (1-p)^n = \frac{(2n)!}{n!\,n!}(p(1-p))^n$. By Stirling's formula, $P_{00}^{2n} \sim \frac{(4p(1-p))^n}{\sqrt{\pi n}}$. Therefore the chain is recurrent iff $\sum_{n=1}^{\infty} \frac{(4p(1-p))^n}{\sqrt{\pi n}} = \infty$, which holds only when $p = 1/2$; otherwise it is transient.
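A minimal numeric sketch comparing the exact $P_{00}^{2n}$ with the Stirling approximation at $p = 1/2$:

```python
# A minimal sketch: exact return probability P_00^{2n} vs the Stirling
# approximation (4p(1-p))^n / sqrt(pi*n), at p = 1/2.
import math

p = 0.5
for n in (5, 50, 500):
    exact = math.comb(2 * n, n) * (p * (1 - p)) ** n
    approx = (4 * p * (1 - p)) ** n / math.sqrt(math.pi * n)
    print(n, exact, approx)           # the ratio tends to 1 as n grows
```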