By Z.H. Fu
https://fuzihaofzh.github.io/blog/
The Goal of This Tutorial
The EM (Expectation Maximization) algorithm is a widely used method for fitting models with hidden variables that cannot be observed in the dataset. In this tutorial, we focus on the outline and intuition of the EM algorithm, and study a Gaussian mixture model as an example.
Gaussian Mixture Model
In the Gaussian mixture model, we assume that there are several generators, each of which generates samples from a Gaussian distribution. Each sample (denoted as x(i)=(x1,x2,...,xd)) of our data is drawn from one of the generators. We can only observe the position of each sample x(i); we don't know which generator z(i) generated it. Here, z(i)∈{1,2,...,k} is the ID of the generator for sample i. Each generator j (j∈[1,k]) produces samples from a Gaussian distribution with parameters μj, Σj. We use θ to denote all model parameters, θ=(μ1,μ2,...,μk,Σ1,Σ2,...,Σk,ϕ1,ϕ2,...,ϕk), where ϕj is the probability of selecting the jth generator and ∑_{j=1}^k ϕj = 1. The goal of the Gaussian mixture model is to find the most suitable parameters θ to describe the data.
However, things get complicated, since we don't know which generator z(i) each sample belongs to. People are often confused about the relation between model parameters and hidden variables. The difference is that every sample shares the same model parameters θ, but each sample can have a different hidden variable z(i).
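As a concrete picture of this generative process, here is a minimal sketch, assuming NumPy and hand-picked example parameters (k = 2 generators in 2 dimensions, chosen only for illustration): it first draws the hidden ID z(i) and then draws x(i) from the corresponding Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)

# Example parameters (illustrative only): k = 2 generators in d = 2 dimensions.
phi = np.array([0.3, 0.7])                       # mixing probabilities, sum to 1
mu = np.array([[0.0, 0.0], [4.0, 4.0]])          # means mu_1, mu_2
Sigma = np.array([np.eye(2), 0.5 * np.eye(2)])   # covariances Sigma_1, Sigma_2

m = 500
z = rng.choice(len(phi), size=m, p=phi)          # hidden variable z^(i): which generator
x = np.array([rng.multivariate_normal(mu[z_i], Sigma[z_i]) for z_i in z])  # observed x^(i)

# In the learning problem we only see x; z and theta = (mu, Sigma, phi) are unknown.
```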
General Problem
We abstract the above problem into a more general form. Suppose we have some samples, where each sample i is composed of visible features x(i) and hidden features z(i). Our ultimate goal is to maximize the incomplete log likelihood:
$$\begin{aligned}
\log p(x|\theta) &= \log\int_z p(x,z|\theta)\,dz\\
&= \log\int_z q(z|x)\frac{p(x,z|\theta)}{q(z|x)}\,dz\\
&\ge \int_z q(z|x)\log\frac{p(x,z|\theta)}{q(z|x)}\,dz = L(q,\theta) \quad\text{(Jensen's inequality)}\\
&= \int_z q(z|x)\log p(x,z|\theta)\,dz - \int_z q(z|x)\log q(z|x)\,dz \quad\text{(the second term is not related to }\theta\text{)}
\end{aligned}$$
As shown above, ∫zq(z∣x)logp(x,z∣θ) is called the expected complete log likelihood, and L(q,θ) is a lower bound of the incomplete log likelihood that depends on θ. If we maximize this lower bound, the incomplete log likelihood will also increase.
In the next step, we show how to choose q(z∣x). We claim that when q(z∣x)=p(z∣x,θ), the lower bound is tight, i.e. the inequality becomes an equality. Substituting q(z∣x)=p(z∣x,θ):
$$\begin{aligned}
L(q,\theta) &= \int_z q(z|x)\log\frac{p(x,z|\theta)}{q(z|x)}\,dz\\
&= \int_z p(z|x,\theta)\log\frac{p(x,z|\theta)}{p(z|x,\theta)}\,dz\\
&= \int_z p(z|x,\theta)\log p(x|\theta)\,dz\\
&= \log p(x|\theta)\int_z p(z|x,\theta)\,dz\\
&= \log p(x|\theta)
\end{aligned}$$
The above derivation shows that when q(z∣x)=p(z∣x,θ), Jensen's inequality holds with equality. Once q(z∣x) is determined, the remaining issue is to find a θ that maximizes the expected complete log likelihood. The step of determining q(z∣x) is called the E-step, and the step of maximizing the expected complete log likelihood over θ is called the M-step.
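To make the inequality and the equality condition concrete, here is a small numeric check (my own toy example, not from the original derivation), using a discrete hidden variable with k values and a fixed θ. It compares log p(x∣θ) with L(q,θ) for an arbitrary q and for q(z∣x)=p(z∣x,θ):

```python
import numpy as np

rng = np.random.default_rng(1)

k = 3
# A fixed toy model: p(z) and p(x|z) for a single observed value x (discrete z).
p_z = np.array([0.2, 0.5, 0.3])          # prior over the hidden variable z
p_x_given_z = np.array([0.1, 0.6, 0.3])  # likelihood of the observed x under each z

p_xz = p_z * p_x_given_z                 # joint p(x, z | theta)
log_p_x = np.log(p_xz.sum())             # incomplete log likelihood log p(x | theta)

def lower_bound(q):
    """L(q, theta) = sum_z q(z|x) * log( p(x, z | theta) / q(z|x) )."""
    return np.sum(q * (np.log(p_xz) - np.log(q)))

q_arbitrary = rng.dirichlet(np.ones(k))  # some arbitrary distribution over z
q_posterior = p_xz / p_xz.sum()          # q(z|x) = p(z|x, theta)

print(log_p_x, lower_bound(q_arbitrary))  # the bound stays below log p(x)
print(log_p_x, lower_bound(q_posterior))  # equal when q is the true posterior
```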
Back to the Gaussian Mixture Model
We use the EM algorithm to solve our Gaussian mixture model.
We denote
$$\begin{aligned}
w_j^{(i)} &= p(z^{(i)}=j|x^{(i)};\theta)\\
&= \frac{p(z^{(i)}=j)\,p(x^{(i)}|z^{(i)}=j;\theta)}{\sum_{l=1}^k p(z^{(i)}=l)\,p(x^{(i)}|z^{(i)}=l;\theta)}\\
p(z^{(i)}=j) &= \phi_j
\end{aligned}$$
wj(i) can be viewed as the responsibility that component j takes for ‘explaining’ the observation x(i). ϕj is the prior probability of selecting the jth Gaussian. The lower bound L(q,θ), whose θ-dependent part is the expected complete log likelihood, is:
$$\begin{aligned}
&\sum_{i=1}^m\sum_{z^{(i)}} q(z^{(i)}|x^{(i)})\log\frac{p(x^{(i)},z^{(i)};\theta)}{q(z^{(i)}|x^{(i)})}\\
=&\sum_{i=1}^m\sum_{j=1}^k w_j^{(i)}\log\frac{\frac{1}{(2\pi)^{n/2}|\Sigma_j|^{1/2}}\exp\left(-\frac{1}{2}(x^{(i)}-\mu_j)^T\Sigma_j^{-1}(x^{(i)}-\mu_j)\right)\phi_j}{w_j^{(i)}}
\end{aligned}$$
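In code, the E-step described above is just Bayes' rule applied per sample. Here is a minimal sketch, assuming NumPy/SciPy and the x, phi, mu, Sigma arrays from the earlier sampling sketch:

```python
import numpy as np
from scipy.stats import multivariate_normal

def e_step(x, phi, mu, Sigma):
    """Compute responsibilities w[i, j] = p(z^(i) = j | x^(i); theta)."""
    m, k = x.shape[0], phi.shape[0]
    w = np.zeros((m, k))
    for j in range(k):
        # Numerator of Bayes' rule: phi_j * N(x^(i); mu_j, Sigma_j)
        w[:, j] = phi[j] * multivariate_normal.pdf(x, mean=mu[j], cov=Sigma[j])
    w /= w.sum(axis=1, keepdims=True)  # normalize over j (the denominator)
    return w
```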
We can calculate the derivative of this lower bound with respect to each mean μl:
$$\begin{aligned}
&\nabla_{\mu_l}\sum_{i=1}^m\sum_{j=1}^k w_j^{(i)}\log\frac{\frac{1}{(2\pi)^{n/2}|\Sigma_j|^{1/2}}\exp\left(-\frac{1}{2}(x^{(i)}-\mu_j)^T\Sigma_j^{-1}(x^{(i)}-\mu_j)\right)\phi_j}{w_j^{(i)}}\\
=&-\nabla_{\mu_l}\sum_{i=1}^m\sum_{j=1}^k w_j^{(i)}\frac{1}{2}(x^{(i)}-\mu_j)^T\Sigma_j^{-1}(x^{(i)}-\mu_j)\\
=&\sum_{i=1}^m w_l^{(i)}\Sigma_l^{-1}(x^{(i)}-\mu_l)
\end{aligned}$$
Setting this derivative equal to 0, we get:
$$\mu_l = \frac{\sum_{i=1}^m w_l^{(i)}x^{(i)}}{\sum_{i=1}^m w_l^{(i)}}$$
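In code this update is simply a responsibility-weighted average of the samples; a minimal NumPy sketch, reusing the e_step function and arrays from the sketches above:

```python
# mu_l = sum_i w[i, l] x^(i) / sum_i w[i, l], computed for all l at once.
w = e_step(x, phi, mu, Sigma)                # responsibilities from the E-step sketch
mu_new = (w.T @ x) / w.sum(axis=0)[:, None]  # row l is the updated mu_l
```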
The other parameters in θ can also be obtained analytically in the same way; the resulting updates are given below.
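For completeness, the updates for the remaining parameters (standard results for the Gaussian mixture M-step, stated here without derivation) are:

$$\phi_j = \frac{1}{m}\sum_{i=1}^m w_j^{(i)}, \qquad \Sigma_j = \frac{\sum_{i=1}^m w_j^{(i)}(x^{(i)}-\mu_j)(x^{(i)}-\mu_j)^T}{\sum_{i=1}^m w_j^{(i)}}$$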
Summary
- The EM algorithm can be used to find the parameters of models whose data contain hidden (unobserved) variables.
- The EM algorithm constructs a lower bound on the incomplete log likelihood and maximizes this lower bound to increase the original incomplete log likelihood.
- The EM algorithm has two steps, the E-step and the M-step. In the E-step, we set q(z∣x) to the best estimate p(z∣x,θ). In the M-step, we maximize the expected complete log likelihood over θ by setting its derivative with respect to each parameter to 0. (A compact end-to-end sketch combining both steps follows this list.)
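The sketch below consolidates the pieces above into one EM loop for the Gaussian mixture model. It is my own minimal implementation under the notation of this tutorial, assuming NumPy/SciPy; the initialization scheme and the small regularization term added to the covariances are illustrative choices, not part of the derivation.

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(x, k, n_iter=100, seed=0):
    """Fit a k-component Gaussian mixture to x (shape (m, d)) with EM."""
    rng = np.random.default_rng(seed)
    m, d = x.shape
    # Simple initialization: uniform weights, random sample means, shared data covariance.
    phi = np.full(k, 1.0 / k)
    mu = x[rng.choice(m, size=k, replace=False)]
    Sigma = np.array([np.cov(x.T) + 1e-6 * np.eye(d) for _ in range(k)])

    for _ in range(n_iter):
        # E-step: responsibilities w[i, j] = p(z^(i) = j | x^(i); theta).
        w = np.column_stack([
            phi[j] * multivariate_normal.pdf(x, mean=mu[j], cov=Sigma[j])
            for j in range(k)
        ])
        w /= w.sum(axis=1, keepdims=True)

        # M-step: maximize the expected complete log likelihood over theta.
        Nj = w.sum(axis=0)                 # effective number of points per component
        phi = Nj / m
        mu = (w.T @ x) / Nj[:, None]
        for j in range(k):
            diff = x - mu[j]
            Sigma[j] = (w[:, j, None] * diff).T @ diff / Nj[j] + 1e-6 * np.eye(d)

    return phi, mu, Sigma
```

For example, `phi_hat, mu_hat, Sigma_hat = em_gmm(x, k=2)` run on the data from the sampling sketch should recover parameters close to the ones used to generate it (up to a permutation of the components).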
References
- Ng, Andrew. “CS229 Lecture Notes.” CS229 Lecture Notes 1.1 (2000): 1-3.
- Wang, Xiaogang. “ENGG5202 Lecture Notes.” Chapter 4, Homework 1-3.
- Bishop, Christopher M. Pattern Recognition and Machine Learning. Springer, 2006.