XBeta Wiki
Metropolis-Hastings algorithm

The Metropolis-Hastings algorithm, developed by Metropolis, Rosenbluth, Rosenbluth, Teller, and Teller (1953) and generalized by Hastings (1970), is a Markov chain Monte Carlo method which allows for sampling from a distribution when traditional sampling methods such as transformation or inversion fail. It requires only being able to evaluate the density function. The normalizing constant need not be known. This algorithm is very general and gives rise to the Gibbs sampler as a special case.

Markov Chain Monte Carlo Simulation

Consider a Markov transition kernel $P(x, A)$ for $x \in \mathbb{R}^d$ and $A \in \mathcal{B}$, where $\mathcal{B}$ is the Borel $\sigma$-algebra on $\mathbb{R}^d$. Two major concerns in Markov chain theory are whether an invariant distribution exists and whether iterating the transition kernel converges to the invariant distribution. The invariant distribution $\pi^*$ satisfies

$$\pi^*(dy) = \int_{\mathbb{R}^d} P(x, dy)\,\pi(x)\,dx,$$

where $\pi$ is the density of $\pi^*$ with respect to Lebesgue measure. Let

$$P^{(n)}(x, A) = \int_{\mathbb{R}^d} P^{(n-1)}(x, dy)\,P(y, A)$$

denote the $n$-th iteration of the transition kernel. We also want $P^{(n)}(x, A)$ to converge to $\pi^*$ as $n \to \infty$; that is, the distribution of the draws generated by the iterations is approximately $\pi^*$.

Markov chain Monte Carlo methods look at the problem from the opposite perspective. The invariant distribution is known: it is the target distribution. The problem is how to generate an appropriate transition kernel with the aforementioned convergence property. Suppose that the transition kernel can be expressed as

$$P(x, dy) = p(x, y)\,dy + r(x)\,\delta_x(dy),$$

where $p(x, x) = 0$, $\delta_x(dy) = 1$ if $x \in dy$ and $0$ otherwise, and define $r(x) = 1 - \int_{\mathbb{R}^d} p(x, y)\,dy$. The term $r(x)$ represents the probability of staying at $x$, which may be nonzero. If this kernel satisfies the reversibility condition

$$\pi(x)\,p(x, y) = \pi(y)\,p(y, x), \tag{1}$$

then $\pi^*$ is the invariant distribution of $P(x, \cdot)$. The Metropolis-Hastings algorithm generates a $p(x, y)$ with this property.
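To see why, one can integrate the kernel against $\pi$ over any set $A \in \mathcal{B}$ and apply (1), along the lines of the argument in Chib and Greenberg (1995):

$$\begin{aligned}
\int_{\mathbb{R}^d} P(x, A)\,\pi(x)\,dx
&= \int_A \int_{\mathbb{R}^d} \pi(x)\,p(x, y)\,dx\,dy + \int_A r(x)\,\pi(x)\,dx \\
&= \int_A \int_{\mathbb{R}^d} \pi(y)\,p(y, x)\,dx\,dy + \int_A r(y)\,\pi(y)\,dy \\
&= \int_A \pi(y)\,\bigl[1 - r(y)\bigr]\,dy + \int_A r(y)\,\pi(y)\,dy
= \int_A \pi(y)\,dy = \pi^*(A).
\end{aligned}$$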

The Metropolis-Hastings Algorithm

Let $q(x, y)$ denote the candidate-generating density, where $\int q(x, y)\,dy = 1$. This is essentially the conditional density of $y$ given $x$. This density could potentially be used as the $p(x, y)$ term in the transition kernel, but it may not satisfy (1). If, for example, we have

$$\pi(x)\,q(x, y) > \pi(y)\,q(y, x), \tag{2}$$

then we can adjust $q$ by using a probability of move $\alpha(x, y)$. Transitions will be made using

$$p_{MH}(x, y) \equiv q(x, y)\,\alpha(x, y)$$

for $x \neq y$.

The choice of $\alpha$ follows this logic. If (2) holds, then moves from $x$ to $y$ happen too often under $q$, while moves from $y$ to $x$ happen too rarely. We should thus accept every proposed move from $y$ to $x$, i.e., choose $\alpha(y, x) = 1$. But then, in order to satisfy (1), we must have

$$\begin{aligned} \pi(x)\,q(x, y)\,\alpha(x, y) &= \pi(y)\,q(y, x)\,\alpha(y, x) \\ &= \pi(y)\,q(y, x). \end{aligned}$$

This implies that

$$\alpha(x, y) = \frac{\pi(y)\,q(y, x)}{\pi(x)\,q(x, y)}.$$

Conversely, considering the case where the inequality in (2) is reversed yields $\alpha(y, x)$, with $\alpha(x, y) = 1$. Thus, to summarize, in order to satisfy reversibility, we set

$$\alpha(x, y) = \begin{cases} \min\left[ \dfrac{\pi(y)\,q(y, x)}{\pi(x)\,q(x, y)},\, 1 \right], & \text{if } \pi(x)\,q(x, y) > 0, \\ 1, & \text{otherwise.} \end{cases} \tag{3}$$

Hence, the desired transition kernel is

$$P_{MH}(x, dy) = q(x, y)\,\alpha(x, y)\,dy + \Bigl[ 1 - \int_{\mathbb{R}^d} q(x, y)\,\alpha(x, y)\,dy \Bigr] \delta_x(dy). \tag{4}$$

Thus, the Metropolis-Hastings algorithm is defined by the candidate-generating density $q(x, y)$. Note that $\alpha(x, y)$ does not require knowledge of the normalizing constant of $\pi$, because it drops out of the ratio $\pi(y)/\pi(x)$. A special case arises when the candidate-generating density is symmetric, $q(x, y) = q(y, x)$, since the probability of move then reduces to $\min[\pi(y)/\pi(x), 1]$. This special case forms the basis for optimization algorithms such as simulated annealing.
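As a concrete illustration, the probability of move in (3) can be computed in log space, which makes the cancellation of the normalizing constant explicit. The sketch below is not taken from the references; `log_pi` (the unnormalized log target density) and `log_q` (the log candidate-generating density, called as `log_q(x, y)` for a move from `x` to `y`) are assumed to be supplied by the user.

```python
import numpy as np

def accept_prob(x, y, log_pi, log_q):
    """Probability of move alpha(x, y) from equation (3), computed in log space."""
    # log of [pi(y) q(y, x)] / [pi(x) q(x, y)]; any normalizing constant of pi cancels.
    log_ratio = (log_pi(y) + log_q(y, x)) - (log_pi(x) + log_q(x, y))
    # min[ratio, 1] taken in log space; the exponent is <= 0, so exp() cannot overflow.
    return np.exp(min(log_ratio, 0.0))
```

For a symmetric candidate-generating density the two `log_q` terms cancel, and only the ratio of target densities remains.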

The algorithm proceeds as follows. Given an initial value $x^{(0)}$, for each $j = 0, 1, \ldots, N-1$ (a Python sketch of these steps follows the list):

  1. Draw $y$ from $q(x^{(j)}, \cdot)$ and $u$ from $U(0, 1)$.
  2. If $u \leq \alpha(x^{(j)}, y)$, set $x^{(j+1)} = y$.
  3. Otherwise, set $x^{(j+1)} = x^{(j)}$.
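The following is a minimal sketch of these steps for the symmetric special case, using a Gaussian random-walk proposal; it is an illustration rather than an implementation from the references. It assumes the target is supplied through a user-defined `log_pi` returning the log of the unnormalized density, and `scale` is an assumed tuning parameter for the proposal standard deviation.

```python
import numpy as np

def metropolis_hastings(log_pi, x0, n_draws, scale=1.0, seed=0):
    """Random-walk Metropolis sampler with a symmetric Gaussian proposal.

    log_pi  : callable returning the log of the (unnormalized) target density.
    x0      : initial value x^(0), array-like.
    n_draws : number of iterations N.
    scale   : standard deviation of the Gaussian random-walk proposal.
    """
    rng = np.random.default_rng(seed)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    draws = np.empty((n_draws, x.size))
    log_pi_x = log_pi(x)
    for j in range(n_draws):
        # 1. Draw y from q(x^(j), .) and u from U(0, 1).
        y = x + scale * rng.standard_normal(x.size)
        u = rng.uniform()
        log_pi_y = log_pi(y)
        # 2.-3. With a symmetric q, alpha reduces to min[pi(y)/pi(x), 1];
        #        accept if u <= alpha, otherwise stay at x^(j).
        if np.log(u) <= log_pi_y - log_pi_x:
            x, log_pi_x = y, log_pi_y
        draws[j] = x
    return draws

# Example: sample from a standard normal specified only up to its normalizing constant.
draws = metropolis_hastings(lambda z: -0.5 * float(z @ z), x0=np.zeros(1),
                            n_draws=5000, scale=1.0)
print(draws[1000:].mean(), draws[1000:].std())  # roughly 0 and 1 after a burn-in period
```

In practice the proposal scale is tuned so that a reasonable fraction of moves is accepted, and an initial portion of the chain is discarded as burn-in before the draws are used.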

References

  • Chib, S. and E. Greenberg (1995): “Understanding the Metropolis-Hastings Algorithm,” The American Statistician, 49, 327–335.

  • Hastings, W.K. (1970): “Monte Carlo Sampling Methods Using Markov Chains and Their Applications,” Biometrika, 57, 97–109.

  • Metropolis, N., A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller, and E. Teller (1953): “Equation of State Calculations by Fast Computing Machines,” Journal of Chemical Physics, 21, 1087–1092.

  • Robert, C.P., and G. Casella (2004): Monte Carlo Statistical Methods, Second Edition. New York: Springer.

category: Statistics, Sampling