The Metropolis-Hastings Algorithm

The Metropolis-Hastings algorithm, developed by Metropolis, Rosenbluth, Rosenbluth, Teller, and Teller (1953) and generalized by Hastings (1970), is a Markov chain Monte Carlo method which allows for sampling from a distribution when traditional sampling methods such as transformation or inversion fail. It requires only that the density function can be evaluated; the normalizing constant need not be known. The algorithm is very general and gives rise to the Gibbs sampler as a special case.

Markov Chain Monte Carlo Simulation

Consider a Markov transition kernel $P(x,A)$ for $x \in \mathbb{R}^d$ and $A \in \mathcal{B}$, where $\mathcal{B}$ is the Borel $\sigma$-algebra on $\mathbb{R}^d$. Two major concerns in Markov chain theory are whether there exists an invariant distribution and whether iterating the transition kernel converges to the invariant distribution. The invariant distribution $\pi^*$ satisfies
$$\pi^*(dy) = \int_{\mathbb{R}^d} P(x,dy)\,\pi(x)\,dx,$$
where $\pi$ is the density of $\pi^*$ with respect to Lebesgue measure. Let
$$P^{(n)}(x,A) = \int_{\mathbb{R}^d} P^{(n-1)}(x,dy)\,P(y,A)$$
denote the $n$-th iteration of the transition kernel. We also want $P^{(n)}(x,A)$ to converge to $\pi^*(A)$ as $n \to \infty$; that is, the distribution of the draws generated by the iterations is approximately $\pi^*$ for large $n$.

Markov chain Monte Carlo methods look at the problem from the opposite perspective. The invariant distribution is known: it is the target distribution. The problem is how to generate an appropriate transition kernel with the aforementioned convergence property. Suppose that the transition kernel can be expressed as
$$P(x,dy) = p(x,y)\,dy + r(x)\,\delta_x(dy),$$
where $p(x,x) = 0$, $\delta_x(dy) = 1$ if $x \in dy$ and $0$ otherwise, and $r(x) = 1 - \int_{\mathbb{R}^d} p(x,y)\,dy$. The term $r(x)$ represents the probability of staying at $x$, which may be nonzero. If this kernel satisfies the reversibility condition
\begin{equation}\label{eq:reversibility}
\pi(x)\,p(x,y) = \pi(y)\,p(y,x),
\end{equation}
then $\pi$ is the invariant density of $P(x,\cdot)$. The Metropolis-Hastings algorithm generates a $p(x,y)$ with this property.
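To see why reversibility suffices, integrate the kernel against $\pi$ over an arbitrary set $A$; the following short verification, sketched here for completeness, uses the decomposition of $P(x,dy)$ above together with \eqref{eq:reversibility}:
\begin{align*}
\int_{\mathbb{R}^d} P(x,A)\,\pi(x)\,dx
&= \int_A \left[\int_{\mathbb{R}^d} \pi(x)\,p(x,y)\,dx\right] dy + \int_A r(x)\,\pi(x)\,dx \\
&= \int_A \left[\int_{\mathbb{R}^d} \pi(y)\,p(y,x)\,dx\right] dy + \int_A r(y)\,\pi(y)\,dy \\
&= \int_A \pi(y)\,\bigl[1 - r(y)\bigr]\,dy + \int_A r(y)\,\pi(y)\,dy
 = \int_A \pi(y)\,dy = \pi^*(A),
\end{align*}
so $\pi^*$ is indeed invariant for $P$.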

The Metropolis-Hastings Algorithm

Let $q(x,y)$ denote the candidate-generating density, where $\int_{\mathbb{R}^d} q(x,y)\,dy = 1$. This is essentially the conditional density of $y$ given $x$. This density could potentially be used as the $p(x,y)$ term in the transition kernel, but it may not satisfy \eqref{eq:reversibility}. If, for example, we have
\begin{equation}\label{eq:inequality}
\pi(x)\,q(x,y) > \pi(y)\,q(y,x),
\end{equation}
then we can adjust $q$ by using a probability of move $\alpha(x,y)$. Transitions will be made using
$$p_{\mathrm{MH}}(x,y) \equiv q(x,y)\,\alpha(x,y), \qquad x \neq y.$$

The choice of $\alpha$ follows this logic. If \eqref{eq:inequality} holds, then moves from $x$ to $y$ are happening too often under $q$, while moves from $y$ to $x$ happen too rarely. We should thus choose $\alpha(y,x) = 1$, so that the reverse move is made as often as possible. But then, in order to satisfy \eqref{eq:reversibility}, we must have
$$\pi(x)\,q(x,y)\,\alpha(x,y) = \pi(y)\,q(y,x)\,\alpha(y,x) = \pi(y)\,q(y,x).$$
This implies that
$$\alpha(x,y) = \frac{\pi(y)\,q(y,x)}{\pi(x)\,q(x,y)}.$$
Conversely, we can consider the case when the inequality in \eqref{eq:inequality} is reversed to derive $\alpha(y,x)$.
Thus, to summarize, in order to satisfy reversibility, we set
\begin{equation}\label{eq:alpha}
\alpha(x,y) =
\begin{cases}
\min\!\left[\dfrac{\pi(y)\,q(y,x)}{\pi(x)\,q(x,y)},\, 1\right] & \text{if } \pi(x)\,q(x,y) > 0, \\[1ex]
1 & \text{otherwise.}
\end{cases}
\end{equation}
Hence, the desired transition kernel is
\begin{equation}\label{eq:kernel}
P_{\mathrm{MH}}(x,dy) = q(x,y)\,\alpha(x,y)\,dy + \left[1 - \int_{\mathbb{R}^d} q(x,y)\,\alpha(x,y)\,dy\right]\delta_x(dy).
\end{equation}
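As a concrete illustration (not part of the original text), the probability of move \eqref{eq:alpha} is usually evaluated in log space for numerical stability; in the Python sketch below, log_pi and log_q are hypothetical user-supplied functions returning the log target density (up to an additive constant) and the log candidate-generating density.

    import math

    def acceptance_probability(x, y, log_pi, log_q):
        # alpha(x, y) = min[ pi(y) q(y, x) / (pi(x) q(x, y)), 1 ],
        # computed in log space; any normalizing constant in pi cancels.
        log_ratio = (log_pi(y) + log_q(y, x)) - (log_pi(x) + log_q(x, y))
        return 1.0 if log_ratio >= 0 else math.exp(log_ratio)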

Thus, the Metropolis-Hastings algorithm is defined by the candidate-generating density $q(x,y)$. Note that $\alpha(x,y)$ does not require knowledge of the normalizing constant of $\pi$, because it drops out of the ratio $\pi(y)/\pi(x)$. A special case arises when the candidate-generating density is symmetric, $q(x,y) = q(y,x)$: the probability of move then reduces to $\min[\pi(y)/\pi(x),\,1]$. This special case forms the basis for optimization algorithms such as simulated annealing.
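A quick numerical check of this cancellation, under the assumption of an unnormalized standard normal target and a Gaussian random-walk proposal (both chosen here purely for illustration):

    import math

    def log_pi(x):
        # unnormalized standard normal target: pi(x) proportional to exp(-x^2/2)
        return -0.5 * x * x

    def log_q(x, y):
        # Gaussian random-walk proposal: symmetric, so q(x, y) == q(y, x)
        return -0.5 * (y - x) ** 2

    x, y = 0.0, 1.5
    full = math.exp((log_pi(y) + log_q(y, x)) - (log_pi(x) + log_q(x, y)))
    reduced = math.exp(log_pi(y) - log_pi(x))
    print(math.isclose(full, reduced))  # True: the q terms cancel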

The algorithm proceeds as follows (a code sketch is given after the list): given an initial value $x^{(0)}$, for each $j = 0, 1, \ldots, N-1$:

  1. Draw $y$ from $q(x^{(j)}, \cdot)$ and $u$ from $\mathcal{U}(0,1)$.
  2. If $u \leq \alpha(x^{(j)}, y)$, set $x^{(j+1)} = y$.
  3. Otherwise, set $x^{(j+1)} = x^{(j)}$.
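Below is a minimal sketch of this loop, assuming a Gaussian random-walk proposal (symmetric, so the $q$ terms cancel and $\alpha$ reduces to $\min[\pi(y)/\pi(x),\,1]$) and a deliberately unnormalized target; all function and variable names are illustrative, not from the original text.

    import math
    import random

    def metropolis_hastings(log_pi, x0, n_samples, step=1.0):
        # Random-walk Metropolis: candidate y = x + N(0, step^2) noise.
        # log_pi is the log target density, up to an additive constant.
        x = x0
        samples = []
        for _ in range(n_samples):
            # Step 1: draw y from q(x, .) and u from U(0, 1).
            y = x + random.gauss(0.0, step)
            u = random.random()
            # Step 2: accept with probability min[pi(y)/pi(x), 1].
            log_alpha = log_pi(y) - log_pi(x)
            if log_alpha >= 0 or u < math.exp(log_alpha):
                x = y  # accept: x^(j+1) = y
            # Step 3: otherwise x^(j+1) = x^(j), so nothing to update.
            samples.append(x)
        return samples

    # Example: sample from pi(x) proportional to exp(-x^2 / 2).
    draws = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_samples=10000)
    print(sum(draws) / len(draws))  # sample mean should be near 0

In practice the proposal scale (step above) is tuned so that a moderate fraction of candidates is accepted, and early draws are discarded as burn-in before the chain is treated as approximately distributed according to $\pi$.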

References

  1. N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller. Equation of state calculations by fast computing machines. Journal of Chemical Physics, 21(6):1087-1092, 1953.
  2. W. K. Hastings. Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57(1):97-109, 1970.
