## Learning from the extremes – part 3

Continuing our selected data example, suppose we want to fit our Bayesian model using an MCMC algorithm. As described in class, the Metropolis-Hastings random walk algorithm is a convenient MCMC method for sampling from this posterior density. Let's walk through the steps of doing this using LearnBayes.

1. As before, we write a function minmaxpost that contains the definition of the log posterior. (See an earlier post for this function.)

2. To get an initial idea about the location of (μ, log σ), we use the laplace function to obtain an estimate of the posterior mode and the associated variance-covariance matrix.

```r
library(LearnBayes)

data = list(n=10, min=52, max=84)
fit = laplace(minmaxpost, c(70, 2), data)
mo = fit$mode
v = fit$var
```

Here mo is a vector with the posterior mode and v is a matrix containing the associated var-cov matrix.
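For readers following along outside of R, here is a rough Python sketch of these two steps. The log posterior below is one plausible definition consistent with the setup in the earlier post (joint density of the min and max of n normal draws, flat prior on (μ, log σ)), and `laplace_fit` is a crude analogue of LearnBayes's `laplace`; the function names and numerical details here are mine, not the package's.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def minmaxpost(theta, data):
    """Log posterior of theta = (mu, log sigma) given only the sample
    extremes: density of the (min, max) of n normal draws, up to a
    constant, with a flat prior on (mu, log sigma).  (A sketch of the
    model described in the earlier post, not the package's code.)"""
    mu, sigma = theta[0], np.exp(theta[1])
    n, lo, hi = data["n"], data["min"], data["max"]
    return (norm.logpdf(lo, mu, sigma) + norm.logpdf(hi, mu, sigma)
            + (n - 2) * np.log(norm.cdf(hi, mu, sigma) - norm.cdf(lo, mu, sigma)))

def laplace_fit(logpost, start, data, eps=1e-4):
    """Crude analogue of LearnBayes::laplace: locate the posterior mode,
    then approximate the var-cov matrix by inverting the negative Hessian,
    estimated here with central finite differences."""
    res = minimize(lambda t: -logpost(t, data), np.asarray(start, float),
                   method="Nelder-Mead", options={"xatol": 1e-8, "fatol": 1e-8})
    mode, k = res.x, res.x.size
    H = np.zeros((k, k))
    E = np.eye(k) * eps
    for i in range(k):
        for j in range(k):
            H[i, j] = (logpost(mode + E[i] + E[j], data)
                       - logpost(mode + E[i] - E[j], data)
                       - logpost(mode - E[i] + E[j], data)
                       + logpost(mode - E[i] - E[j], data)) / (4 * eps**2)
    return mode, np.linalg.inv(-H)

data = {"n": 10, "min": 52, "max": 84}
mo, v = laplace_fit(minmaxpost, (70, 2), data)
```

By the symmetry of the normal model, the mode of μ should land at the midpoint of the two extremes, (52 + 84)/2 = 68.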

Now we are ready to use the rwmetrop function that implements the M-H random walk algorithm. There are five inputs: (1) the function defining the log posterior, (2) a list containing var, the estimated var-cov matrix, and scale, the M-H random walk scale constant, (3) the starting value in the Markov chain simulation, (4) the number of iterations of the algorithm, and (5) any data and prior parameters used in the log posterior density.

Here we'll use v as our estimated var-cov matrix, use a scale value of 3, start the simulation at (70, 2), and try 10,000 iterations.

```r
s = rwmetrop(minmaxpost, list(var=v, scale=3), c(70, 2), 10000, data)
```
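To make the algorithm concrete, here is a minimal Python sketch patterned on rwmetrop's interface: at each iteration a candidate is drawn from a normal centred at the current value with covariance scale² × var, and is accepted with probability min(1, posterior ratio). This is an illustration of the random walk M-H algorithm under my own assumptions about the details, not the package's implementation; the demo target is a simple normal with known mean so the output can be checked.

```python
import numpy as np

def rwmetrop(logpost, proposal, start, m, data, seed=0):
    """Sketch of a random-walk Metropolis sampler: proposals are
    multivariate normal, centred at the current value, with covariance
    scale^2 * var.  Returns the draws and the acceptance rate."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(np.atleast_2d(proposal["var"]))
    cur = np.asarray(start, dtype=float)
    cur_lp = logpost(cur, data)
    par = np.empty((m, cur.size))
    accepted = 0
    for i in range(m):
        cand = cur + proposal["scale"] * (L @ rng.standard_normal(cur.size))
        cand_lp = logpost(cand, data)
        # accept with probability min(1, exp(cand_lp - cur_lp))
        if np.log(rng.uniform()) < cand_lp - cur_lp:
            cur, cur_lp = cand, cand_lp
            accepted += 1
        par[i] = cur
    return {"par": par, "accept": accepted / m}

# demo on a target with a known answer: independent N(1, 1) and N(-2, 1)
demo_lp = lambda t, _: -0.5 * ((t[0] - 1)**2 + (t[1] + 2)**2)
out = rwmetrop(demo_lp, {"var": np.eye(2), "scale": 2.5}, (0, 0), 20000, None)
```

The sample mean of the draws should be near (1, -2), and the acceptance rate well below 100%, since a large scale makes many candidates fall in low-posterior regions.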

I display the acceptance rate; here it is about 19%, which is a reasonable value for a random walk algorithm.

```r
> s$accept
[1] 0.1943
```
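The acceptance rate is governed by the scale constant: a tiny scale accepts almost every proposal but moves slowly, while a huge scale proposes wild jumps that are mostly rejected. The toy experiment below shows that trade-off on a standard 2-d normal target; the specific scale values and rates here are illustrative and will differ from those of our posterior.

```python
import numpy as np

def rw_accept_rate(scale, iters=20000, seed=7):
    """Acceptance rate of a random-walk Metropolis sampler on a
    standard bivariate normal target, proposal sd = scale per coordinate."""
    rng = np.random.default_rng(seed)
    cur = np.zeros(2)
    cur_lp = -0.5 * cur @ cur
    accepted = 0
    for _ in range(iters):
        cand = cur + scale * rng.standard_normal(2)
        cand_lp = -0.5 * cand @ cand
        if np.log(rng.uniform()) < cand_lp - cur_lp:
            cur, cur_lp = cand, cand_lp
            accepted += 1
    return accepted / iters

for s in (0.5, 2.5, 10):
    print(s, rw_accept_rate(s))
```

The rate falls steadily as the scale grows, which is why a moderate acceptance rate like the 19% above is usually taken as a sign that the chain is moving, but not too timidly.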

Here we can display the contours of the exact posterior and overlay the simulated draws.

```r
mycontour(minmaxpost, c(45, 95, 1.5, 4), data,
          xlab=expression(mu), ylab=expression(paste("log ", sigma)))
points(s$par)
```

It appears that we have been successful in obtaining a good sample from this posterior distribution.