
Learning about an exponential rate and location

To illustrate MCMC fitting for a two-parameter problem, suppose one observes a sample y_1, ..., y_n from an exponential density with location \mu and rate \beta:

f(y) = \beta \exp(-\beta(y - \mu)), \quad y \ge \mu.

If we assume a uniform prior on (\mu, \beta), then the posterior density is given by

g(\mu, \beta | y) \propto \prod f(y_i),

where -\infty < \mu < \min y_i, \beta > 0.
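Written out, the log posterior is, up to an additive constant,

\log g(\mu, \beta | y) = n \log \beta - \beta \sum (y_i - \mu), \quad \mu < \min y_i, \ \beta > 0,

and this is the quantity that the logpost function below evaluates.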

Suppose we observe the following data:

[1]  5.1  5.3  7.1  6.5  3.2  5.2  6.4  9.2 11.4  3.1  3.0  9.4  3.6  5.7  5.0
[16]  3.6  3.1 11.5  3.5  5.5

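To follow along in R, load the LearnBayes package and enter the data as a vector; I call it y, matching the argument used in the functions below.

library(LearnBayes)
y = c(5.1, 5.3, 7.1, 6.5, 3.2, 5.2, 6.4, 9.2, 11.4, 3.1,
      3.0, 9.4, 3.6, 5.7, 5.0, 3.6, 3.1, 11.5, 3.5, 5.5)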

1.  First we write a function logpost that computes the log posterior of (\mu, \beta).  Since \beta > 0, we have to be careful when nonpositive values are input for \beta.  In the ifelse function, we return a very large negative value when \beta \le 0.

# log posterior of theta = (mu, beta); returns a huge negative value when beta <= 0
logpost = function(theta, y)
  ifelse(theta[2] > 0, sum(dexp(y - theta[1], theta[2], log = TRUE)), -9e200)
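As a quick sanity check (just a sketch; the test point here is arbitrary), the function can be evaluated at a value of (\mu, \beta) inside the support:

# evaluate the log posterior at mu = 2, beta = 0.3 (an arbitrary test point)
logpost(c(2, 0.3), y)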

2.  Using the mycontour function in the LearnBayes package, we construct a contour plot of the posterior — we add the line \mu = \min y to show the boundary for \mu.

mycontour(logpost, c(1.5, 3.5, .1, .8), y, xlab = "MU", ylab = "BETA", col = "red")
abline(v = min(y))

[Figure: contour plot of the posterior of (\mu, \beta), with a vertical line at \mu = \min y_i]

3.  We'll try a random walk Metropolis algorithm where a candidate value is found by
(\mu, \beta)^c = (\mu, \beta)^i + scale \times Z,
where Z is normal with mean 0 and variance-covariance matrix V.  Here it is not obvious how to choose scale and V, since a normal approximation won't work for this posterior.  Looking at the graph, it seems that most of the probability for \mu is in (2, 3) and most of the probability for \beta is in (0.1, 0.6).  If we use the rough rule of thumb "4 sd = range of data", I get estimates for the two standard deviations, and these can be squared to get estimates of the variances (the arithmetic is sketched below).  I obtain the estimate V = diag(.06, .015).  After some trial and error, it seems that the value scale = 1 works well; the acceptance rate is 25%.
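Here is a quick sketch of that arithmetic; the interval endpoints are just the ranges eyeballed from the contour plot above.

# rule of thumb: sd ~ range / 4, so variance ~ (range / 4)^2
((3 - 2) / 4)^2          # 0.0625, rounded to .06 for mu
((0.6 - 0.1) / 4)^2      # 0.015625, rounded to .015 for beta
V = diag(c(.06, .015))   # proposal variance-covariance matrix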
Here are the arguments for the function rwmetrop; I show the simulated draws on top of the contour plot below.  It looks like a reasonable sample from the posterior.
# random walk Metropolis: proposal var-cov V and scale 1, start at (2, .3), 10000 draws
fit = rwmetrop(logpost, list(var = diag(c(.06, .015)), scale = 1), c(2, .3), 10000, y)
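To reproduce the overlay and check the acceptance rate, one can use the components returned by rwmetrop; I'm assuming the usual LearnBayes return values here, with the matrix of simulated draws in fit$par and the acceptance rate in fit$accept.

# redraw the contour plot and overlay the simulated draws
mycontour(logpost, c(1.5, 3.5, .1, .8), y, xlab = "MU", ylab = "BETA", col = "red")
points(fit$par[, 1], fit$par[, 2])
fit$accept   # acceptance rate, roughly 0.25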
[Figure: simulated draws from rwmetrop plotted on top of the contour plot of the posterior]