Archive for October, 2009

Learning about a correlation — part III

October 8, 2009

In Chapter 5, we discuss the SIR method of simulating from a posterior density g(\theta).  Like rejection sampling, we find a proposal density p(\theta) that is easy to simulate from and covers the posterior density of interest.  Here’s the algorithm:

1.  We simulate a sample \{\theta_j\}  from the proposal density p(\theta).

2.  We compute weights \{w(\theta_j) = g(\theta_j)/p(\theta_j)\}.

3.  We resample from the \{\theta_j\} with replacement with weights proportional to \{w(\theta_j)\}.

The resampled values are approximately from the distribution of interest g(\theta).
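As a quick illustration of these three steps, here is a minimal hand-rolled sketch (not the LearnBayes code) for a scalar parameter with a t proposal, matching the settings used below; log.g stands for any function returning the log posterior, such as cor.sampling from the earlier posts.

sir.sketch=function(log.g, data, n, m=0.5, v=0.009, df=4)
{
  s = sqrt(v)
  theta = m + s * rt(n, df)                          # step 1: proposal draws
  log.p = dt((theta - m)/s, df, log=TRUE) - log(s)   # log proposal density
  log.w = sapply(theta, log.g, data) - log.p         # step 2: log weights
  w = exp(log.w - max(log.w))                        # guard against underflow
  sample(theta, size=n, replace=TRUE, prob=w)        # step 3: resample
}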

The function sir in the LearnBayes package implements this algorithm when the proposal density is a t density with arbitrary mean, variance and degrees of freedom.

For this example, we showed that a normal density with mean 0.50 and variance 0.009 was a reasonable approximation to the posterior.  So we use a t proposal density with location 0.50, variance 0.009, and 4 degrees of freedom.  We decide to simulate 10,000 values.

The sir function’s arguments are similar to those of rejectsampling: the function defining the log posterior, the parameters of the t proposal, the number of simulated draws, and the data used in the log posterior function.

R=sir(cor.sampling,list(m=.5,var=.009,df=4),10000,dz)

The output is a vector of 10,000 values.  We construct a density plot of the draws and it resembles the display in the previous posting.

plot(density(R),col="blue",lwd=3)

[Figure: density plot of the 10,000 simulated SIR draws]

Categories: Bayesian computation

Learning about a correlation — part II

October 6, 2009

Earlier we considered the problem of learning about a correlation and illustrated a normal approximation.  Here we illustrate rejection sampling to obtain a simulated sample from the exact posterior.

The main task is to find a suitable proposal density p.  We’ll let p be a t density with mean \mu, variance \sigma^2 and small degrees of freedom \nu.

How to choose the parameters?  First, since the posterior is approximately N(0.50, 0.009), we’ll let \mu = 0.50 and \sigma^2 = 0.009.  Also, we’ll choose the degrees of freedom \nu = 4.  (It is a good strategy to have a proposal density with “flat” tails which we achieve by choosing a small number of degrees of freedom.)
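To see how much flatter those tails are, here is a quick numerical check (not from the original post) comparing the two densities at a point about four standard deviations from the center:

s = sqrt(0.009)
dt((0.9 - 0.5)/s, df=4)/s   # t(4) proposal density at rho = 0.9
dnorm(0.9, 0.5, s)          # N(0.50, 0.009) density at rho = 0.9

The t proposal puts roughly a hundred times more density out there, which is what keeps the ratio g/p from blowing up in the tails.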

Next, we have to find the constant c so that g(\rho | y) \le c p(\rho) for all \rho.  On the log scale, we want to find c such that

\log g(\rho|y) \le \log c + \log p(\rho).

I write a small function that computes \log g(\rho | y) - \log p(\rho):

dfunction=function(rho,data)
cor.sampling(rho,data)-dmt(rho,.5,.009,4,log=TRUE)

I plot this function over the range of values of \rho from -1 to 1 — I find that

\log c = -42.9927.
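A sketch of that search over a grid (assuming the data matrix dz from the first post in this series):

rho = seq(-0.99, 0.99, by=0.01)
d = sapply(rho, dfunction, dz)   # log g - log p at each grid point
plot(rho, d, type="l")
max(d)                           # approximately -42.9927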

To see if I have found a suitable covering function, I plot the exact posterior g(\rho | y) and the covering density c p(\rho).  It looks pretty good.
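A sketch of how that comparison plot can be drawn (the posterior here is unnormalized, so only the shapes are comparable):

rho = seq(-0.99, 0.99, by=0.01)
log.g = sapply(rho, cor.sampling, dz)                              # exact log posterior
log.cp = -42.9927 + sapply(rho, dmt, mean=.5, S=.009, df=4, log=TRUE)  # log of c * p(rho)
plot(rho, exp(log.g), type="l", col="red")    # exact posterior
lines(rho, exp(log.cp), col="blue")           # covering density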

[Figure: exact posterior g(\rho | y) and covering density c p(\rho)]

Now we can implement rejection sampling using the LearnBayes function rejectsampling.

R=rejectsampling(cor.sampling,list(m=.5,var=.009,df=4),dmax=-42.9927,10000,dz)

The inputs are (1) the exact log posterior function, (2) the parameters of the t proposal density (mean, variance, df), (3) the value of log c, (4) the number of values we propose, and (5) the data used in the log posterior.
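Internally this amounts to drawing from the t proposal and accepting each draw with probability g(\rho|y)/(c p(\rho)).  A minimal hand-rolled version (not the package code), reusing dfunction from above:

n = 10000
rho.prop = 0.5 + sqrt(0.009)*rt(n, df=4)   # t proposal draws
rho.prop = rho.prop[abs(rho.prop) < 1]     # the posterior is 0 outside (-1, 1)
log.ratio = sapply(rho.prop, dfunction, dz) - (-42.9927)
R.sketch = rho.prop[runif(length(rho.prop)) < exp(log.ratio)]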

Here is a density estimate of the simulated draws we get — this is very close to the exact posterior density.

plot(density(R),col="orange",lwd=3)

[Figure: density estimate of the simulated draws from rejection sampling]
Categories: Bayesian computation

Normal approximation to posteriors

October 6, 2009

Today we talked about normal approximations to posterior distributions.  As an example, I observe the daily hits on my book’s website during each day of a recent month.  I collect the following counts for the weekdays (Monday through Friday):

20,30,22,20,20,17,21,26,22,30,36,15,30,27,22,23,18,24,28,23,12

and the following counts for the weekends (Saturday and Sunday):

7,12,11,12,12,17,18,20,17

We assume that the counts y_1, ..., y_{30} are independent, where y_i is Poisson(\lambda_i) and the \lambda_i satisfy the log linear model

\log \lambda_i = \beta_0 + \beta_1 WEEKDAY_i,

where WEEKDAY_i is 1 if day i is a weekday and 0 if it falls on a weekend.  I place a uniform prior on (\beta_0, \beta_1).

I program the log posterior in the function loglinearpost.  The data matrix has two columns: the first column holds the values of WEEKDAY and the second column holds the counts.

loglinearpost=function(beta, data)
{
  x = data[, 1]   # weekday indicators
  y = data[, 2]   # daily counts
  logf = function(x, y, beta) {
    lp = beta[1] + beta[2] * x      # log mean for each day
    dpois(y, exp(lp), log = TRUE)   # Poisson log likelihoods
  }
  sum(logf(x, y, beta))
}

I put the counts in the vector y (weekend counts first) and the weekday indicators in the vector x, and form the data matrix.

y=c(7,12,11,12,12,17,18,20,17,20,30,22,20,20,17,21,26,22,30,36,15,30,27,22,23,18,24,28,23,12)
x=c(rep(0,9),rep(1,21))
data=cbind(x,y)
I obtain the normal approximation using the laplace function.  The output is the posterior mode and the corresponding variance-covariance matrix.
fit=laplace(loglinearpost,c(0,0),data)
fit
$mode
[1] 2.6390946 0.5025815

$var
            [,1]         [,2]
[1,]  0.00793621 -0.007936210
[2,] -0.00793621  0.009993843
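As a sanity check (not in the original post), the same approximation can be computed directly with optim; and since the prior is uniform, the posterior mode should also agree with the Poisson regression MLE from glm:

neg.logpost=function(beta, data) -loglinearpost(beta, data)
opt = optim(c(0,0), neg.logpost, data=data, hessian=TRUE)
opt$par                          # compare with fit$mode
solve(opt$hessian)               # inverse Hessian at the mode; compare with fit$var
glm(y ~ x, family=poisson)$coef  # MLE, same as the posterior mode here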
Below, using the mycontour function, I show the exact posterior in red and the approximating normal density in blue.  I define a new function mydnorm that computes the log of the multivariate normal density using dmnorm (from the mnormt package).  The two sets of contours are very close, indicating that the normal approximation is very accurate in this case.
mycontour(loglinearpost,c(2.2,3,.1,.9),data,col="red",
  xlab="BETA0",ylab="BETA1")
mydnorm=function(y,d)
  dmnorm(y,d$mu,d$sigma,log=TRUE)
mycontour(mydnorm,c(2.2,3,.1,.9),
  list(mu=fit$mode,sigma=fit$var),col="blue",add=TRUE)
[Figure: contours of the exact posterior (red) and the normal approximation (blue)]
Categories: Bayesian computation

Learning about a correlation

October 2, 2009

We’re starting the chapter on Bayesian computation and I thought I’d use a new example to illustrate the basic methods.  I collect the high school GPA and the ACT math score for 50 BGSU freshmen and I’m interested in learning about the correlation.  If I standardize both measurements (by subtracting the mean and dividing by the standard deviation), then I can assume the standardized pairs (x_1, y_1), ..., (x_{50}, y_{50}) are a random sample from a bivariate normal density with 0 means, unit standard deviations, and correlation \rho.  If I assume a uniform prior on \rho, then the posterior density is given by

g(\rho) \propto \prod_{i=1}^{50} \frac{1}{\sqrt{1-\rho^2}}\exp\left(-\frac{1}{2(1-\rho^2)}(x_i^2 - 2\rho x_i y_i + y_i^2)\right).

I begin by writing a short function to compute the log posterior of \rho.  Here data is a matrix of two columns of the standardized x and standardized y.

cor.sampling=function(rho,data)
{
  x = data[, 1]   # standardized first measurement
  y = data[, 2]   # standardized second measurement
  sum(-0.5*log(1-rho^2) - 0.5/(1-rho^2)*(x^2 - 2*rho*x*y + y^2))
}

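One note: the data matrix dz used in the commands below is not constructed in the post.  A hypothetical sketch of how it could be built, assuming gpa and act are vectors holding the 50 raw measurements (names invented here):

gpa.z = scale(gpa)        # standardize: subtract mean, divide by sd
act.z = scale(act)
dz = cbind(gpa.z, act.z)  # two-column matrix expected by cor.sampling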

I display the posterior density below.  Note that I use the sapply function to evaluate the posterior for a vector of values.  Also, I do some standardization of the posterior values so that the total area of the plotted density is equal to one.
r=seq(-.99,.99,by=.01)
gr=sapply(r,cor.sampling,dz)
gr=exp(gr-max(gr))/sum(exp(gr-max(gr)))/0.01
plot(r,gr,type="l",xlim=c(0,1),col="red",lwd=2)
[Figure: exact posterior density of \rho]
We illustrate finding a normal approximation to this density by use of the laplace function in the LearnBayes package.  The arguments are the name of the log posterior, the starting guess at the mode (we use 0), and the matrix of data dz.  The output is the posterior mode and the corresponding estimate of the posterior variance.
fit=laplace(cor.sampling,0,dz)
fit
$mode
[1] 0.5015625

$var
           [,1]
[1,] 0.00912294
So the posterior for \rho is approximately normal with mean 0.50 and variance 0.0091.  Using the curve function we add the normal approximation to the graph.  This approximation looks pretty good, but there is a little room for improvement.
curve(dnorm(x,fit$mode,sqrt(fit$var)),add=TRUE,col="blue",lwd=2)
legend("topleft",c("Exact","Approximation"),col=c("red","blue"),lty=c(1,1),lwd=c(2,2))
[Figure: exact posterior (red) with normal approximation (blue)]
Categories: Bayesian computation