To illustrate a normal approximation to a posterior, let’s return to the fire call example described in the September 6 post. Here we had Poisson sampling with mean λ, where the observed fire call counts over six periods sum to 8, and λ is assigned a normal prior with mean 3 and standard deviation 1.

1. First we write a short function lpost that computes the logarithm of the posterior. I show the expressions for the log likelihood and the log prior. On the log scale, the log posterior is the log likelihood PLUS the log prior.

lpost = function(lambda){
  loglike = -6*lambda + 8*log(lambda)
  logprior = dnorm(lambda, 3, 1, log=TRUE)
  loglike + logprior
}

2. I plot the normalized version of the posterior below. I first write a short function post that computes the (unnormalized) posterior, use the integrate function to numerically integrate it from 0 to 10, and then use the curve function to display the normalized posterior.

post = function(lambda) exp(lpost(lambda))
C = integrate(post, 0, 10)
curve(exp(lpost(x))/C$value, 0, 5)
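As a small extension of this step (my addition, not from the original post), the same integrate calls can also recover exact summaries of the posterior. The sketch below computes the posterior mean and standard deviation numerically, which gives something concrete to compare the normal approximation against later; lpost is restated so the snippet is self-contained.

```r
# Sketch: exact posterior summaries by numerical integration.
lpost = function(lambda){
  loglike = -6*lambda + 8*log(lambda)
  logprior = dnorm(lambda, 3, 1, log=TRUE)
  loglike + logprior
}
post = function(lambda) exp(lpost(lambda))
C = integrate(post, 0, 10)$value          # normalizing constant
post.mean = integrate(function(x) x * post(x) / C, 0, 10)$value
post.sd = sqrt(integrate(function(x) (x - post.mean)^2 * post(x) / C,
                         0, 10)$value)
```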

3. There is a useful function laplace in the LearnBayes package that conveniently finds the matching normal approximation to the posterior. The function inputs are (1) the function defining the log posterior, and (2) a guess at the posterior mode. Typically the log posterior might depend on data and prior parameters and that would be the last input to laplace (here we are not using that extra input).

library(LearnBayes)
fit = laplace(lpost, 1)
fit
$mode
[1] 1.7
$var
          [,1]
[1,] 0.2653809

The important output here is (1) the mode of the posterior and (2) the corresponding approximation to the posterior variance. By looking at the output, we see that the posterior of λ is approximately normal with mean 1.7 and standard deviation sqrt(0.2654) ≈ 0.52.
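To see what laplace is doing under the hood, here is a sketch that reproduces the same two numbers with base R's optim: maximize the log posterior to find the mode, then invert the negative Hessian at the mode to get the approximate variance. This is my reconstruction of the Laplace method, not code taken from LearnBayes.

```r
lpost = function(lambda){
  loglike = -6*lambda + 8*log(lambda)
  logprior = dnorm(lambda, 3, 1, log=TRUE)
  loglike + logprior
}
# fnscale = -1 turns optim into a maximizer; hessian = TRUE returns the
# second derivative of the log posterior at the maximum
out = optim(1, lpost, method="BFGS", hessian=TRUE,
            control=list(fnscale=-1))
post.mode = out$par               # matches fit$mode, about 1.70
post.var = -1/out$hessian[1,1]    # matches fit$var, about 0.265
```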

4. To check the accuracy of this approximation, we use the curve function to add the matching normal density to the plot. (I have also added a legend using the legend function.) Note that the normal approximation is pretty accurate in this particular case.

curve(dnorm(x, fit$mode, sqrt(fit$var)), add=TRUE, col="red")
legend("topright", legend=c("Exact", "Normal Approximation"),
       lty=1, col=c("black", "red"))
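As a numerical complement to the visual check above (again my addition, under the same setup), one can measure the largest pointwise gap between the exact normalized posterior and its normal approximation on a grid. The mode and variance are recomputed here with optim so the snippet does not require LearnBayes.

```r
lpost = function(lambda){
  loglike = -6*lambda + 8*log(lambda)
  logprior = dnorm(lambda, 3, 1, log=TRUE)
  loglike + logprior
}
C = integrate(function(x) exp(lpost(x)), 0, 10)$value
fit2 = optim(1, lpost, method="BFGS", hessian=TRUE,
             control=list(fnscale=-1))
grid = seq(0.01, 5, length.out=500)
exact = exp(lpost(grid)) / C                       # exact posterior density
approx = dnorm(grid, fit2$par, sqrt(-1/fit2$hessian[1,1]))
max.gap = max(abs(exact - approx))   # small relative to the peak density
```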