
## Brute force Bayes for one parameter

Although we talk a lot about conjugate analyses, one doesn't need to restrict oneself to conjugate priors.  Here we illustrate learning about a Poisson mean using a normal prior on the mean $\lambda$.

Suppose I observe the following numbers of fire calls for a particular community in each of six weeks:  0, 0, 1, 3, 2, 2.  If we assume $y_i$, the number of fire calls in week $i$, is Poisson($\lambda$), then the likelihood function is given (up to a proportionality constant) by

$L(\lambda) = \exp(- 6 \lambda) \lambda^8, \, \, \lambda>0$
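
To see where this comes from, note that for independent Poisson counts the likelihood is the product of the individual Poisson probabilities, and factors not involving $\lambda$ can be dropped:

$L(\lambda) = \prod_{i=1}^{6} \frac{e^{-\lambda} \lambda^{y_i}}{y_i!} \propto e^{-6\lambda} \lambda^{\sum_i y_i} = e^{-6\lambda} \lambda^{8},$

since $\sum_i y_i = 0 + 0 + 1 + 3 + 2 + 2 = 8$.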

Suppose I represent my prior beliefs about $\lambda$ by a normal curve with mean 3 and standard deviation 1.  Then the posterior is given (up to an unknown proportionality constant) by

$g(\lambda | data) \propto L(\lambda) \times \exp\left(-\frac{(\lambda - 3)^2}{2}\right)$
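
(For reference, the normal($\mu$, $\sigma$) density is $\frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(\lambda-\mu)^2}{2\sigma^2}\right)$; with $\mu = 3$ and $\sigma = 1$, dropping the normalizing constant $\frac{1}{\sqrt{2\pi}}$ leaves the kernel $\exp(-(\lambda-3)^2/2)$.)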

Here is a simple brute-force method of summarizing this posterior.

1.  Choose a grid of $\lambda$ values that covers the region where the posterior is concentrated (this might take some trial and error).

2.  On this grid, compute the prior, likelihood, and posterior.

3.  Using the R sample function, take a large sample from the grid, where the probability of each grid point is proportional to its likelihood $\times$ prior value.

4.  Summarize this simulated posterior sample to learn about the location of the posterior.

Here’s some R code for this example.  I use the plot function to make sure the grid does cover the posterior.  The vector L contains 1000 draws from the posterior.

```r
lambda <- seq(0, 5, by = 0.1)          # grid of lambda values
like   <- exp(-6 * lambda) * lambda^8  # Poisson likelihood (up to a constant)
prior  <- dnorm(lambda, 3, 1)          # normal(3, 1) prior
post   <- like * prior                 # unnormalized posterior on the grid
plot(lambda, post)                     # check that the grid covers the posterior
L <- sample(lambda, size = 1000, prob = post, replace = TRUE)
plot(density(L))                       # density graph of simulated draws
```
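
As an illustrative follow-up (my own addition, not part of the original example), the simulated draws can be summarized with standard R functions; the seed and the particular summaries chosen below are assumptions on my part. The grid and sampling step are repeated so the block is self-contained:

```r
# Rebuild the grid posterior and draw from it, as above
lambda <- seq(0, 5, by = 0.1)
like   <- exp(-6 * lambda) * lambda^8
prior  <- dnorm(lambda, 3, 1)
post   <- like * prior
set.seed(42)                            # seed chosen arbitrarily, for reproducibility
L <- sample(lambda, size = 1000, prob = post, replace = TRUE)

mean(L)                                 # posterior mean
sd(L)                                   # posterior standard deviation
quantile(L, c(0.05, 0.5, 0.95))         # posterior median and a 90% interval
```

Because sample normalizes the prob weights internally, there is no need to divide post by its sum before sampling.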