Bayesian Thinking

Simple Example of Bayesian Learning using LearnBayes


Today I am giving a short course on Bayesian Computation at my stats meeting, and I thought it might be helpful to illustrate some basic Bayesian computing in case my participants struggle with the more complicated examples.  This post illustrates some new methods in the “development” version of the LearnBayes package.

Okay, suppose we are interested in estimating a binomial proportion p.  My prior for p is beta(3, 10) and I take a sample of size 10, observing 7 successes and 3 failures.
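Since the beta prior is conjugate to the binomial likelihood, the exact posterior is available in closed form: beta(3 + 7, 10 + 3) = beta(10, 13).  This gives a handy benchmark for the approximate methods below; as a quick sketch (my addition, not part of the original workflow):

# exact posterior by conjugacy: beta(a + y, b + n - y) = beta(10, 13)
curve(dbeta(x, 10, 13), from=0, to=1,
      xlab="p", ylab="Density", main="Exact posterior: beta(10, 13)")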

Defining the log posterior

I write a short R function that defines the log posterior.

 
myposterior <- function(p, stuff){
  # log likelihood (binomial) plus log prior (beta): note that log=TRUE is
  # needed in both terms so that the function returns the log posterior
  dbinom(stuff$y, size=stuff$n, prob=p, log=TRUE) +
    dbeta(p, stuff$a, stuff$b, log=TRUE)
}

I define the inputs (the number of successes, the sample size, and the two beta shape parameters) in a list d.

 
d <- list(y=7, n=10, a=3, b=10)

I make sure my log posterior function works:

 
myposterior(.5, d)
[1] -3.276359
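One further check (my addition): because the exact posterior is beta(10, 13), the log posterior should differ from dbeta(p, 10, 13, log=TRUE) by the same constant (the log marginal likelihood) at every value of p:

p <- c(.3, .5, .7)
# all three differences should be equal if myposterior is coded correctly
myposterior(p, d) - dbeta(p, 10, 13, log=TRUE)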

The R package LearnBayes contains several functions for working with my log posterior; I illustrate some basic ones here.

The function laplace will find the posterior mode and associated variance. The value .4 is a guess at the posterior mode (a starting value for the search).

 
library(LearnBayes)
fit <- laplace(myposterior, .4, d)
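The object returned by laplace is a list; in the CRAN release of LearnBayes its components include the mode, the associated variance, and a convergence flag, and I assume the same names carry over to the development version:

fit$mode       # posterior mode found by the optimizer (about 0.429)
sqrt(fit$var)  # approximate posterior standard deviation (about 0.108)
fit$converge   # indicates whether the optimization converged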

In LearnBayes 2.17, I have several methods that will summarize, plot, and simulate from the fit.

Summarize the (approximate) posterior:

 
summary(fit)
Var : Mean = 0.429 SD = 0.108

This tells me the posterior for p is approximately normal with mean 0.429 and standard deviation 0.108.
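These values can be checked against the exact beta(10, 13) posterior (my addition, using the conjugate form noted above):

10 / (10 + 13)                  # exact posterior mean, about 0.435
sqrt(10 * 13 / (23^2 * 24))     # exact posterior SD, about 0.101
qbeta(c(.025, .975), 10, 13)    # exact 95% credible interval

The approximation's mean and standard deviation are close to these exact values.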

Plot the (approximate) posterior:

 
plot(fit)


This displays the normal approximation to the posterior.
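To see how good the approximation is, one could overlay the exact beta(10, 13) density on the fitted normal curve (a sketch of my own, using the mode and var components returned by laplace):

# exact posterior versus the normal approximation from laplace
curve(dbeta(x, 10, 13), from=0, to=1, ylab="Density")
curve(dnorm(x, mean=fit$mode, sd=sqrt(fit$var[1, 1])), add=TRUE, lty=2)
legend("topright", c("exact beta(10, 13)", "normal approximation"), lty=1:2)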

Simulate from the (approximate) posterior:

 
S <- simulate(fit)
hist(S$sample)


These simulated draws are helpful for performing many types of inferences about p.  For example, it would be easy to use these simulated draws to learn about the odds p / (1 – p).
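For instance (a short sketch, relying on the S$sample component shown above):

# posterior draws of the odds p / (1 - p)
odds <- S$sample / (1 - S$sample)
quantile(odds, c(.025, .5, .975))  # posterior median and 95% interval
hist(odds)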
