
Cell phone story — part 2

October 3, 2011

It is relatively easy to set up a Gibbs sampling algorithm for the normal sampling problem when independent conjugate priors are assigned to the mean and precision.  Here we outline how to do this in R.

We start with an expression for the joint posterior of the mean \mu and the precision P:

g(\mu, P | y) \propto P^{n/2} \exp\{-\frac{P}{2}[S + n(\mu - \bar y)^2]\} \times \exp\{-\frac{(\mu - \mu_0)^2}{2 \tau_0^2}\} \times P^{a-1} \exp(-bP)

(Here S is the sum of squares of the observations about the sample mean \bar y, the mean \mu has a N(\mu_0, \tau_0) prior, and the precision P has a gamma prior with shape a and rate b.)

1.  To start, we recognize the two conditional distributions.

  • The posterior of \mu given P is given by the usual updating formula for a normal mean and a normal prior.  (Essentially this formula says that the posterior precision is the sum of the prior and data precisions, and the posterior mean is a weighted average of the prior mean and the sample mean, where the weights are proportional to the corresponding precisions.)
  • The posterior of P given \mu has a gamma form, where the shape is a + n/2 and the rate b + S/2 + n(\mu - \bar y)^2/2 is easy to pick up from the joint posterior.

2.  Now we’re ready to use R.  I’ve written a short function that implements a single Gibbs sampling cycle.  To understand the code, here are the variables:

– ybar is the sample mean, S is the sum of squares about the mean, and n is the sample size
– the prior parameters are (mu0, tau0) for the normal prior on the mean and (a, b) for the gamma prior on the precision
– theta is the current value of (\mu, P)

The function simulates in turn from the conditional distributions [\mu | P] and [P | \mu] and returns a new value of (\mu, P).

one.cycle = function(theta){
  mu = theta[1]; P = theta[2]

  # simulate mu from its conditional distribution [mu | P]
  P1 = 1/tau0^2 + n*P
  mu1 = (mu0/tau0^2 + ybar*n*P) / P1
  tau1 = sqrt(1/P1)
  mu = rnorm(1, mu1, tau1)

  # simulate P from its conditional distribution [P | mu]
  a1 = a + n/2
  b1 = b + S/2 + n/2*(mu - ybar)^2
  P = rgamma(1, a1, b1)

  c(mu, P)
}

All that is left in the programming is some set-up code (bring in the data and define the prior parameters), a starting value, and a loop that collects the simulated draws of (\mu, P) in a matrix.
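
To make this concrete, here is a minimal sketch of that remaining step.  The data vector y is just a simulated placeholder (swap in the real daily counts), the prior values are the ones chosen in the previous post, and each row of the matrix theta.draws is one simulated draw of (\mu, P).

# placeholder data and prior values (illustrative only)
y = round(rnorm(13, 40, 5))               # stand-in for the observed daily counts
ybar = mean(y); S = sum((y - ybar)^2); n = length(y)
mu0 = 40; tau0 = 15                       # N(40, 15) prior on the mean
a = 3; b = 60                             # gamma(3, 60) prior on the precision

# run the Gibbs sampler and collect the draws
m = 10000
theta.draws = matrix(0, m, 2)
theta = c(ybar, 1/var(y))                 # starting value of (mu, P)
for (j in 1:m){
  theta = one.cycle(theta)
  theta.draws[j, ] = theta
}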

Categories: MCMC

Cell phone story

October 2, 2011

I’m interested in learning about the pattern of text message use for my college son.  I pay the monthly cell phone bill and I want to be pretty sure that he won’t exceed his monthly allowance of 5000 messages.

We’ll put this problem in the context of a normal distribution inference problem.  Suppose y, the number of daily text messages (received and sent), is normal with mean \mu and standard deviation \sigma.  We’ll observe y_1, ..., y_{13}, the numbers of text messages in the first 13 days of the billing month.  I’m interested in the predictive probability that the total count of text messages in the next 17 days exceeds 5000.

We’ll talk about this problem in three steps.

  1. First I talk about some prior beliefs about (\mu, \sigma) that we’ll model by independent conjugate priors.
  2.  I’ll discuss the use of Gibbs sampling to simulate from the posterior distribution.
  3. Last, we’ll use the output of the Gibbs sampler to get a prediction interval for the sum of text messages in the next 17 days.

Here we talk about prior beliefs.  To be honest, I didn’t think too long about my beliefs about my son’s text message usage, but here is what I have.

  1. First I assume that my prior beliefs about the mean \mu and standard deviation \sigma of the population of text messages are independent.  This seems reasonable, especially since it is easier to think about each parameter separately.
  2. I’ll use conjugate priors to model beliefs about each parameter.  I believe my son sends and receives, on average, 40 messages per day, but I could easily be off by 15.  So I let \mu \sim N(40, 15).
  3. It is harder to think about my beliefs about the standard deviation \sigma of the text message population.  After some thought, I decide that my prior mean and standard deviation of \sigma are 5 and 2, respectively.  We’ll see shortly that it is convenient to model the precision P = 1/\sigma^2 by a gamma(a, b) distribution.  It turns out that P \sim gamma(3, 60) is a reasonable match to my prior information about \sigma (a quick simulation check follows this list).
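
One quick way to check that gamma(3, 60) choice is to simulate a large number of values of P, transform to \sigma = 1/\sqrt{P}, and confirm that the implied prior mean and standard deviation of \sigma come out near 5 and 2.  Here is a small sketch:

# does P ~ gamma(3, 60) imply a prior on sigma centered near 5 with sd near 2?
P.sim = rgamma(10000, shape = 3, rate = 60)
sigma.sim = 1/sqrt(P.sim)
mean(sigma.sim)   # roughly 5
sd(sigma.sim)     # roughly 2
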
In the next blog posting, I’ll illustrate writing an R script to implement the Gibbs sampling.

Categories: MCMC

Learning from the extremes – part 3

September 29, 2011

Continuing our selected data example, suppose we want to fit our Bayesian model using an MCMC algorithm.  As described in class, the Metropolis-Hastings random walk algorithm is a convenient MCMC algorithm for sampling from this posterior density.  Let’s walk through the steps of doing this using LearnBayes.

1.  As before, we write a function minmaxpost that contains the definition of the log posterior.  (See an earlier post for this function.)

2.  To get some initial ideas about the location of (\mu, \log \sigma), we use the laplace function to get an estimate of the posterior mode and the associated variance-covariance matrix.

data=list(n=10, min=52, max=84)
library(LearnBayes)
fit = laplace(minmaxpost, c(70, 2), data)
mo = fit$mode
v = fit$var

Here mo is a vector with the posterior mode and v is a matrix containing the associated var-cov matrix.

Now we are ready to use the rwmetrop function that implements the M-H random walk algorithm.  There are five inputs:  (1) the function defining the log posterior, (2) a list containing var, the estimated var-cov matrix, and scale, the M-H random walk scale constant, (3) the starting value in the Markov chain simulation, (4) the number of iterations of the algorithm, and (5) any data and prior parameters used in the log posterior density.

Here we’ll use v as our estimated var-cov matrix, use a scale value of 3, start the simulation at (\mu, \log \sigma) = (70, 2) and try 10,000 iterations.

s = rwmetrop(minmaxpost, list(var=v, scale=3), c(70, 2), 10000, data)

I display the acceptance rate; here it is about 19%, which is a reasonable value.

> s$accept
[1] 0.1943

Here we can display the contours of the exact posterior and overlay the simulated draws.

mycontour(minmaxpost, c(45, 95, 1.5, 4), data,
          xlab=expression(mu), ylab=expression(paste("log ",sigma)))
points(s$par)

It seems like we have been successful in getting a good sample from this posterior distribution.
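
Once we are satisfied with the sample, posterior summaries come directly from the draws.  Here is a small sketch (the columns of s$par hold the simulated values of \mu and \log \sigma):

# posterior summaries from the simulated draws
mu.draws = s$par[, 1]
sigma.draws = exp(s$par[, 2])
quantile(mu.draws, c(0.05, 0.5, 0.95))      # median and 90% interval for mu
quantile(sigma.draws, c(0.05, 0.5, 0.95))   # median and 90% interval for sigma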

Categories: MCMC

Learning from the extremes – part 2

September 23, 2011

In the last post, I described a problem with selected data.  You observe the speeds of 10 cars but only record the minimum speed of 52 and the maximum speed of 84.  We want to learn about the mean and standard deviation of the underlying normal distribution.

We’ll work with the parameterization (\mu, \log \sigma) which will give us a better normal approximation.  A standard noninformative prior is uniform on  (\mu, \log \sigma).

1.  First I write a short function minmaxpost that computes the logarithm of the posterior density.  The arguments to this function are \theta = (\mu, \log \sigma) and data which is a list with components n, min, and max.  I’d recommend using the R functions pnorm and dnorm in computing the density — it saves typing errors.

minmaxpost = function(theta, data){
  # theta = (mu, log sigma); data is a list with components n, min, max
  mu = theta[1]
  sigma = exp(theta[2])
  # log joint density of the sample minimum and maximum
  dnorm(data$min, mu, sigma, log=TRUE) +
    dnorm(data$max, mu, sigma, log=TRUE) +
    (data$n - 2)*log(pnorm(data$max, mu, sigma) - pnorm(data$min, mu, sigma))
}

2.  Then I use the function laplace in the LearnBayes package to summarize this posterior.  The arguments to laplace are the name of the log posterior function, an initial estimate of \theta, and the data that is used in the log posterior function.

data=list(n=10, min=52, max=84)
library(LearnBayes)
fit = laplace(minmaxpost, c(70, 2), data)

3.  The output of laplace includes mode, the posterior mode, and var, the corresponding estimate of the variance-covariance matrix.

fit
$mode
[1] 67.999960  2.298369

$var
              [,1]          [,2]
[1,]  1.920690e+01 -1.900688e-06
[2,] -1.900688e-06  6.031533e-02

4.  I demonstrate below that we obtain a pretty good approximation in this situation.   I use the mycontour function to display contours of the exact posterior and overlay the matching normal approximation using a second application of mycontour.

mycontour(minmaxpost, c(45, 95, 1.5, 4), data,
          xlab=expression(mu), ylab=expression(paste("log ",sigma)))
mycontour(lbinorm, c(45, 95, 1.5, 4),
          list(m=fit$mode, v=fit$var), add=TRUE, col="red")
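
If we also want simulated draws from this normal approximation (say, to compare with the MCMC output in the next post), one short sketch is to sample from the fitted bivariate normal, for instance with mvrnorm from the MASS package:

# simulate 1000 draws from the bivariate normal approximation
library(MASS)
sim.approx = mvrnorm(1000, mu = fit$mode, Sigma = fit$var)
points(sim.approx, pch = ".", col = "blue")   # overlay the draws on the contour plot
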
Categories: MCMC

Learning from the extremes

September 22, 2011

Here is an interesting problem with “selected data”.  Suppose you are measuring the speeds of cars driving on an interstate.  You assume the speeds are normally distributed with mean \mu and standard deviation \sigma.  You see 10 cars pass by and you only record the minimum and maximum speeds.  What have you learned about the normal parameters?

First we’ll describe the construction of the likelihood function.  We’ll combine the likelihood with the standard noninformative prior for a mean and standard deviation.   Then we’ll illustrate the use of a normal approximation to learn about the parameters.

Here we focus on the construction of the likelihood.  Given values of the normal parameters, what is the probability of observing minimum = x and the maximum = y in a sample of size n?

Essentially we’re looking for the joint density of two order statistics, which is a standard result.  Let f and F denote the density and cdf of a normal distribution with mean \mu and standard deviation \sigma.  Then the joint density of (x, y) is given by

f(x, y | \mu, \sigma) \propto f(x) f(y) [F(y) - F(x)]^{n-2}, x < y

After we observe data, the likelihood is this sampling density viewed as a function of the parameters.  Suppose we take a sample of size 10 and we observe x = 52, y = 84.  Then the likelihood is given by

L(\mu, \sigma) \propto f(52) f(84) [F(84) - F(52)]^{8}
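
As a small sketch, this likelihood is easy to evaluate directly in R (the function name like and the trial values below are just for illustration):

# likelihood of (mu, sigma) given min = 52, max = 84 in a sample of size 10
like = function(mu, sigma){
  dnorm(52, mu, sigma) * dnorm(84, mu, sigma) *
    (pnorm(84, mu, sigma) - pnorm(52, mu, sigma))^8
}
like(70, 10)   # evaluate at, say, mu = 70 and sigma = 10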

In the next blog posting, I’ll describe how to summarize this posterior by a normal approximation in LearnBayes.

Categories: MCMC

Normal approximation to posterior

September 22, 2011

To illustrate a normal approximation to a posterior, let’s return to the fire call example described in the September 6 post.  Here we had Poisson sampling with mean \lambda and a normal prior on \lambda.

1.  First we write a short function lpost that computes the logarithm of the posterior.  I show the expressions for the log likelihood and the log prior.  On the log scale, the log posterior is the log likelihood PLUS the log prior.

lpost = function(lambda){
  loglike = -6*lambda + 8*log(lambda)        # Poisson log likelihood (up to an additive constant)
  logprior = dnorm(lambda, 3, 1, log=TRUE)   # normal(3, 1) prior on lambda
  loglike + logprior
}

2.  I plot the normalized version of the posterior below.  I first write a short function post that computes the posterior, use the integrate function to numerically integrate the density from 0 to 10, and then use the curve function to display the normalized posterior.

post = function(lambda) exp(lpost(lambda))
C = integrate(post, 0, 10)
curve(exp(lpost(x))/C$value, 0, 5)

3.  There is a useful function laplace in the LearnBayes package that conveniently finds the matching normal approximation to the posterior.  The function inputs are (1) the function defining the log posterior, and (2) a guess at the posterior mode.  Typically the log posterior might depend on data and prior parameters and that would be the last input to laplace (here we are not using that extra input).

library(LearnBayes)
fit=laplace(lpost, 1)
fit
$mode
[1] 1.7
$var
          [,1]
[1,] 0.2653809

The important output here is (1) the mode of the posterior and (2) the corresponding approximation to the posterior variance.  By looking at the output, we see that the posterior of \lambda is approximately N(1.7, \sqrt{0.265}).
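
For example, an approximate 90% interval estimate for \lambda follows directly from this normal approximation:

# approximate 90% credible interval for lambda
qnorm(c(0.05, 0.95), mean = fit$mode, sd = sqrt(fit$var[1, 1]))   # roughly (0.85, 2.55)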

4.  To check the accuracy of this approximation, we use the curve function to add a matching normal density.  (I have also added a legend by use of the legend function.)  Note that the normal approximation is pretty accurate in this particular case.

curve(dnorm(x, fit$mode, sqrt(fit$var)), add=TRUE, col="red")
legend("topright", legend=c("Exact", "Normal Approximation"),
       lty = 1, col=c("black", "red"))

 

Categories: Bayesian computation

Modeling field goal kicking – part III

September 19, 2011

Now that we have formed our prior beliefs, we’ll summarize the posterior by computing the density on a fine grid of points.  The functions mycontour and simcontour in the LearnBayes package are helpful here.

After loading LearnBayes, the logarithm of the logistic regression posterior is programmed in the function logisticpost.  There are two arguments to this posterior: the vector theta of regression coefficients and the data, which is a matrix of the form [x n s], where x is the vector of covariates, n is the vector of sample sizes, and s is the vector of successes.  When we use the conditional means prior (described in the previous post), the prior takes that same form, so we simply augment the data with the prior “data” and pass the combined matrix to logisticpost.

Here is the matrix prior that uses the parameters described in the previous post.

prior=rbind(c(30, 8.25 + 1.19, 8.25),
            c(50, 4.95 + 4.95, 4.95))

We paste the data and prior together in the matrix data.prior.

data.prior=rbind(d, prior)

After some trial and error, we find that the rectangle (-2, 12, -0.3, 0.05) brackets the posterior.  We draw a contour plot of the posterior using the mycontour function — the arguments are the function defining the log posterior, the vector of limits (xlo, xhi, ylo, yhi), and the data.prior matrix.

mycontour(logisticpost, c(-2, 12, -0.3, 0.05), data.prior,
    xlab=expression(beta[0]), ylab=expression(beta[1]))

Now that we have bracketed the posterior, we use the simcontour function to take a simulated sample from the posterior.

S=simcontour(logisticpost, c(-2, 12, -.3, .05), data.prior, 1000)

I place the points on the contour to demonstrate that this seems to be a reasonable sample.

points(S)

Let’s illustrate doing some inference.  Suppose I’m interested in p(40), the probability of success at 40 yards.  This is easy to do using the simulated sample.  I compute values of p(40) from the simulated draws of (\beta_0, \beta_1) and then construct a density estimate of these values.

p40 = exp(S$x + 40*S$y)/(1 + exp(S$x + 40*S$y))
plot(density(p40))

I’m pretty confident that the success rate from this distance is between 0.6 and 0.8.
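
That statement can be checked with a quantile summary of the simulated values of p(40), for example:

# 90% probability interval for p(40)
quantile(p40, c(0.05, 0.95))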

 

Categories: Regression