
Archive for November, 2007

Bayesian regression

November 15, 2007

To introduce Bayesian regression modeling, we consider a dataset from De Veaux, Velleman, and Bock containing physical measurements for a sample of 250 males. We are interested in predicting a person’s body fat percentage from his height, waist, and chest measurements.

The file Body_fat.txt contains the data that we read in R.

data=read.table("Body_fat.txt",sep="\t",header=TRUE)
names(data)
[1] "Pct.BF" "Height" "Waist" "Chest"
attach(data)

Suppose we wish to fit the regression model

Pct.BF ~ Height + Waist + Chest

The standard least-squares fit is done using the lm command in R.

fit=lm(Pct.BF~Height+Waist+Chest)

Here is a portion of the summary of the fit.

summary(fit)

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 2.06539 7.80232 0.265 0.79145
Height -0.56083 0.10940 -5.126 5.98e-07 ***
Waist 2.19976 0.16755 13.129 < 2e-16 ***
Chest -0.23376 0.08324 -2.808 0.00538 **

Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 4.399 on 246 degrees of freedom
Multiple R-Squared: 0.7221, Adjusted R-squared: 0.7187
F-statistic: 213.1 on 3 and 246 DF, p-value: < 2.2e-16

Now let’s consider a Bayesian fit of this model. Suppose we place the usual noninformative prior on the regression vector beta and the error variance sigma^2.

g(beta, sigma^2) = 1/sigma^2.

Then there is a simple direct method (outlined in BCUR) of simulating from the posterior distribution of (beta, sigma^2).

1. We first create the design matrix X:

X=cbind(1,Height,Waist,Chest)

The response is contained in the vector Pct.BF.

2. To simulate 5000 draws from the posterior of (beta, sigma), we use the function blinreg in the LearnBayes package.

fit=blinreg(Pct.BF, X, 5000)

The output fit is a list with two components: beta is a matrix of simulated draws of beta, where each column contains the sample for one component of beta, and sigma is a vector of draws from the marginal posterior of sigma.
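
For readers curious about what blinreg does, here is a minimal hand-rolled sketch of the direct simulation method under the noninformative prior. It follows the standard conjugate results (sigma^2 given y is distributed as S/chi-square(n-k), and beta given sigma^2 and y is multivariate normal); the actual LearnBayes implementation may differ in its details.

sim.posterior=function(y, X, m)
{
lsfit=lm(y ~ 0 + X)           # least-squares fit (X already contains the column of 1's)
n=length(y); k=ncol(X)
S=sum(lsfit$residuals^2)      # residual sum of squares
sigma2=S/rchisq(m, n-k)       # draws of sigma^2 from S/chi-square(n-k)
V=chol2inv(chol(t(X)%*%X))    # (X'X)^(-1)
beta=t(sapply(sigma2, function(s2)
MASS::mvrnorm(1, coef(lsfit), s2*V)))  # draws of beta given sigma^2
list(beta=beta, sigma=sqrt(sigma2))
}

A call such as sim.posterior(Pct.BF, X, 5000) should give draws comparable to those from blinreg.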

We can summarize the simulated draws of beta by computing posterior means and standard deviations.

apply(fit$beta,2,mean)
X XHeight XWaist XChest
1.8748122 -0.5579671 2.2031474 -0.2350661

apply(fit$beta,2,sd)
X XHeight XWaist XChest
7.84069390 0.11050071 0.16839919 0.08286758
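
One convenient byproduct of simulation is that interval estimates are just quantiles of the simulated draws. For example, 95% posterior interval estimates for the four regression parameters are given by

apply(fit$beta,2,quantile,c(.025,.975))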

Here is a graph of density estimates of simulated draws from the four regression parameters.

par(mfrow=c(2,2))
for (j in 1:4) plot(density(fit$beta[,j]),main=paste("BETA",j),
lwd=3, col="red", xlab="PAR")

What’s so special about Bayesian regression if we are essentially replicating the frequentist regression fit?

We’ll talk about the advantages of Bayesian regression in the next blog posting.

Categories: Regression

Looking for True Streakiness

November 11, 2007

There is a lot of interest in streaky behavior in sports. One observes players or teams that appear streaky with the implicit conclusion that this says something about the character of the athlete.

Eric Byrnes had 412 opportunities to hit during the 2005 baseball season. Here is his sequence of hits (successes) and outs (failures) during the season.

[1] 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 1 0 0 1 0 1 0 0 0 1 0
[38] 0 1 1 1 0 0 0 1 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 1 1 0 0 0 0 0 1 0
[75] 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0
[112] 0 0 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 1 0 0 0 0 0 0 1 0 1 0 0 0 0 0
[149] 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 1 1 0 0 0 0 1 0
[186] 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 1 0 1 0 0 1 0 1 0 0 1 0 0 0 0
[223] 0 0 0 0 0 0 0 0 0 1 0 1 1 0 1 0 0 1 1 0 0 0 1 1 0 1 1 0 1 1 0 1 1 0 1 0 0
[260] 0 0 0 0 1 0 1 0 0 0 0 1 1 1 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0
[297] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0 1 0
[334] 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0
[371] 0 0 1 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 1 1 0 0 0 0 1 0 1 0 1 1 1 0 0
[408] 0 0 0 0 0
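
His overall success rate is just the mean of this 0-1 sequence (here stored in byrne$x, the same object used in the plotting code below):

mean(byrne$x)    # Byrnes' overall 2005 batting average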

One way of seeing the streaky behavior in this sequence is by a moving average graph where one plots the success rate (batting average) for windows of 40 at-bats. I wrote a short program mavg.R to compute the moving averages. The following R code plots the moving averages and plots a lowess smooth on top to help see the pattern.

MAVG=mavg(byrne$x,k=40)
plot(MAVG,type="l",lwd=2,col="red",xlab="GAME",ylab="AVG",
main="ERIC BYRNES")
lines(lowess(MAVG),lwd=3,col="blue")   # the lowess smooth on top
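
The mavg function itself is not listed in the post; here is a minimal sketch of a moving-average function consistent with how it is called above (the original mavg.R may differ):

mavg=function(x, k=40)
{
n=length(x)
# success rate over each window of k consecutive at-bats
sapply(1:(n-k+1), function(i) mean(x[i:(i+k-1)]))
}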

We see some interesting patterns. It seems that Byrnes had a cold spell in the first part of the season, followed by a hot period, and then a very cold period.

The interesting question is: is this streaky pattern “real” or is it just a byproduct of Bernoulli chance variation?

We answer this question by means of a Bayes factor. Suppose we partition Byrnes’ 412 at-bats into groups of 20 at-bats. We observe counts y1, …, yn, where yi is the number of hits in the ith group. Suppose yi is binomial(20, pi) where pi is the probability of a hit in the ith period.

We define two hypotheses:

H (not streaky) the probabilities across periods are equal, p1 = … = pn = p

A (streaky) the probabilities across periods vary according to a beta distribution with mean eta and precision K. This model is indexed by the parameter K.

The functions bfexch and laplace in the LearnBayes package can be used to compute a Bayes factor in support of A over H. Here is how we do it.

1. The raw data is in the matrix BRYNE — the first column contains the data (0’s and 1’s) and the second column contains the attempts (column of 1’s). We regroup the data into periods of 20 at-bats using the regroup function.

regroup(BRYNE, 20)

2. The following R function laplace.int computes the log (base 10) of the Bayes factor in support of streakiness for a fixed value of log(K).

laplace.int=function(logK,data)
log10(exp(laplace(bfexch,0,list(data=data,K=exp(logK)))$int))

To illustrate, suppose we want to compute the log10 Bayes factor for our data for logK = 3:

> laplace.int(3,regroup(BRYNE,20))
[,1]
[1,] 1.386111

This indicates support for streakiness — the log10 Bayes factor is about 1.39, which means that A is roughly 24 times more likely than H.

3. Generally we’d like to compute the log10 Bayes factor for a sequence of values of log K. I first write a simple function that does this:

s.laplace.int=function(logK,data)
list(x=logK,y=sapply(logK,laplace.int,data))

and then I use this function to compute the Bayes factor for values of log K from 2 to 6 in steps of 0.2. I use the plot command to graph these values. I draw a line at the value log10 BF = 0 — this corresponds to the case where neither model is supported.

plot(s.laplace.int(seq(2,6,by=.2),regroup(BRYNE,20)),type="l",
xlab="LOG K", ylab="LOG 10 BAYES FACTOR", lwd=3, col="red", ylim=c(-3,2))
lines(c(1,7),c(0,0),lwd=3,col="blue")
title(main=”ERIC BYRNES”)

What we see is that, for a range of values of K, the Bayes factor favors model A by a factor of 10 or more.

Actually, we looked at Eric Byrnes because he exhibited unusually streaky behavior during the 2005 season. What if we look at other players? Here are the Bayes factor graphs for the hitting data of two other players, Chase Utley and Damian Miller (grouping the data in the same way).


Here, for both players, note that the log10 Bayes factors are entirely negative over the range of K values, so there is support for the non-streaky model H. One distinctive feature of Bayes factors is that they can provide support for either the null or the alternative hypothesis.

Test of Independence in a 2 x 2 Table

November 8, 2007

Consider data collected in a study described in Dorn (1954) to assess the relationship between smoking and lung cancer. In this study, a sample of 86 lung-cancer patients and a sample of 86 controls were questioned about their smoking habits. The two groups were chosen to represent random samples from a sub-population of lung-cancer patients and an otherwise similar population of cancer-free individuals.

Here is the 2 x 2 table of responses:

            Cancer  Control
Smokers         83       72
Non-smokers      3       14

Let pL and pC denote the proportions of lung-cancer patients and controls who smoke. We wish to test H: pL = pC against the alternative A: pL ≠ pC.

To construct a Bayesian test, we define a suitable model for H and for A, and then compute the Bayes factor in support of the alternative A.

1. To describe these models, first we transform the proportions to the logits

LogitL = log(pL/(1-pL)), LogitC = log(pC/(1-pC))

2. We then define two parameters theta1, theta2, that are equal to the difference and sum of the logits.

theta1 = LogitL - LogitC, theta2 = LogitL + LogitC.

theta1 is the log odds ratio, a popular measure of association in a 2 x 2 table. Under the hypothesis of independence H, theta1 = 0.
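
As a quick numerical check (an addition, not part of the original post), the observed log odds ratio for this table is

logitL=log(83/3)    # observed logit for the lung-cancer patients
logitC=log(72/14)   # observed logit for the controls
logitL-logitC       # observed log odds ratio, about 1.68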

3. Consider the following prior on theta1 and theta2. We assume they are independent where

theta1 is N(0, tau1), theta2 is N(0, tau2).

4. Under H (independence), we assume theta1 = 0, which corresponds to setting tau1 = 0. theta2 is a nuisance parameter that we arbitrarily assign a N(0, 1) prior. (The Bayes factor will be insensitive to this choice.)

5. Under A (not independence), we assume theta1 is N(0, tau1), where tau1 reflects our beliefs about the location of theta1 when the proportions are different. We also assume again that theta2 is N(0, 1). (This means that our beliefs about theta2 are insensitive to our beliefs about theta1.)

To compute the marginal densities, we write a function that computes the logarithm of the posterior when (theta1, theta2) have the above prior.

logctable.test=function (theta, datapar)
{
theta1 = theta[1] # log odds ratio
theta2 = theta[2] # log odds product

s1 = datapar$data[1,1]
f1 = datapar$data[1,2]
s2 = datapar$data[2,1]
f2 = datapar$data[2,2]

logitp1 = (theta1 + theta2)/2
logitp2 = (theta2 - theta1)/2
loglike = s1 * logitp1 - (s1 + f1) * log(1 + exp(logitp1)) +
s2 * logitp2 - (s2 + f2) * log(1 + exp(logitp2))
logprior = dnorm(theta1,mean=0,sd=datapar$tau1,log=TRUE) +
dnorm(theta2,mean=0,sd=datapar$tau2,log=TRUE)

return(loglike + logprior)
}

We enter the data as a 2 x 2 matrix:

data=matrix(c(83,3,72,14),c(2,2))
data
[,1] [,2]
[1,] 83 72
[2,] 3 14

The argument datapar in the function is a list consisting of data, the 2 x 2 data table, and the values of tau1 and tau2.

Suppose we assume theta1 is N(0, 0.8) under the alternative hypothesis. This prior is shown in the figure below.
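
Before computing marginal densities, it can help to visualize the posterior under this alternative prior. Here is a quick hand-rolled contour plot over a grid (an addition to the original post; LearnBayes also provides a contour-plotting utility):

datapar=list(data=data,tau1=0.8,tau2=1)
theta1.g=seq(-1,4,length=50)
theta2.g=seq(-2,8,length=50)
logpost=outer(theta1.g,theta2.g,
function(t1,t2) mapply(function(a,b) logctable.test(c(a,b),datapar),t1,t2))
contour(theta1.g,theta2.g,exp(logpost-max(logpost)),
xlab="THETA1 (LOG ODDS RATIO)",ylab="THETA2")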

By using the laplace function, we compute the log marginal density under both models. (For H, we are approximating the point mass of theta1 at 0 by a normal density with a tiny standard deviation tau1.)

l.marg0=laplace(logctable.test,c(0,0),list(data=data,tau1=.0001,tau2=1))$int
l.marg1=laplace(logctable.test,c(0,0),list(data=data,tau1=0.8,tau2=1))$int

We compute the Bayes factor in support of the hypothesis A.

BF.10=exp(l.marg1-l.marg0)
BF.10
[1] 7.001088
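
If we place equal prior probabilities on H and A, this Bayes factor converts into a posterior probability of A equal to BF/(1 + BF):

BF.10/(1+BF.10)    # posterior probability of A, about 0.875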

The conclusion is that the alternative hypothesis A is seven times more plausible than the null hypothesis H.

Simple Illustration of Bayes Factors

November 7, 2007

Suppose we collect the number of accidents in a year for 30 Belgian drivers. We assume that y1, …, y30 are independent Poisson(lambda), where lambda is the average number of accidents for all Belgian drivers.

Consider the following four priors for lambda:

PRIOR 1: lambda is gamma with shape 3.8 and rate 8.1. This prior reflects the belief that the quartiles of lambda are 0.29 and 0.60.

PRIOR 2: lambda is gamma with shape 3.8 and rate 4. The mean of this prior is 0.95 so this prior reflects one’s belief that lambda is close to 1.

PRIOR 3: lambda is gamma with shape 0.38 and rate 0.81. This prior has the same mean as PRIOR 1, but it is much more diffuse, reflecting weaker information about lambda.

PRIOR 4: log lambda is normal with mean -0.87 and standard deviation 0.60. On the surface, this looks different from the previous priors, but this prior also matches the belief that the quartiles of lambda are 0.29 and 0.60.
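
These quartile statements are easy to check in R (an added check, not in the original post); both commands should roughly reproduce the values 0.29 and 0.60:

qgamma(c(.25,.75),shape=3.8,rate=8.1)       # quartiles of PRIOR 1
qlnorm(c(.25,.75),meanlog=-0.87,sdlog=0.60) # quartiles of PRIOR 4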

Suppose we observe some data — for the 30 drivers, 22 had no accidents, 7 had exactly one accident, and 1 had two accidents. The total number of accidents is 7(1) + 1(2) = 9, so the likelihood is given by

LIKE = exp(-30 lambda) lambda^9

In the graphs below, we display the likelihood in blue and show the four priors in red. Here’s the R code to produce one of the graphs. We simulate draws from the likelihood and the prior and display density estimates.

like=rgamma(10000,shape=10,rate=30)   # the likelihood is proportional to a gamma(10,30) density
p1=rgamma(10000,shape=3.8,rate=8.1)
plot(density(p1),xlim=c(0,3),ylim=c(0,4),
main="PRIOR 1",xlab="LAMBDA",lwd=3,col="red",col.main="red")
lines(density(like),lwd=3,col="blue")
text(1.2,3,"LIKELIHOOD",col="blue")

Note that Priors 1 and 4 are pretty consistent with the likelihood. There is some conflict between Prior 2 and the likelihood, and Prior 3 is pretty flat relative to the likelihood.

We can compare the four models by use of Bayes factors. We first need a function that computes the log posterior for each prior. There already is a function logpoissgamma in the LearnBayes package that computes the posterior of log lambda with Poisson sampling and a gamma prior; this can be used for priors 1, 2, and 3. The function logpoissnormal can be used for Poisson sampling and a normal prior (prior 4). Then we use the function laplace to approximate the value of the log predictive density.

For example, here’s the code to compute the log marginal density for prior 1.

d=c(rep(0,22),rep(1,7),2)   # the accident counts for the 30 drivers
datapar=list(data=d,par=c(3.8,8.1))
laplace(logpoissgamma,.5,datapar)$int
0.4952788

So log m(y) for prior 1 is about 0.5.

We do this for each prior and get the following values:

model      log m(y)
-------------------
PRIOR 1      0.495
PRIOR 2     -0.729
PRIOR 3     -0.454
PRIOR 4      0.558

We can use this output to compute Bayes factors. For example, the Bayes factor in support of PRIOR 1 over PRIOR 2 is

BF_12 = exp(0.495 - (-0.729)) = 3.4

This means that the model with PRIOR 1 is about 3.4 times as likely as the model with PRIOR 2. This is not surprising, given the conflict between PRIOR 2 and the likelihood seen in the graph.

Conflict between Bayesian and Frequentist Measures of Evidence

November 4, 2007

Here’s a simple illustration of the conflict between a p-value and a Bayesian measure of evidence.

Suppose you take a sample y1,…, yn from a normal population with mean mu and known standard deviation sigma. You wish to test

H: mu = mu0 versus A: mu not equal to mu0

The usual test is based on the statistic Z = sqrt(n)*(ybar - mu0)/sigma. One computes the p-value

p-value = 2 x P(Z >= z0),

where z0 is the observed value of Z, and rejects H if the p-value is small. Suppose mu0 = 0, sigma = 1, and one takes a sample of size n = 4 and observes ybar = 0.98. Then one computes

Z = sqrt(4)*0.98 = 1.96

and the p-value is

p-value = 2 * P(Z >= 1.96) = 0.05.
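
We can reproduce this calculation in R:

z=sqrt(4)*0.98
2*(1-pnorm(z))    # about 0.05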

Consider the following Bayes test of H against A. A Bayesian model is a specification of the sampling density and the prior density. One model, M0, says that the mean mu = mu0. To complete the second model, M1, we place a normal prior on mu with mean mu0 and standard deviation tau. The Bayes factor in support of M0 over M1 is given by the ratio of predictive densities

BF = m(y | M0) / m(y | M1)

and the posterior probability of M0 is given by

P(M0 | y) = p0 BF/(p0 BF + p1),

where p0 is the prior probability of M0 and p1 = 1 - p0 is the prior probability of M1.

The function mnormt.twosided in the LearnBayes package does this calculation. To use this function, we specify (1) the value mu0 to be tested, (2) the prior probability of H, (3) the value of tau (the spread of the prior under A), and (4) the data vector (ybar, n, sigma).

Here we specify the inputs:

mu0=0; prob=.5; tau=0.5
ybar = 0.98; n = 4; sigma=1
data=c(ybar,n,sigma)

Then we can use mnormt.twosided — the outputs are the Bayes factor and the posterior probability of H:

mnormt.twosided(mu0,prob,tau,data)
$bf
[1] 0.5412758

$post
[1] 0.3511868
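
We can check this Bayes factor by hand: under M0 the predictive density of ybar is normal with mean mu0 and standard deviation sigma/sqrt(n), and under M1 it is normal with mean mu0 and standard deviation sqrt(sigma^2/n + tau^2). So the ratio is

dnorm(ybar,mu0,sigma/sqrt(n))/dnorm(ybar,mu0,sqrt(sigma^2/n+tau^2))
# about 0.5413, in agreement with the $bf output above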

We see that the posterior probability of H is 0.35, which is substantially higher than the p-value of 0.05.

In this calculation, we assumed that tau = 0.5 — this reflects our belief about the spread of mu about mu0 under the alternative hypothesis. What if we chose a different value for tau?

We investigate the sensitivity of this posterior probability calculation with respect to tau.

We write a function that computes the posterior probability for a given value of tau.

post.prob=function(tau)
{
data=c(.98,4,1); mu0=0; prob=.5
mnormt.twosided(mu0,prob,tau,data)$post
}

Then we use the curve function to plot this function for values of tau between 0.01 and 4.

curve(post.prob,from=.01,to=4,xlab="TAU",ylab="PROB(H0)",lwd=3,col="red")

In the figure below, it looks like the probability of H exceeds 0.32 for all tau.