Bayesian Thinking

A “robustness” illustration of Bayes factors


Here is another illustration of Bayes factors.  We are worried about the possibility of outliers, and so we assume the observations are distributed from a t distribution with location μ, scale σ, and degrees of freedom ν.  We have some prior beliefs about the location and scale:  we assume μ and log σ are independent, with μ assigned a normal prior with mean 100 and standard deviation 10, and log σ assigned a normal prior with mean 0 and standard deviation 2.

We observe the following data:

y=c(114.1,113.1,103.5,89.4,94.7,110.6,103.2,123.0,112.3,110.9,76.8,
112.6,86.0,99.1,106.3,101.5,80.5,99.6,108.6,95.9)
We define a model that fixes the degrees of freedom parameter ν at a specific value.
We can use Bayes factors to decide on appropriate values of ν.
I define a function tlikepost that computes the log posterior of θ = (μ, log σ).  Here the argument stuff is a list with two components, the data vector y and the degrees of freedom value df.  The sampling density of a single observation y is given by f(y | μ, σ) = t_ν((y − μ)/σ) / σ, where t_ν denotes the standard t density with ν degrees of freedom.
tlikepost=function(theta,stuff)
{
# theta = (mu, log sigma); stuff is a list containing the data y and the degrees of freedom df
mu=theta[1]
sigma=exp(theta[2])
y=stuff$y
df=stuff$df
# log likelihood for the scaled t model (dt gives the standard t density)
loglike=sum(dt((y-mu)/sigma, df, log=TRUE)-log(sigma))
# independent normal priors on mu and log sigma
logprior=dnorm(mu,100,10,log=TRUE)+dnorm(log(sigma),0,2,log=TRUE)
loglike+logprior
}
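As a quick check that the function evaluates, we can compute the log posterior at a test point; the particular value θ = (100, log 10) below is just an arbitrary illustrative choice, not part of the original analysis.
tlikepost(c(100, log(10)), list(y=y, df=30))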
By using the function laplace (from the LearnBayes package) with the “int” component, we can approximate the log of the marginal density of y.   To illustrate, suppose we wish to compute log f(y) for a model that assumes that we have 30 degrees of freedom.
> laplace(tlikepost,c(0,0),list(y=y,df=30))$int
[1] -89.23436
So for the model with ν = 30, we have log f(y) = -89.23.  To compare this model with, say, a model with a different degrees of freedom value, we repeat this calculation using the new value of df, obtain its log marginal density, and then we can compare the two models by a Bayes factor (the exponential of the difference of the log marginal densities).
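Here is a minimal sketch of that comparison, assuming the log posterior function and data defined above; the particular degrees of freedom values in df.values are just illustrative choices, not values from the original post.
library(LearnBayes)
df.values=c(2,4,10,30)   # illustrative degrees of freedom values to compare
log.marg=sapply(df.values, function(nu)
  laplace(tlikepost, c(0,0), list(y=y, df=nu))$int)
cbind(df.values, log.marg)
# Bayes factor in support of the df=30 model over the df=4 model
exp(log.marg[df.values==30] - log.marg[df.values==4])
Each element of log.marg approximates log f(y) under the corresponding fixed degrees of freedom, so differences of its entries are log Bayes factors.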
Here are some things to try.
1.  With this data set, try comparing different values for the degrees of freedom parameter ν (a loop over several values is sketched after this list).  What you should find is that large values are supported, which indicates that a normal sampling model is appropriate.
2.  Now change the dataset by introducing an outlier and repeat the analysis, comparing different models (see the sketch following this list).  What you should find is that a t model with small degrees of freedom is supported with this new dataset.
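Continuing the sketch above, one way to carry out both exercises is shown below; the name y.out, the choice of which observation to change, and the outlier value 160 are all illustrative assumptions rather than anything from the original post.
df.values=c(2,4,10,30)            # same illustrative values as before
# exercise 1: log marginal densities for the original data
log.marg=sapply(df.values, function(nu)
  laplace(tlikepost, c(0,0), list(y=y, df=nu))$int)
# exercise 2: introduce an outlier and recompute
y.out=y
y.out[1]=160                      # assumed outlying value
log.marg.out=sapply(df.values, function(nu)
  laplace(tlikepost, c(0,0), list(y=y.out, df=nu))$int)
cbind(df.values, log.marg, log.marg.out)
Comparing the two columns of log marginal densities should show the support shifting toward small degrees of freedom once the outlier is present.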