A lot has happened in, say, the last 10 years with respect to Bayesian software. This has been a somewhat controversial subject, and it is worth talking about some of the main issues.

1. First, if one were to design a Bayesian software package, what would it look like? One advantage of the Bayesian paradigm is its flexibility in defining models, doing inference, and checking and comparing models. So any software should allow for the input of “general” models including priors, a variety of methods for fitting the models, a variety of methods for doing inference (say, finding the marginal posterior of a function of the parameters of interest), and ways of checking the validity of the model.

2. Of course, the most popular Bayesian software program is BUGS, together with its derivatives such as WinBUGS and OpenBUGS. It allows for general model specifications by writing a “model script”, it has a general MCMC computing engine that works for many problems, and it allows for general inference and model checking.

3. OK, so should we all use BUGS for Bayesian computing? Actually, I purposely don’t use BUGS in my Bayesian class and instead use my package LearnBayes in the R system. Why? Well, although BUGS is pretty easy to use, it is something of a black box: one can use it without understanding the issues in MCMC computing and diagnostics. I want my students to understand the basic MCMC algorithms, like Gibbs sampling and Metropolis sampling, and get some experience implementing these algorithms so they understand the pitfalls. I would feel more comfortable teaching BUGS after the student has had some practice with MCMC, especially on examples where MCMC hasn’t converged or has mixing problems.
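To illustrate the kind of hands-on practice I have in mind, here is a minimal random-walk Metropolis sampler, sketched in Python rather than R for the sake of a self-contained example. The function name and the standard normal target are just illustrative assumptions, not anything from LearnBayes:

```python
import math
import random

def metropolis(log_post, start, scale, n_iter, seed=0):
    """Random-walk Metropolis: propose theta' = theta + scale * N(0, 1),
    accept with probability min(1, exp(log_post(theta') - log_post(theta)))."""
    random.seed(seed)
    theta = start
    draws, accepted = [], 0
    for _ in range(n_iter):
        prop = theta + scale * random.gauss(0, 1)
        # accept/reject step on the log scale
        if math.log(random.random()) < log_post(prop) - log_post(theta):
            theta, accepted = prop, accepted + 1
        draws.append(theta)
    return draws, accepted / n_iter

# Illustration: a standard normal target with a reasonable proposal scale.
draws, acc = metropolis(lambda t: -0.5 * t * t, start=0.0, scale=2.4, n_iter=5000)
```

Rerunning this with a much smaller scale, say 0.1, produces a high acceptance rate but a slowly mixing chain, which is exactly the kind of pitfall I want students to see firsthand before they hand the computation over to a black box.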

4. Another approach is to program MCMC algorithms for specific Bayesian models. This is the approach taken by the R package MCMCpack. For example, suppose I want to do a Bayesian linear regression using a normal prior on the regression vector and an inverse-gamma prior on the variance. Then there is a function in MCMCpack that will work fine: it implements the Gibbs sampling and gives you a matrix of simulated draws, along with the prior predictive density value that can be used in model comparison. But suppose I want to use a t prior instead of a normal prior for beta; then I’m stuck. These specific algorithms are useful if you want to fit “standard” models, but we lose the flexibility that is one of the advantages of the Bayesian approach.
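For concreteness, the Gibbs sampler for this conjugate regression model can be sketched in a few lines. This is only an illustration in Python under my own notation (prior mean b0, prior precision matrix B0, and inverse-gamma parameters c0 and d0), not MCMCpack’s actual code or interface:

```python
import numpy as np

def gibbs_regression(y, X, b0, B0, c0, d0, n_iter=2000, seed=1):
    """Gibbs sampler for y ~ N(X beta, sigma^2 I), with priors
    beta ~ N(b0, B0^{-1}) (B0 is a precision matrix) and
    sigma^2 ~ InverseGamma(c0, d0)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    beta, sig2 = np.zeros(p), 1.0
    betas, sig2s = [], []
    for _ in range(n_iter):
        # Full conditional for beta given sigma^2 is multivariate normal.
        V = np.linalg.inv(XtX / sig2 + B0)
        m = V @ (Xty / sig2 + B0 @ b0)
        beta = rng.multivariate_normal(m, V)
        # Full conditional for sigma^2 given beta is inverse gamma;
        # draw 1/sigma^2 from a gamma distribution.
        resid = y - X @ beta
        sig2 = 1.0 / rng.gamma(c0 + n / 2, 1.0 / (d0 + resid @ resid / 2))
        betas.append(beta)
        sig2s.append(sig2)
    return np.array(betas), np.array(sig2s)
```

With vague but proper prior settings (b0 = 0, B0 small, c0 and d0 small), the posterior means track the least-squares estimates, as one would expect. The point of the example is how tightly the sampler is tied to these particular priors: swapping in a t prior for beta changes the full conditionals, which is why a canned function for the conjugate case cannot accommodate it.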

5. Of course, as the programs become more flexible, it takes a more experienced Bayesian to actually run them. If we wish to introduce Bayes to the masses, maybe we need to provide a suite of canned programs.

It will be interesting to see how Bayesian software evolves. It is pretty clear that BUGS will be a major player in the future, perhaps with a new interface.