Concluding our baseball example, recall that we observed the home run rates for 20 players in the month of April and we were interested in predicting their home run rates for the next month. Since we have collected data for both April and May, we can check the accuracy of three prediction methods.
1. The naive method simply uses the April rates to predict the May rates. Recall that the data matrix is d, where the first column contains the at-bats and the second column the home run counts.
2. A second method, which we called the “pooled” method, predicts each player’s home run rate by the pooled home run rate for all 20 players in April.
3. The Bayesian method predicts a player’s May rate by the posterior mean of his true rate lambda_j.
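The three prediction rules above can be sketched as follows. The toy matrix d below is illustrative; in the text, d holds the April at-bats and home run counts for all 20 players, and the posterior means of the lambda_j were computed earlier.

```r
# Illustrative April data (at-bats, home runs) for three players; in the
# text d has 20 rows, one per player.
d <- cbind(ab = c(90, 84, 95), hr = c(5, 2, 7))

naive.pred  <- d[, "hr"] / d[, "ab"]                         # each player's own April rate
pooled.pred <- rep(sum(d[, "hr"]) / sum(d[, "ab"]), nrow(d)) # one common pooled April rate
# bayes.pred <- post.means  # posterior means of lambda_j, computed earlier
```

The pooled rule shrinks every player to a single common rate; the Bayesian rule compromises between each player’s own rate and the pooled rate.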
One measure of accuracy is the sum of absolute prediction errors.
The May home run rates are stored in the vector may.rates. We first compute the individual prediction errors for all three methods.
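The errors matrix can be assembled, for example, as below; the names april.rates, pooled.rate, and post.means are illustrative stand-ins for the quantities computed earlier, and the toy values merely make the sketch runnable.

```r
# Toy inputs; in the text these are the 20 April rates, the pooled April
# rate, the posterior means of lambda_j, and the observed May rates.
april.rates <- c(0.056, 0.024, 0.074)
pooled.rate <- 0.052
post.means  <- c(0.050, 0.038, 0.061)
may.rates   <- c(0.043, 0.030, 0.068)

errors <- cbind(error1 = abs(april.rates - may.rates),  # naive
                error2 = abs(pooled.rate - may.rates),  # pooled
                error3 = abs(post.means  - may.rates))  # Bayesian
```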
We use the apply function to sum the absolute errors for each method.
apply(errors,2,sum) # sum of absolute errors for all methods
error1 error2 error3
0.3393553 0.3111552 0.2622523
By this criterion, Bayes beats Pooled beats Naive.
Finally, suppose we are interested in predicting the number of home runs hit in May by the first player, Chase Utley. Suppose we know that he will have 115 at-bats in May.
1. We have already simulated 10,000 draws from Utley’s true home run rate lambda1 — these are stored in the first column of the matrix lam.
2. Then 10,000 draws from Utley’s posterior predictive distribution are obtained using the rpois function.
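This step can be sketched as follows; the gamma draws below are only a stand-in for the simulated column lam[,1] of true-rate draws from the text.

```r
set.seed(1)
# Stand-in posterior draws of Utley's true per-at-bat rate lambda1;
# in the text these are the first column of the simulated matrix lam.
lam1 <- rgamma(10000, shape = 6, rate = 115)

# Posterior predictive draws of his May home run count in 115 at-bats:
# given a rate lambda1, the count is Poisson with mean 115 * lambda1.
ys1 <- rpois(10000, 115 * lam1)
```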
We graph this predictive distribution by the command
plot(table(ys1),xlab="NUMBER OF MAY HOME RUNS")
The most likely number of May home runs is 3, but a 90% prediction interval is [1, 9].
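The modal count and an equal-tailed 90% interval can be read directly off the simulated draws; a sketch, again using stand-in draws in place of ys1:

```r
set.seed(1)
ys1 <- rpois(10000, 3.5)  # stand-in for the posterior predictive draws

interval <- quantile(ys1, c(0.05, 0.95))         # equal-tailed 90% prediction interval
mode.hr  <- as.integer(names(which.max(table(ys1))))  # most likely home run count
```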