
## Estimating home run rates

Since the World Series starts tomorrow, it seems appropriate to use baseball data to illustrate hierarchical modeling.

I collected home run data for all nonpitchers in the 2007 baseball season.  For each player we observe the pair $(HR_i, AB_i)$, the number of home runs and the number of at-bats (opportunities) for the $i$th player.

Who was the best home run hitter this season?  Major League Baseball’s criterion is simply the number of home runs hit.  Alex Rodriguez hit the most home runs (54) that season.

Was A-Rod the best home run hitter in 2007?  Since the number of at-bats varies among players, maybe it makes more sense to consider the player who had the highest home run rate RATE = HR/AB.

Who had the highest home run rate in the 2007 season?  Actually, it was Charlton Jimerson, who had 1 home run in 2 at-bats for a rate of RATE = 1/2 = 0.500.
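The raw-rate ranking can be sketched as follows.  Only the Rodriguez (54 HR) and Jimerson (1 HR in 2 AB) figures come from the text; Rodriguez's at-bat total and the other players are made-up illustrative values:

```python
# Rank hitters by raw home run rate HR/AB.
# Jimerson's line (1 HR, 2 AB) and Rodriguez's 54 HR are from the text;
# all other numbers are hypothetical, for illustration only.
data = {
    "Alex Rodriguez": (54, 583),   # AB total is an assumed value
    "Charlton Jimerson": (1, 2),
    "Player A": (25, 540),
    "Player B": (2, 30),
}

rates = {name: hr / ab for name, (hr, ab) in data.items()}
for name, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
    print(f"{name:20s} {rate:.3f}")
# Jimerson's 0.500 tops the list despite his having only 2 at-bats.
```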

Should Jimerson be given the home run hitting crown?  Of course not, since he had only two at-bats.  It makes more sense to restrict our attention to hitters who had more opportunities.  But how many opportunities (AB) is enough?

Below I’ve plotted the home run rate HR/AB against the square root of AB for all nonpitchers in the 2007 season.
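A sketch of that figure, using simulated $(HR, AB)$ pairs in place of the real 2007 data — each player's home runs are drawn binomially with a common true rate, so only sample size drives the spread:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

# Simulated stand-in for the 2007 nonpitcher data: at-bat totals from
# 2 to 650, home runs drawn with an assumed common rate of 0.03.
rng = np.random.default_rng(0)
ab = rng.integers(2, 650, size=400)
hr = rng.binomial(ab, 0.03)
rate = hr / ab

plt.scatter(np.sqrt(ab), rate, s=10, alpha=0.5)
plt.xlabel("sqrt(AB)")
plt.ylabel("HR / AB")
plt.title("Home run rate vs square root of at-bats")
plt.savefig("hr_rate_vs_sqrt_ab.png")
```

Even with an identical true rate for every player, the simulated rates fan out wildly at small AB — the same funnel shape the real figure shows.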

This figure dramatically illustrates the problem of interpreting a collection of rates when the sample sizes vary.  The home run rates are much more variable for small sample sizes.  If one wishes to simultaneously estimate the probabilities of a home run for all players, it seems desirable to adjust the small sample rates towards the average rate.  We’ll see that we can accomplish this by means of an exchangeable prior model.
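A simple way to see the effect of such an adjustment (a beta-binomial-style shrinkage estimate, not the full exchangeable prior model) is to replace HR/AB with (HR + K·pbar)/(AB + K), where pbar is the overall rate and K acts as a count of prior "pseudo at-bats."  Here K = 100 and the data are illustrative assumptions:

```python
# Shrink each raw rate toward the overall rate; small-AB players move most.
# K and the data below are illustrative assumptions, except Jimerson's
# 1 HR in 2 AB, which is quoted in the text.
K = 100
data = {
    "Charlton Jimerson": (1, 2),
    "Alex Rodriguez": (54, 583),   # AB total is an assumed value
    "Player A": (12, 400),
}

total_hr = sum(hr for hr, _ in data.values())
total_ab = sum(ab for _, ab in data.values())
pbar = total_hr / total_ab  # overall home run rate

for name, (hr, ab) in data.items():
    raw = hr / ab
    shrunk = (hr + K * pbar) / (ab + K)
    print(f"{name:20s} raw={raw:.3f} shrunk={shrunk:.3f}")
```

Jimerson's 0.500 collapses nearly all the way to the overall rate, while Rodriguez's estimate barely moves — exactly the small-sample adjustment the figure suggests, and the behavior the exchangeable prior model will produce automatically.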