## Grading homework …

The most recent homework in my Bayesian class covered several one-parameter problems where R was used for the posterior and predictive calculations.

There was much variation in what was turned in. One student’s homework consisted of 37 pages and every simulated parameter value was displayed. Another student’s turn-in was 4 pages where all of the R work was displayed (in a 2 point font) on a single page.

Here are some guidelines for what I’d like a student’s homework to look like.

1. Homework consisting completely of R work (input and output) is clearly inappropriate.

2. The answers to the exercise questions should be written in paragraph form with complete sentences. Imagine that the student had to report to his or her boss about what was learned. He or she would write a report that describes in words what was learned.

3. Obviously, I’d like to see that the student is using R functions in a reasonable way. But I’m primarily interested in a copy of what the student entered and the relevant output. For example, suppose the student is summarizing a beta posterior using simulation. I don’t want to see the 1000 simulated draws, but the student could convince me that he or she is getting reasonable results by showing several summaries, such as a posterior mean and posterior standard deviation.

4. If I assign a homework with 8 exercises, then I think that 3 pages is too brief (not enough said), but over 20 pages indicates too much irrelevant R output is included. The student needs a reasonable balance. Maybe 10 pages would be an optimal length for a turn-in — maybe longer if the student wishes to include some graphs.
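To illustrate point 3, here is a minimal sketch of the kind of R summary I have in mind. The Beta(12, 8) posterior is made up purely for illustration — the point is that a few summary numbers, not the raw draws, belong in the turn-in:

```r
# Made-up example: summarizing a Beta(12, 8) posterior by simulation.
set.seed(123)
draws <- rbeta(1000, 12, 8)       # 1000 simulated draws from the posterior

mean(draws)                       # report the posterior mean ...
sd(draws)                         # ... the posterior standard deviation ...
quantile(draws, c(0.05, 0.95))    # ... and, say, a 90% posterior interval
```

Three or four lines of output like this are enough to convince me the simulation is working — the 1000 draws themselves are not.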

By the way, it was interesting how one particular question was answered.

In Chapter 2, exercise 4, I asked the student to contrast predictions using two different priors, one discrete and the other a beta prior. Most of the students were successful in computing the predictive probabilities using the two priors. But they made the comparison in different ways.

1. DISPLAY? Some students just displayed the two probability distributions and said they were similar. Let’s say that this approach wasn’t that persuasive.

2. GRAPH? Some students graphed the two sets of predictive probabilities on a single graph. Assuming the graph is readable, that is a much better idea. One can quickly see if the distributions are similar by looking at the graph.

3. SUMMARIZE? Another approach is to compare the two distributions by summarizing each distribution in some way. For example, one could compute the mean and standard deviation of each distribution, or one could compute a 90% predictive interval for each distribution.

What would I prefer? It is pretty obvious that simply displaying the two probability distributions is not enough. I think graphing the two distributions and then summarizing them is a good strategy. Otherwise, you really aren’t answering the question of whether the two distributions are similar.
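The GRAPH and SUMMARIZE approaches might look something like the following R sketch. The two predictive probability vectors here are placeholders (binomial probabilities with made-up parameters) standing in for the ones actually computed from the discrete and beta priors:

```r
# Placeholder predictive distributions over y = 0, ..., 10 future successes;
# in the actual exercise these would come from the two priors.
ys    <- 0:10
pred1 <- dbinom(ys, 10, 0.30)   # stand-in for the discrete-prior predictive
pred2 <- dbinom(ys, 10, 0.35)   # stand-in for the beta-prior predictive

# GRAPH: show both sets of predictive probabilities on a single plot,
# offsetting the second set slightly so the vertical lines don't overlap.
plot(ys, pred1, type = "h", lwd = 2,
     xlab = "y", ylab = "Predictive probability")
points(ys + 0.15, pred2, type = "h", lwd = 2, col = "red")
legend("topright", c("Discrete prior", "Beta prior"),
       col = c("black", "red"), lwd = 2)

# SUMMARIZE: compare the means and standard deviations of the two
# predictive distributions.
pred_mean <- function(p) sum(ys * p)
pred_sd   <- function(p) sqrt(sum(ys^2 * p) - pred_mean(p)^2)
c(pred_mean(pred1), pred_mean(pred2))
c(pred_sd(pred1), pred_sd(pred2))
```

A readable graph plus a short table of summaries directly addresses the question of whether the two distributions are similar.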