...

Interpreting Confidence Intervals

Get an overview of how to interpret the confidence interval.

Now that we’ve shown how to construct confidence intervals using a sample drawn from a population, let’s focus on how to interpret their effectiveness. The effectiveness of a confidence interval is judged by whether or not it contains the true value of the population parameter. Going back to our fishing analogy in the Understanding Confidence Intervals lesson, this is like asking, “Did our net capture the fish?”

So, for example, does our percentile-based confidence interval of (1991, 1999) capture the true mean year μ of all US pennies? We’ll never know, because we don’t know what the true value of μ is. After all, we’re sampling to estimate it!
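As a reminder of where that interval came from, here is a sketch of the percentile method using the infer workflow, assuming the pennies_sample data frame from the earlier construction lesson: bootstrap the sample mean many times, then take the middle 95% of the bootstrap distribution.

```r
library(infer)
library(moderndive)  # assumed source of the pennies_sample data frame

set.seed(76)  # for a reproducible bootstrap

# Bootstrap distribution of the sample mean year
bootstrap_means <- pennies_sample %>%
  specify(response = year) %>%
  generate(reps = 1000, type = "bootstrap") %>%
  calculate(stat = "mean")

# Percentile-based 95% confidence interval: the middle 95% of the
# bootstrap distribution (2.5th to 97.5th percentile)
bootstrap_means %>%
  get_confidence_interval(level = 0.95, type = "percentile")
```

The exact endpoints depend on the random seed, but they will be close to the (1991, 1999) interval quoted above.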

In order to interpret a confidence interval’s effectiveness, we need to know what the value of the population parameter is. That way we can say whether or not a confidence interval has captured this value.

Let’s revisit our sampling bowl. What proportion of the bowl’s 2,400 balls are red? Let’s compute this:

# The bowl data frame comes from the moderndive package
library(dplyr)
library(moderndive)

bowl %>%
  summarize(p_red = mean(color == "red"))

In this case, we know what the value of the population parameter is—we know that the population proportion p is 0.375. In other words, we know that 37.5% of the bowl’s balls are red.

As we stated, the sampling bowl exercise doesn’t really reflect how sampling is done in real life; rather, it was an idealized activity. In real life, we won’t know the true value of the population parameter, hence the need for estimation.

Let’s now construct confidence intervals for p using our 33 groups of friends’ samples from the bowl. We’ll then see if the confidence intervals captured the true value of p, which we know to be 37.5%. That’s to say: Did the net capture the fish?
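We don’t have the 33 friends’ actual samples in front of us here, so as an illustrative stand-in we can draw 33 virtual samples of size 50 from the bowl with rep_sample_n() and build a 95% standard-error-based interval for each one. This is a sketch of the coverage check, not the exact computation from the friends’ data:

```r
library(dplyr)
library(moderndive)  # provides the bowl data frame and rep_sample_n()

set.seed(76)  # for reproducible virtual samples

# Draw 33 virtual samples of size 50, mimicking the 33 groups of friends
virtual_samples <- bowl %>%
  rep_sample_n(size = 50, reps = 33)

# For each sample: the sample proportion p_hat and a 95%
# standard-error-based confidence interval p_hat ± 1.96 * SE
cis <- virtual_samples %>%
  group_by(replicate) %>%
  summarize(p_hat = mean(color == "red")) %>%
  mutate(
    se       = sqrt(p_hat * (1 - p_hat) / 50),
    lower    = p_hat - 1.96 * se,
    upper    = p_hat + 1.96 * se,
    captured = lower <= 0.375 & 0.375 <= upper
  )

# What fraction of the 33 nets captured the fish (p = 0.375)?
cis %>%
  summarize(prop_captured = mean(captured))
```

With a 95% confidence level, we’d expect roughly 95% of the 33 intervals to capture p = 0.375, though with only 33 intervals the observed fraction can easily deviate from that.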

Did the net capture the fish?

Recall that we had 33 groups of friends each take samples of size 50 from the bowl and then compute the sample proportion of red balls ...