Attacking the Final Problem
In this lesson, we will solve the problem of finding a good helper distribution `q`.
In the previous lesson, we finally wrote a tiny handful of lines of code to implement importance sampling correctly; if we have a distribution p that we’re sampling from, and a function f that we’re running those samples through, we can compute the expected value of f even if there are “black swan” regions in p.
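To recap the idea in code, here is a minimal, self-contained sketch in Python. It is not the lesson’s own implementation, and the names (normal_pdf, estimate_expectation, p_pdf, q_pdf, q_sample) are hypothetical stand-ins:

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def estimate_expectation(f, p_pdf, q_pdf, q_sample, n=100_000):
    # E_p[f(X)] = E_q[f(X) * p(X) / q(X)], valid when q(x) > 0 wherever p(x) > 0.
    total = 0.0
    for _ in range(n):
        x = q_sample()
        total += f(x) * p_pdf(x) / q_pdf(x)
    return total / n

# Example: E[X^2] under a standard normal is 1.0; q is a wider normal
# with the same support, so the importance weights are always well defined.
p_pdf = lambda x: normal_pdf(x, 0.0, 1.0)
q_pdf = lambda x: normal_pdf(x, 0.0, 2.0)
q_sample = lambda: random.gauss(0.0, 2.0)
print(estimate_expectation(lambda x: x * x, p_pdf, q_pdf, q_sample))  # roughly 1.0
```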
Helper Distribution q
All we need is a helper distribution q that has the same support as p, but no black swans. (The “same support” requirement matters because the importance weight p(x)/q(x) must be well defined; q has to assign nonzero probability everywhere p does.)
Great. How are we going to find that?
We’ve run into a variation of this problem before: what should the initial and proposal distributions be when using Metropolis? If we’re using Metropolis to compute a posterior from a prior, then we can use the prior as the initial distribution. But it’s not at all clear in general how to choose a high-quality proposal distribution; there’s some art to it.
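For reference, here is a minimal hypothetical sketch of that setup in Python: a Metropolis sampler whose starting point is a draw from the prior and whose proposal is a symmetric random walk. All names and parameters below are illustrative, not the course’s code:

```python
import math
import random

def metropolis(unnormalized_posterior, prior_sample, proposal_sample, n=10_000):
    # Random-walk Metropolis with a symmetric proposal: start from a draw
    # out of the prior, then repeatedly propose a move and accept it with
    # probability min(1, posterior(candidate) / posterior(current)).
    x = prior_sample()
    samples = []
    for _ in range(n):
        candidate = proposal_sample(x)
        if random.random() < unnormalized_posterior(candidate) / unnormalized_posterior(x):
            x = candidate
        samples.append(x)
    return samples

# Example: unnormalized posterior proportional to a bell curve centered at 1.
posterior = lambda x: math.exp(-(x - 1.0) ** 2)
chain = metropolis(posterior, lambda: random.uniform(-5, 5), lambda x: x + random.gauss(0, 0.5))
print(sum(chain) / len(chain))  # roughly 1.0
```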
There is also some art in choosing appropriate helper distributions when doing importance sampling. Let’s once again take a look at our “black swan” situation:
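As a purely hypothetical sketch of what such a situation looks like in code (the distribution, function, and numbers below are made up for illustration, not the course’s own example), consider a function f that takes an enormous value on a tiny region that p almost never samples:

```python
import random

# Hypothetical "black swan": p is a standard normal, and f is ordinarily
# small but enormous on a tiny interval that p almost never visits, so a
# naive average of f over samples from p will usually miss it entirely.
def f(x):
    return -100_000.0 if 2.999 < x < 3.001 else 0.1

naive = sum(f(random.gauss(0.0, 1.0)) for _ in range(1000)) / 1000
print(naive)  # almost always 0.1, even though the true expectation is negative
```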