...

Solve Problems in Bayesian Inference

In this lesson, we will review the concepts learned so far and learn how to solve problems using Bayesian inference.

Review of the Concepts Learned So Far

Since we have covered many new concepts, this would be a good time to quickly review where we’re at (a short code sketch illustrating these operations follows the list):

  1. We’re representing a particular discrete probability distribution P(A) over a small number of members of a particular type A by IDiscreteDistribution<A>.
  2. We can condition a distribution — by discarding certain possibilities from it — with Where.
  3. We can project a distribution from one type to another with Select.
  4. A conditional probability P(B|A) — the probability of B given that some A is true — is represented as a likelihood function of type Func<A, IDiscreteDistribution<B>>.
  5. We can “bind” a likelihood function onto a prior distribution with SelectMany to produce a joint distribution.
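
To make these operations concrete, here is a minimal sketch (not the series’ actual implementation, whose constructors and factory methods we have not shown) of a weighted-list distribution type supporting Where, Select and SelectMany as described above. The Dist<T> name, its Weighted factory, and the rain/umbrella numbers below are all invented for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Prior: P(Rain). Likelihood: P(Umbrella | Rain). All weights are made up.
var rain = Dist<bool>.Weighted((true, 0.3), (false, 0.7));
Func<bool, Dist<bool>> umbrella = raining => raining
    ? Dist<bool>.Weighted((true, 0.9), (false, 0.1))
    : Dist<bool>.Weighted((true, 0.2), (false, 0.8));

// Bind the likelihood onto the prior to get the joint distribution.
var joint = rain.SelectMany(umbrella, (r, u) => (Rain: r, Umbrella: u));
foreach (var (outcome, weight) in joint.Support)
    Console.WriteLine($"{outcome}: {weight}"); // e.g. (True, True): 0.27, modulo floating-point rounding

class Dist<T>
{
    public IReadOnlyList<(T Outcome, double Weight)> Support { get; }
    Dist(IEnumerable<(T, double)> support) => Support = support.ToList();
    public static Dist<T> Weighted(params (T, double)[] support) => new(support);

    // Condition: keep only the outcomes that satisfy the predicate; the
    // surviving weights are read relative to one another.
    public Dist<T> Where(Func<T, bool> predicate) =>
        new(Support.Where(w => predicate(w.Outcome)));

    // Project: map each outcome to another type, carrying its weight along.
    public Dist<R> Select<R>(Func<T, R> projection) =>
        new Dist<R>(Support.Select(w => (projection(w.Outcome), w.Weight)));

    // Bind: the weight of the pair (a, b) in the joint distribution is
    // P(A = a) multiplied by P(B = b | A = a).
    public Dist<R> SelectMany<B, R>(Func<T, Dist<B>> likelihood, Func<T, B, R> projection) =>
        new Dist<R>(Support.SelectMany(a =>
            likelihood(a.Outcome).Support.Select(b =>
                (projection(a.Outcome, b.Outcome), a.Weight * b.Weight))));
}
```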

These are all good results and we hope you agree that we have already produced a much richer and more powerful abstraction over randomness than System.Random provides.


Bayes’ Theorem

In this lesson, everything is really going to come together to reveal that we can use these tools to solve interesting problems in probabilistic inference.

To show how, we’ll need to start by reviewing Bayes’ Theorem.

If we have a prior P(A) and a likelihood P(B|A), we know that we can “bind” them together to form the joint distribution. That is, the probability of A and B both happening is the probability of A multiplied by the probability that B happens given that A has happened:

P(A&B) = P(A) × P(B|A) ...
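
To put concrete numbers to this (the values are invented): if the probability of rain is P(A) = 0.3, and the probability that I carry an umbrella given that it is raining is P(B|A) = 0.9, then the probability that it rains and I carry an umbrella is P(A&B) = 0.3 × 0.9 = 0.27, which is exactly the weight that SelectMany assigned to that outcome in the sketch above.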