Learning in Probabilistic Models: Maximum Likelihood Estimate
Learn about the maximum likelihood estimate and its generalization.
Maximum likelihood estimate (MLE)
We will now turn to an important principle that will further guide our learning process. Learning means, of course, determining the parameters of a model from example data. Here, we introduce the maximum likelihood principle, which states that we choose the parameters of a probabilistic model in the following way:
Given a parameterized hypothesis function $h(x; \theta)$, we choose as parameters $\theta$ the values that make the training data most likely under the assumptions of the model.
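Written out (using $(x^{(i)}, y^{(i)})$, $i = 1, \ldots, n$, as an assumed notation for the training examples, not one fixed by the lesson), the principle reads:

$$
\hat{\theta} = \arg\max_{\theta} \; p\big(y^{(1)}, \ldots, y^{(n)} \,\big|\, x^{(1)}, \ldots, x^{(n)}; \theta\big),
$$

which, for independent and identically distributed training examples, factorizes into $\prod_{i=1}^{n} p\big(y^{(i)} \mid x^{(i)}; \theta\big)$.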
The principle is stated here in its most general form, for any random data. In our case, we start with a model of the form $p(y \mid x; \theta)$, which specifies a probabilistic regression model for $y$ given the input data $x$. This is why the input data appears on the right side of the conditioning bar. However, we will see shortly that in $p(y \mid x; \theta)$ we eventually replace all the data with the training data, so that we end up with a function of the parameters alone: the likelihood function. Hence, in this case, it doesn't matter whether we treat the input data as given or as random variables.
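As a minimal sketch of this "plugging in" step, the snippet below assumes an illustrative linear-Gaussian model $y \sim \mathcal{N}(\theta x, \sigma^2)$ with synthetic data (the model, data, and names here are assumptions for illustration, not the lesson's exact setup). Once the training data is fixed, the log-likelihood becomes a function of $\theta$ alone, which we can maximize:

```python
import numpy as np

def log_likelihood(theta, x_train, y_train, sigma=0.1):
    """Log-likelihood of the model y ~ N(theta * x, sigma^2).

    With x_train and y_train fixed, this is a function of theta alone.
    """
    residuals = y_train - theta * x_train
    # Sum of log N(y_i; theta * x_i, sigma^2) over all training examples
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - residuals**2 / (2 * sigma**2))

# Synthetic training data (illustrative only)
rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, size=50)
y_train = 2.0 * x_train + rng.normal(0.0, 0.1, size=50)

# Evaluate the likelihood function on a grid of candidate parameters
thetas = np.linspace(0.0, 4.0, 401)
values = [log_likelihood(t, x_train, y_train) for t in thetas]
theta_mle = thetas[np.argmax(values)]
print(f"theta maximizing the likelihood: {theta_mle:.2f}")  # close to 2.0
```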
Applying the maximum likelihood estimate
Let’s illustrate this with the Gaussian example of a robot that we discussed in the previous lesson, with the parameterized model

$$
p(y; \mu, \sigma) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left(-\frac{(y - \mu)^2}{2\sigma^2}\right),
$$

where the mean $\mu$ and the standard deviation $\sigma$ are the parameters to be learned.
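As a short sketch of the result (assuming the robot's sensor readings $y_1, \ldots, y_n$ are modeled as i.i.d. Gaussian samples; the data below is synthetic and purely illustrative), maximizing the Gaussian log-likelihood yields closed-form estimates: the sample mean for $\mu$ and the biased sample standard deviation for $\sigma$:

```python
import numpy as np

# Synthetic sensor readings standing in for the robot example (illustrative)
rng = np.random.default_rng(42)
y = rng.normal(loc=3.0, scale=0.5, size=200)

# Setting the gradient of the Gaussian log-likelihood to zero gives:
mu_mle = np.mean(y)  # sample mean
# Note: the MLE for sigma divides by n, not n - 1 (biased estimator)
sigma_mle = np.sqrt(np.mean((y - mu_mle) ** 2))

print(f"mu_MLE    = {mu_mle:.3f}")     # close to 3.0
print(f"sigma_MLE = {sigma_mle:.3f}")  # close to 0.5
```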