Maximum Likelihood Estimation and Its Applications
Get an introduction to maximum likelihood estimation and see examples using interactive Python code.
What is maximum likelihood estimation?
Maximum likelihood estimation (MLE) is a statistical technique for estimating the parameters of a model by finding the parameter values under which the observed data is most probable. To do this, the model is first specified, then the likelihood of the data given the model is computed, and finally the parameters are adjusted to maximize that likelihood. The parameter values that maximize the likelihood are taken as the estimates of the model parameters.
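The procedure above can be sketched with a minimal example. This is an illustrative sketch, not part of the original lesson: the coin-flip data and the grid of candidate values are assumptions made here for demonstration. We model flips as Bernoulli(p) and search for the p that maximizes the log-likelihood of the observed sample.

```python
import math

# Hypothetical sample: 10 coin flips (1 = heads), assumed to be Bernoulli(p).
data = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]

def log_likelihood(p, flips):
    # Log-probability of the whole sample under parameter p.
    return sum(math.log(p) if x == 1 else math.log(1 - p) for x in flips)

# Step 1: specify the model (Bernoulli, done above).
# Step 2: compute the likelihood for candidate parameter values.
# Step 3: keep the parameter value that maximizes it.
candidates = [i / 1000 for i in range(1, 1000)]
p_hat = max(candidates, key=lambda p: log_likelihood(p, data))

print(p_hat)  # 0.7 — the sample proportion of heads, as MLE theory predicts
```

The grid search is deliberately naive; in practice the maximization is done analytically or with a numerical optimizer, but the logic is the same.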
Assumptions
The main assumption to check before using MLE is that the data is i.i.d., that is, independent and identically distributed:
Identically distributed means that all items in the sample are drawn from the same probability distribution: there are no overall trends, and the distribution does not change across observations.
Independent means that the sample items are all independent events. In other words, they are not connected to each other in any way; knowledge of the value of one observation gives no information about the value of any other, and vice versa.
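The i.i.d. assumption is what makes MLE tractable: independence lets the joint likelihood factor into a product of individual densities, so the log-likelihood becomes a simple sum. The sketch below (an illustration with made-up data, not from the original lesson) uses the well-known closed-form MLE solutions for a Normal distribution: the sample mean and the biased variance.

```python
import math

# Hypothetical i.i.d. sample, assumed drawn from a Normal(mu, sigma^2).
data = [4.2, 5.1, 4.8, 5.5, 4.9, 5.0]
n = len(data)

# Closed-form MLE solutions for the Normal distribution:
mu_hat = sum(data) / n                              # sample mean
var_hat = sum((x - mu_hat) ** 2 for x in data) / n  # biased variance (divide by n, not n - 1)

def log_likelihood(mu, var):
    # Independence: the joint log-likelihood is the sum of per-point log-densities.
    return sum(-0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)
               for x in data)

# The MLE scores at least as high as nearby alternative parameter values.
assert log_likelihood(mu_hat, var_hat) >= log_likelihood(mu_hat + 0.1, var_hat)
print(round(mu_hat, 3), round(var_hat, 3))
```

Note that the MLE variance divides by n rather than n - 1, so it is biased; this is a standard property of the maximum likelihood estimator for the Normal variance.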
Parametric and nonparametric MLE
There are two types of MLE: parametric and nonparametric. The two types are compared in the table below: