Naive Bayes Part 1
Naive Bayes algorithms are based on Bayes' Rule, which we discussed in the previous lessons, and they work very well for natural language problems like document classification and spam filtering. We'll uncover more of the details behind them in this lesson.
Naive Bayes
The Naive Bayes algorithm is based on Bayes' Rule, which is stated below.
“Bayes’ theorem (alternatively Bayes’ law or Bayes’ rule) describes the probability of an event, based on prior knowledge of conditions that might be related to the event.”
Bayes' theorem is stated as below.

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}$$

- $P(B)$ is the probability of $B$. It is called the Evidence.
- $P(A \mid B)$ is the conditional probability of $A$, given $B$ has occurred. It is called the Posterior Probability, meaning the probability of an event after evidence is seen.
- $P(B \mid A)$ is the conditional probability of $B$, given $A$ has occurred. It is called the Likelihood.
- $P(A)$ is the probability of $A$. It is called the Prior Probability, meaning the probability of an event before evidence is seen.
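To make these terms concrete, here is a minimal numeric sketch in Python; the prior, likelihood, and evidence values are made up purely for illustration.

```python
# Hypothetical numbers, purely for illustration.
prior = 0.3        # P(A): prior probability of the event A
likelihood = 0.8   # P(B|A): probability of the evidence B given A
evidence = 0.5     # P(B): overall probability of the evidence B

# Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B)
posterior = likelihood * prior / evidence
print(posterior)   # 0.48
```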
Naive Bayes methods go with the “naive” assumption of conditional independence between every pair of features given the value of the class variable.
Mathematical Intuition
We will work with a fictional dataset about playing golf, as seen below.
Outlook | Temperature | Humidity | Windy | Play Golf |
---|---|---|---|---|
Rainy | Hot | High | False | No |
Rainy | Hot | High | True | No |
Overcast | Hot | High | False | Yes |
Sunny | Mild | High | False | Yes |
Sunny | Cool | Normal | False | Yes |
Sunny | Cool | Normal | True | No |
Overcast | Cool | Normal | True | Yes |
Rainy | Mild | High | False | No |
Rainy | Cool | Normal | False | Yes |
Sunny | Mild | Normal | False | Yes |
Rainy | Mild | Normal | True | Yes |
Overcast | Mild | High | True | Yes |
Overcast | Hot | Normal | False | Yes |
Sunny | Mild | High | True | No |
- In the above dataset, the independent features ($X$) are Temperature, Humidity, Outlook, and Windy.
- In the above dataset, the dependent feature ($y$) is Play Golf (see the sketch below for this split).
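The following is a minimal sketch of this split, assuming pandas is available; the variable and column names are just one possible choice, and only the first few rows of the table are included for brevity.

```python
import pandas as pd

# A few rows of the fictional golf dataset from the table above.
data = pd.DataFrame({
    "Outlook":     ["Rainy", "Rainy", "Overcast", "Sunny", "Sunny"],
    "Temperature": ["Hot",   "Hot",   "Hot",      "Mild",  "Cool"],
    "Humidity":    ["High",  "High",  "High",     "High",  "Normal"],
    "Windy":       [False,   True,    False,      False,   False],
    "PlayGolf":    ["No",    "No",    "Yes",      "Yes",   "Yes"],
})

X = data.drop(columns=["PlayGolf"])  # independent features
y = data["PlayGolf"]                 # dependent feature (class variable)
```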
Assumption of Naive Bayes
Naive Bayes algorithms assume that each input feature is independent of the others and that each makes an equal contribution to the outcome (Play Golf). These assumptions are generally not true in real-world examples, but the algorithms still work well in practice.
Applying Bayes’ Theorem
Applying Bayes' Theorem, we get the following representation:

$$P(y \mid X) = \frac{P(X \mid y)\,P(y)}{P(X)}$$

where $y$ is the class variable and $X$ is the feature vector (of size $n$), where:

$$X = (x_1, x_2, \ldots, x_n)$$
From the above table, taking the third row as an example:

$$X = (\text{Overcast}, \text{Hot}, \text{High}, \text{False})$$

$P(y \mid X)$ here means the probability of playing golf given that the weather conditions are an overcast outlook, a hot temperature, high humidity, and no wind.
Applying the Independence Assumption
If one event $A$ is not dependent on another event $B$, then the events are said to be independent and their joint probability is calculated as below.

$$P(A, B) = P(A)\,P(B)$$
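For instance, a quick numeric sketch with two independent fair coin flips (the probabilities are chosen purely for illustration):

```python
# Two independent events, e.g. two separate fair coin flips landing heads.
p_a = 0.5          # P(A)
p_b = 0.5          # P(B)

# For independent events, the joint probability is the product of the individual probabilities.
p_a_and_b = p_a * p_b
print(p_a_and_b)   # 0.25
```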
Applying the independence assumption to the above equation, we proceed as follows.

$$P(y \mid x_1, \ldots, x_n) = \frac{P(x_1 \mid y)\,P(x_2 \mid y) \cdots P(x_n \mid y)\,P(y)}{P(x_1)\,P(x_2) \cdots P(x_n)}$$

which can be written as

$$P(y \mid x_1, \ldots, x_n) = \frac{P(y)\prod_{i=1}^{n} P(x_i \mid y)}{P(x_1)\,P(x_2) \cdots P(x_n)}$$
Now, as the denominator remains constant for a given input, we can remove that term:

$$P(y \mid x_1, \ldots, x_n) \propto P(y)\prod_{i=1}^{n} P(x_i \mid y)$$
Now, we need to create a classifier model. For this, we find the probability of a given set of inputs for all possible values of the class variable $y$ and pick the output with the maximum probability. This can be expressed mathematically as:

$$\hat{y} = \arg\max_{y} P(y)\prod_{i=1}^{n} P(x_i \mid y)$$
So, finally, we are left with the task of calculating $P(y)$ and $P(x_i \mid y)$. We can use Maximum A Posteriori (MAP) estimation to estimate $P(y)$ and $P(x_i \mid y)$.
Please note that $P(y)$ is also called the class probability and $P(x_i \mid y)$ is called the conditional probability.
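To see these pieces end to end, here is a minimal from-scratch sketch on the golf dataset. The `predict` helper and the plain counting estimates are illustrative assumptions (no smoothing or other refinements), not a full implementation.

```python
from collections import Counter, defaultdict

# The fictional golf dataset from the table above: (Outlook, Temperature, Humidity, Windy) -> Play Golf
rows = [
    ("Rainy", "Hot", "High", False, "No"),      ("Rainy", "Hot", "High", True, "No"),
    ("Overcast", "Hot", "High", False, "Yes"),  ("Sunny", "Mild", "High", False, "Yes"),
    ("Sunny", "Cool", "Normal", False, "Yes"),  ("Sunny", "Cool", "Normal", True, "No"),
    ("Overcast", "Cool", "Normal", True, "Yes"), ("Rainy", "Mild", "High", False, "No"),
    ("Rainy", "Cool", "Normal", False, "Yes"),  ("Sunny", "Mild", "Normal", False, "Yes"),
    ("Rainy", "Mild", "Normal", True, "Yes"),   ("Overcast", "Mild", "High", True, "Yes"),
    ("Overcast", "Hot", "Normal", False, "Yes"), ("Sunny", "Mild", "High", True, "No"),
]

class_counts = Counter(r[-1] for r in rows)   # counts of each class value y
feature_counts = defaultdict(Counter)         # per class: counts of (feature index, feature value)
for *features, label in rows:
    for i, value in enumerate(features):
        feature_counts[label][(i, value)] += 1

def predict(x):
    """Return the class with the highest P(y) * prod_i P(x_i | y)."""
    best_class, best_score = None, -1.0
    for label, n_label in class_counts.items():
        score = n_label / len(rows)                                # P(y)
        for i, value in enumerate(x):
            score *= feature_counts[label][(i, value)] / n_label   # P(x_i | y)
        if score > best_score:
            best_class, best_score = label, score
    return best_class

print(predict(("Overcast", "Hot", "High", False)))                 # "Yes"
```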
The different Naive Bayes classifiers differ mainly by the assumptions they make regarding the distribution of $P(x_i \mid y)$.
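For example, scikit-learn provides several such variants (GaussianNB, MultinomialNB, BernoulliNB, and CategoricalNB), each encoding a different assumption about $P(x_i \mid y)$. Below is a minimal sketch, assuming scikit-learn is installed and using only a few rows of the golf dataset for brevity.

```python
from sklearn.naive_bayes import CategoricalNB
from sklearn.preprocessing import OrdinalEncoder

# A few rows of the golf dataset; CategoricalNB expects features encoded as integer categories.
X_raw = [["Rainy", "Hot", "High", "False"],
         ["Overcast", "Hot", "High", "False"],
         ["Sunny", "Mild", "High", "False"],
         ["Sunny", "Cool", "Normal", "True"],
         ["Rainy", "Mild", "High", "False"]]
y = ["No", "Yes", "Yes", "No", "No"]

encoder = OrdinalEncoder()
X = encoder.fit_transform(X_raw)

model = CategoricalNB()
model.fit(X, y)
print(model.predict(encoder.transform([["Overcast", "Hot", "High", "False"]])))  # ['Yes'] on this toy subset
```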
Applying the Mathematical Intuition
We can construct the following tables from the above dataset to ease the calculations.
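If you want to build such tables programmatically, here is a minimal sketch assuming pandas is available; it computes the frequency and likelihood tables for the Outlook feature.

```python
import pandas as pd

# Rebuild the golf dataset (same rows as the table above).
records = [
    ("Rainy", "Hot", "High", False, "No"),      ("Rainy", "Hot", "High", True, "No"),
    ("Overcast", "Hot", "High", False, "Yes"),  ("Sunny", "Mild", "High", False, "Yes"),
    ("Sunny", "Cool", "Normal", False, "Yes"),  ("Sunny", "Cool", "Normal", True, "No"),
    ("Overcast", "Cool", "Normal", True, "Yes"), ("Rainy", "Mild", "High", False, "No"),
    ("Rainy", "Cool", "Normal", False, "Yes"),  ("Sunny", "Mild", "Normal", False, "Yes"),
    ("Rainy", "Mild", "Normal", True, "Yes"),   ("Overcast", "Mild", "High", True, "Yes"),
    ("Overcast", "Hot", "Normal", False, "Yes"), ("Sunny", "Mild", "High", True, "No"),
]
data = pd.DataFrame(records, columns=["Outlook", "Temperature", "Humidity", "Windy", "PlayGolf"])

# Frequency table: how often each Outlook value occurs with each class.
freq = pd.crosstab(data["Outlook"], data["PlayGolf"])
print(freq)

# Likelihood table: P(Outlook = value | PlayGolf = class), column-wise normalization.
likelihood = freq / freq.sum(axis=0)
print(likelihood)
```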