ML Naive Bayes


Naive Bayes is a classifier based on Bayes' theorem and the conditional independence assumption.

Input & output

Input: $x \in \mathcal{X} \subseteq \mathbb{R}^n$, where $n$ is the dimension of the feature vector $x$. Output: $y \in \mathcal{Y} = \{c_1, c_2, \dots, c_K\}$, where $y$ is the class label among $K$ classes.

Main idea

Based on Bayes' theorem:

$$P(Y=c_k \mid X=x) = \frac{P(X=x \mid Y=c_k)\, P(Y=c_k)}{P(X=x)}$$

$P(X=x \mid Y=c_k)$ is the likelihood, $P(Y=c_k)$ is the prior probability, $P(X=x)$ is the evidence, and $P(Y=c_k \mid X=x)$ is the posterior probability. The likelihood is a conditional probability, the evidence is a constant for a given instance, and the posterior is the probability we want to solve for, given the likelihood and the evidence.
To refresh your memory, you can think of $x$ as a phenomenon and $y$ as a rule.
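
To make the theorem concrete, here is a minimal sketch with made-up numbers (the spam scenario and all probability values below are hypothetical):

```python
# Minimal sketch of Bayes' theorem with hypothetical numbers.
# Suppose P(Y=spam) = 0.3, P(X="free money" | Y=spam) = 0.8,
# and the evidence P(X="free money") = 0.35.
prior = 0.3        # P(Y=c_k)
likelihood = 0.8   # P(X=x | Y=c_k)
evidence = 0.35    # P(X=x), constant for a given instance

posterior = likelihood * prior / evidence  # P(Y=c_k | X=x)
print(posterior)   # ~0.686: the class is quite likely given this evidence
```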

Learning method

1. First, we have to understand from the very beginning what we want to figure out: the conditional probability $P(X=x \mid Y=c_k)$. This conditional probability covers, for every class $c_k$ and every dimension $j$ of the feature vector $x$, the appearance of all possible values. Then we can apply the conditional independence assumption to calculate the combined probability:

$$P(X=x \mid Y=c_k) = \prod_{j=1}^{n} P(X^{(j)}=x^{(j)} \mid Y=c_k)$$

Without this assumption, if dimension $j$ of $x$ can take $S_j$ values, the total number of parameters for $K$ classes is $K \prod_{j=1}^{n} S_j$. For example, with $K=2$ classes and $n=10$ binary features that is already $2 \times 2^{10} = 2048$ parameters. It is really hard to estimate so many parameters.

2. Instead, we can figure out $P(Y=c_k \mid X=x)$.
I think of this alternative as not covering all possible values of $x$; it only focuses on the instance we want to predict.

3. So we can calculate the posterior for every class and find the maximum:

$$y = \arg\max_{c_k} \frac{P(Y=c_k) \prod_{j} P(X^{(j)}=x^{(j)} \mid Y=c_k)}{\sum_{k} P(Y=c_k) \prod_{j} P(X^{(j)}=x^{(j)} \mid Y=c_k)}$$

Ignoring the denominator, which is the same for every class, we get:

$$y = \arg\max_{c_k} P(Y=c_k) \prod_{j=1}^{n} P(X^{(j)}=x^{(j)} \mid Y=c_k)$$

Finally :) we easily apply Bayes' theorem to decide the class of a given instance $x$.
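
Here is a minimal sketch of this decision rule; the priors and per-dimension likelihood tables are hypothetical and assumed to be already estimated:

```python
import math

# Hypothetical, already-estimated parameters for K=2 classes and n=2 features.
priors = {"c1": 0.6, "c2": 0.4}  # P(Y=c_k)
# likelihoods[c][j][v] = P(X^(j)=v | Y=c_k)
likelihoods = {
    "c1": [{"S": 0.5, "M": 0.3, "L": 0.2}, {0: 0.7, 1: 0.3}],
    "c2": [{"S": 0.2, "M": 0.3, "L": 0.5}, {0: 0.4, 1: 0.6}],
}

def predict(x):
    """Return argmax_k of P(Y=c_k) * prod_j P(X^(j)=x^(j) | Y=c_k)."""
    scores = {}
    for c, prior in priors.items():
        # Sum log-probabilities instead of multiplying, for numerical stability.
        score = math.log(prior)
        for j, value in enumerate(x):
            score += math.log(likelihoods[c][j][value])
        scores[c] = score
    return max(scores, key=scores.get)

print(predict(["S", 1]))  # -> "c1": 0.6*0.5*0.3 = 0.09 beats 0.4*0.2*0.6 = 0.048
```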

Parameter estimation

1. Maximum likelihood
To calculate the prior probability:

$$P(Y=c_k) = \frac{\sum_{i=1}^{N} I(y_i=c_k)}{N}, \quad k=1,2,\dots,K$$

That is, we count class frequencies over the $N$ training instances, where $I(\cdot)$ is the indicator function. Likewise, for the $l$-th possible value $a_{jl}$ of the $j$-th feature, the conditional probability is:

$$P(X^{(j)}=a_{jl} \mid Y=c_k) = \frac{\sum_{i=1}^{N} I(x_i^{(j)}=a_{jl},\ y_i=c_k)}{\sum_{i=1}^{N} I(y_i=c_k)}$$
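A minimal counting sketch of these two estimates, using a tiny hypothetical training set of discrete features:

```python
from collections import Counter, defaultdict

# Hypothetical training data: each row is (feature vector x, label y).
data = [(["S", 0], "c1"), (["M", 1], "c1"), (["S", 1], "c2"), (["L", 1], "c2")]
N = len(data)

# Prior: P(Y=c_k) = count(y_i = c_k) / N
class_counts = Counter(y for _, y in data)
priors = {c: cnt / N for c, cnt in class_counts.items()}

# Conditional: P(X^(j)=a_jl | Y=c_k)
#            = count(x_i^(j) = a_jl and y_i = c_k) / count(y_i = c_k)
cond_counts = defaultdict(Counter)  # keyed by (class, feature index)
for x, y in data:
    for j, value in enumerate(x):
        cond_counts[(y, j)][value] += 1

def cond_prob(value, j, c):
    return cond_counts[(c, j)][value] / class_counts[c]

print(priors)                   # {'c1': 0.5, 'c2': 0.5}
print(cond_prob("S", 0, "c1"))  # 1 / 2 = 0.5
```
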
2. Bayes estimation

$$P_\lambda(X^{(j)}=a_{jl} \mid Y=c_k) = \frac{\sum_{i=1}^{N} I(x_i^{(j)}=a_{jl},\ y_i=c_k) + \lambda}{\sum_{i=1}^{N} I(y_i=c_k) + S_j \lambda}$$

The only difference is to add Laplace smoothing ($\lambda = 1$), to make every estimated probability strictly positive while still summing to one:

$$P_\lambda(X^{(j)}=a_{jl} \mid Y=c_k) > 0, \qquad \sum_{l=1}^{S_j} P_\lambda(X^{(j)}=a_{jl} \mid Y=c_k) = 1$$

This avoids zero probabilities for feature values that never appear with some class in the training set.
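A minimal sketch of the smoothed estimate with hypothetical counts (two training instances of class $c_k$, a feature with $S_j = 3$ possible values):

```python
from collections import Counter

# Hypothetical counts for one class c_k over feature j: two training
# instances, with feature values {"S": 1, "M": 1}; the feature can
# take S_j = 3 distinct values ("S", "M", "L").
value_counts = Counter({"S": 1, "M": 1})
n_ck = 2     # number of training instances with y_i = c_k
S_j = 3      # number of distinct values of feature j
lam = 1.0    # Laplace smoothing parameter (lambda = 1)

def cond_prob_smoothed(value):
    # (count + lambda) / (class count + S_j * lambda)
    return (value_counts[value] + lam) / (n_ck + S_j * lam)

print(cond_prob_smoothed("S"))  # (1 + 1) / (2 + 3) = 0.4
print(cond_prob_smoothed("L"))  # unseen value: (0 + 1) / (2 + 3) = 0.2, not 0
```
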
Example in practice

IPython file



Goodbye!

