When building a naïve Bayes classifier, we assume that the attributes are conditionally independent given the class label. The benefit of this assumption is:
A. It makes it very simple to calculate the joint probabilities
B. You can simply add probabilities for the individual attributes allowing for fast calculation
C. The a priori probabilities for all possible outputs are the same
D. The a posteriori probabilities for all possible outputs are the same
E. B & C
Naive Bayes is a classification algorithm for binary (two-class) and multi-class classification problems. The technique is easiest to understand when described using binary or categorical input values.
It is called naive Bayes (or idiot Bayes) because the calculation of the probabilities for each hypothesis is simplified to make it tractable. Rather than attempting to calculate the joint probability of the attribute values, P(d1, d2, d3|h), the attributes are assumed to be conditionally independent given the target value, so the probability is calculated as P(d1|h) * P(d2|h) * P(d3|h).
This is a very strong assumption that rarely holds in real data, i.e. it is unusual for attributes not to interact at all. Nevertheless, the approach performs surprisingly well even on data where the assumption is violated.
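As a concrete illustration, here is a minimal sketch in Python of how those per-attribute probabilities combine. The toy weather dataset and the posterior_scores helper are hypothetical, invented just for this example; a real implementation would also apply Laplace smoothing so that an unseen attribute value does not zero out a whole class.

from collections import Counter, defaultdict

# Hypothetical training data: (outlook, windy) -> play
data = [
    (("sunny", "no"), "yes"),
    (("sunny", "yes"), "no"),
    (("rainy", "no"), "yes"),
    (("rainy", "yes"), "no"),
    (("overcast", "no"), "yes"),
    (("overcast", "yes"), "yes"),
]

# Class counts give the priors P(h)
class_counts = Counter(label for _, label in data)

# attr_counts[class][attribute index][value] gives the counts
# behind each conditional P(d_i|h)
attr_counts = defaultdict(lambda: defaultdict(Counter))
for attrs, label in data:
    for i, value in enumerate(attrs):
        attr_counts[label][i][value] += 1

def posterior_scores(attrs):
    """Score each class by P(h) * P(d1|h) * P(d2|h) * ... (unnormalized)."""
    total = sum(class_counts.values())
    scores = {}
    for label, count in class_counts.items():
        score = count / total  # prior P(h)
        for i, value in enumerate(attrs):
            # conditional P(d_i|h), estimated from per-class counts
            score *= attr_counts[label][i][value] / count
        scores[label] = score
    return scores

print(posterior_scores(("sunny", "no")))  # {'yes': 0.125, 'no': 0.0}

Note how the conditional independence assumption reduces the whole calculation to counting and multiplying: no joint distribution over attribute combinations ever has to be estimated, which is exactly the simplification option A describes.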
Option A is correct: the conditional independence assumption makes it very simple to calculate the joint probabilities.