4. Suppose we have a classifier that decides whether an image contains a human face or not. We have 100 images, 50 of which contain human faces. Our classifier correctly classifies 30 images as containing human faces, but at the same time wrongly classifies 30 other images as containing human faces. What are the precision and recall? (10 points)
5. Suppose we use k-fold cross-validation. How many times should we train the classifier? (10 points)
6. I have two machine learning algorithms, A1 and A2. A1 has a training error that is smaller than A2's training error. Can we conclude that A1 is the better algorithm? Briefly explain your reasoning. (10 points)
With 30 true positives (TP), 30 false positives (FP), and 50 − 30 = 20 false negatives (FN):
Precision = TP/(TP+FP) = 30/(30+30) = 0.5
Recall = TP/(TP+FN) = 30/(30+20) = 0.6
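As a quick check, here is a minimal Python sketch; the counts come from the question, and the helper function name is just for illustration:

```python
def precision_recall(tp, fp, fn):
    """Compute precision and recall from raw confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Counts from the question: 50 face images, 30 correctly flagged (TP),
# 30 non-face images wrongly flagged (FP), so FN = 50 - 30 = 20.
p, r = precision_recall(tp=30, fp=30, fn=20)
print(f"precision = {p:.2f}, recall = {r:.2f}")  # precision = 0.50, recall = 0.60
```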
With k-fold cross-validation the classifier is trained k times, once for each fold: in every round, k − 1 folds are used for training and the remaining fold is used for validation. The number of folds is usually chosen with the size of the dataset in mind; for example, with only 10 instances in your data, a large k leaves each validation fold with very few examples. k-fold cross-validation is used for two main purposes: tuning hyperparameters and obtaining a more reliable estimate of a model's performance.
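A minimal sketch of the k training runs, using scikit-learn's KFold; the dataset and the logistic-regression model here are placeholders, not part of the question:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Placeholder data: 100 samples, 5 features, binary labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = rng.integers(0, 2, size=100)

k = 5
scores = []
for train_idx, val_idx in KFold(n_splits=k, shuffle=True, random_state=0).split(X):
    model = LogisticRegression()            # a fresh model for each fold
    model.fit(X[train_idx], y[train_idx])   # one of the k training runs
    scores.append(accuracy_score(y[val_idx], model.predict(X[val_idx])))

print(f"trained {k} times, mean validation accuracy = {np.mean(scores):.2f}")
```

Note that if k-fold cross-validation is also used to tune hyperparameters, the k training runs are repeated once per hyperparameter setting.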
No, we cannot conclude that A1 is better. A lower training error only tells us how well a model fits the data it was trained on; A1 may simply be overfitting. To compare the two algorithms we need their performance on held-out (test or validation) data, and even that comparison depends on how the data was split and on other factors such as the amount and quality of the data.
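A toy illustration of this point (not the actual A1/A2 from the question): a deep decision tree can reach near-zero training error yet have a higher test error than a shallow tree on noisy data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic noisy data for illustration only.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)  # labels with noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, depth in [("A1-like (deep tree)", None), ("A2-like (shallow tree)", 2)]:
    clf = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    print(name,
          "train error =", round(1 - clf.score(X_tr, y_tr), 3),
          "test error =", round(1 - clf.score(X_te, y_te), 3))
```

The deep tree typically shows a lower training error but a higher test error than the shallow one, which is exactly why training error alone cannot decide which algorithm is better.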