
Quiz on logloss

In question 2, Classifier 1's categorical cross-entropy loss is 1.822 and Classifier 2's is 2.0794, so Classifier 1 has the better (lower) loss.
So in question 3, which asks which classifier has better accuracy, the answer should be Classifier 1, since its loss is lower than Classifier 2's. But question 3 gives the opposite answer.

Hi @Shruti_Shrestha, Thanks for reaching out to us.
Let’s take an example to understand why the quiz says that Classifier 2 has better accuracy:
Suppose the following accuracy values of two classifiers on a balanced dataset of 100 images containing either cats or dogs:

  • Classifier 1: 80/100 = 0.80
  • Classifier 2: 95/100 = 0.95

However, on the 80 images classifier 1 gets right, it is extremely confident (for instance, when it thinks an image is of a cat, it is 100% sure that’s the case), and on the 20 it gets wrong it is not at all confident (e.g. when it said a cat image contained a dog, it was only 51% sure). In comparison, classifier 2 is extremely confident in its 5 wrong answers (it’s 100% convinced that an image that actually shows a dog is a cat) and not very confident about the 95 it gets right. In this case, classifier 2 would have a worse loss.
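To make this concrete, here is a minimal sketch in NumPy. The per-image probabilities are illustrative assumptions, with 0.99 standing in for “100% sure” so that the log stays finite:

```python
import numpy as np

def cross_entropy(p_true_class):
    # Average categorical cross-entropy, given the probability each
    # classifier assigned to the TRUE class of every image.
    return -np.mean(np.log(p_true_class))

# Hypothetical per-image probabilities on the 100-image cat/dog set.
# Classifier 1: very confident on its 80 correct answers, and only 51%
# sure of the wrong class on its 20 mistakes (0.49 on the true class).
clf1 = np.concatenate([np.full(80, 0.99), np.full(20, 0.49)])

# Classifier 2: only 60% sure on its 95 correct answers, but almost
# certain of the wrong class on its 5 mistakes (0.01 on the true class).
clf2 = np.concatenate([np.full(95, 0.60), np.full(5, 0.01)])

for name, p in [("Classifier 1", clf1), ("Classifier 2", clf2)]:
    accuracy = np.mean(p > 0.5)  # correct iff the true class gets > 50%
    print(f"{name}: accuracy = {accuracy:.2f}, loss = {cross_entropy(p):.3f}")
```

With these numbers, classifier 1 ends up with a much lower loss (about 0.15 vs. about 0.72) even though classifier 2 is correct on more images.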
So even though classifier 2 has the higher accuracy, it has the worse loss, as explained above: loss reflects how confident the predictions are, not just how many of them are correct, so a lower loss does not guarantee a higher accuracy. The same logic applies to the case you mention, which is why classifier 2 has better accuracy even though classifier 1 has the better loss.
Hope this helps :blush: