Model recalibration in ad prediction

I'm not clearly getting why this is needed. Can someone please explain?



Course: https://www.educative.io/collection/10370001/6237869033127936
Lesson: https://www.educative.io/collection/page/10370001/6237869033127936/6476235518509056

Hey @Ptk84!
Model recalibration refers to comparing the output a prediction model actually produces with the output we expect. In model recalibration, we adjust the model so that the distribution and behavior of its predicted probabilities match the distribution and behavior of the probabilities observed in the training data.
Let me give you an example that should help you understand this better.
Suppose we have 10 data points. Out of these 10 points, 7 belong to the positive class and 3 to the negative class, so the fraction of positive points is 0.7 (the observed probability). This means there is a 70% probability that a point in this sample has the positive class label. Now we take the average of the probability estimates (predicted probabilities) produced by the model. We expect this average to be around 0.7; if it is far from 0.7, our model is not calibrated.
We have to recalibrate the model so that the predicted probability lines up with the observed probability.
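Here is a minimal sketch of that check in Python. The labels and probability estimates below are made up purely for illustration; they are not from the course:

```python
import numpy as np

# Hypothetical example: 10 samples, 7 positive and 3 negative,
# with probability estimates produced by some trained model.
y_true = np.array([1, 1, 1, 1, 1, 1, 1, 0, 0, 0])
y_prob = np.array([0.9, 0.8, 0.85, 0.7, 0.6, 0.95, 0.75, 0.2, 0.3, 0.4])

observed = y_true.mean()   # fraction of positives in the data (0.7)
predicted = y_prob.mean()  # average predicted probability

print(f"observed positive rate: {observed:.2f}")
print(f"mean predicted prob:    {predicted:.2f}")
# If these two numbers diverge noticeably, the model is poorly calibrated
# and recalibration is warranted.
```

In practice, scikit-learn offers `sklearn.calibration.calibration_curve` to compare observed and predicted probabilities across probability bins, and `CalibratedClassifierCV` (Platt scaling or isotonic regression) to recalibrate a trained classifier.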


Thanks Aiman_Fiaz. But why don't we use the F1 score in such a case?