
Confusion about the weighted metric used to measure the performance of the feed

Hi Team,
I am a bit confused about the usage of the weighted engagement metric here. Let me paste the paragraph from the content:

The user engagements are aggregated across all users’ feeds over a specific period of time. In the above diagram, two-thousand tweets were viewed in a day on Twitter. There were a total of seventy likes, eighty comments, twenty retweets, and five reports. The weighted impact of each of these user engagements is calculated by multiplying their occurrence aggregate by their weights. In this instance, Twitter is focusing on increasing “likes” the most. Therefore, “likes” have the highest weight. Note that the negative user action, i.e., “report”, has a negative weight to cast its negative impact on the score.
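For reference, here's my understanding of the mechanics in code (the weights are made up, since the lesson only says that "likes" are weighted highest and "report" is negative):

```python
# My reading of the computation, using the counts from the lesson's example.
# The weights here are made up; the lesson only fixes their relative ordering.
counts = {"like": 70, "comment": 80, "retweet": 20, "report": 5}
weights = {"like": 1.0, "comment": 0.5, "retweet": 0.8, "report": -2.0}

score = sum(counts[action] * weights[action] for action in counts)
print(score)  # 70*1.0 + 80*0.5 + 20*0.8 + 5*(-2.0) = 116.0
```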

While I understand the mechanics of computing it, I fail to see how to use it to optimise the feed. Concretely, here are the questions I have in mind:

  1. Is this weighted impact score used for comparing Feed-A vs. Feed-B?
  2. Is this used as an online metric? That is, during A/B testing, can you compare the overall weighted engagement between Baseline and Experiment?
  3. Is this engagement metric specific to the user? (It doesn't seem so.)

Thanks!

Hi @Chris2, thanks for reaching out to us.
1. The weighted impact of each engagement type is its occurrence count multiplied by its (positive or negative) weight. The weighted impacts are then summed to produce the score, which is normalized by the total number of active users. This gives you the engagement per active user, making the score comparable between feeds such as Feed-A and Feed-B (see the sketch below this list).
2. Yes, during A/B testing, we can compare the overall weighted engagement between Baseline and Experiment.
3. No, it is not user-specific: the user engagements are aggregated across all users’ feeds over a specific period of time.
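Here is a minimal sketch of points 1 and 2 in Python. All counts, weights, and active-user figures are illustrative placeholders, not values from the lesson:

```python
# Sketch of the weighted engagement score, normalized per active user.
# Counts, weights, and user totals below are illustrative placeholders.

def engagement_per_active_user(counts, weights, active_users):
    # Weighted impact of each action = occurrence count * weight; sum them,
    # then normalize by active users so scores are comparable across feeds.
    total = sum(counts[action] * weights[action] for action in counts)
    return total / active_users

weights = {"like": 1.0, "comment": 0.5, "retweet": 0.8, "report": -2.0}

# A/B comparison: Baseline vs. Experiment (hypothetical one-day aggregates).
baseline = engagement_per_active_user(
    {"like": 70, "comment": 80, "retweet": 20, "report": 5},
    weights, active_users=500)
experiment = engagement_per_active_user(
    {"like": 90, "comment": 75, "retweet": 30, "report": 3},
    weights, active_users=500)

print(baseline)    # (70*1.0 + 80*0.5 + 20*0.8 + 5*-2.0) / 500 = 0.232
print(experiment)  # (90*1.0 + 75*0.5 + 30*0.8 + 3*-2.0) / 500 = 0.291
```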

Hope this helps.
Happy Learning 🙂

Hi @Muntha_Amjad, thanks for the reply, this is helpful!

A quick follow-up question: is this designed to be used as the final label in our training data? I'd like to understand how this works in practice: when we are asked to improve engagement and we have multiple feedback signals/interactions indicating positive and negative engagement, how should we model this as a supervised ML problem?