Spillover effect in A/B testing in a social network environment

In the ‘Feed Based System - Online Experimentation’ section, I think it would be nice to add some discussion of the likely spillover effect in A/B tests in a social network environment.

For example, in this Twitter feed scenario, when we randomly split 1% of users into control and treatment, some users in control may be followers of users in treatment, and vice versa. When the treatment changes the behaviour or boosts the engagement of users in the treatment group, their friends in control may think, ‘Oh, John has commented on/liked my tweet. Let me comment something back.’ Now the engagement metrics go up in both groups, and one of the assumptions behind causal inference setups like A/B testing - SUTVA (Stable Unit Treatment Value Assumption) - is violated. Since in this Twitter case we use only 1% of traffic instead of 100%, a relatively simple fix could be to make sure users in control and treatment are not followers of each other, or are at least X hops apart in the social graph.
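The "at least X hops apart" idea above could be sketched roughly as follows. This is only a toy illustration (not from the book): it greedily assigns users to an arm and skips any user who would sit too close to the opposite arm in the follower graph. The graph representation, the BFS helper, and the greedy skip rule are all my own assumptions for the sketch.

```python
import random
from collections import deque


def bfs_within(graph, start, max_hops):
    """Return all nodes within max_hops of start (excluding start itself).

    graph: dict mapping user -> set of directly connected users.
    """
    seen = {start}
    near = set()
    frontier = deque([(start, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == max_hops:
            continue  # don't expand past the hop limit
        for nbr in graph.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                near.add(nbr)
                frontier.append((nbr, dist + 1))
    return near


def assign_with_buffer(graph, users, min_hops=2, seed=42):
    """Greedily split users into control/treatment, leaving out any user
    whose graph distance to the opposite arm would be below min_hops.

    min_hops=2 means control and treatment users are never direct friends.
    """
    rng = random.Random(seed)
    control, treatment = set(), set()
    for user in rng.sample(sorted(users), len(users)):
        # Users strictly closer than min_hops to this one:
        too_close = bfs_within(graph, user, min_hops - 1)
        group = treatment if rng.random() < 0.5 else control
        other = control if group is treatment else treatment
        if too_close & other:
            continue  # would violate the buffer - leave this user unassigned
        group.add(user)
    return control, treatment
```

The unassigned users act as a "buffer zone" between the two arms; the cost is a smaller (and possibly biased) sample, which is why this only looks attractive when the experiment uses a small slice of traffic anyway.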

LinkedIn and Facebook have done research and published papers on how they handle this spillover effect, with some fancier fixes. It is also discussed in Chapter 22, ‘Leakage and Interference between Variants’, of the book ‘Trustworthy Online Controlled Experiments’, which is where I first learned about this.
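One of those fancier fixes is cluster-based (network) randomization: partition the social graph into clusters and randomize whole clusters, so directly connected users always land in the same arm. The sketch below is my own toy version using connected components as clusters; real social graphs are mostly one giant component, so production systems use community-detection or ego-cluster methods instead, and the function names here are made up for illustration.

```python
import random


def connected_components(graph):
    """Find connected components with an iterative DFS.

    graph: dict mapping user -> set of directly connected users.
    Each component is treated as one randomization cluster.
    """
    seen, components = set(), []
    for start in graph:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            comp.add(node)
            stack.extend(graph.get(node, ()))
        components.append(comp)
    return components


def cluster_randomize(graph, treatment_fraction=0.5, seed=7):
    """Assign entire clusters to control or treatment, so no edge
    crosses between the two arms."""
    rng = random.Random(seed)
    control, treatment = set(), set()
    for comp in connected_components(graph):
        if rng.random() < treatment_fraction:
            treatment |= comp
        else:
            control |= comp
    return control, treatment
```

Because units of randomization are now clusters rather than users, the effective sample size drops and variance goes up - that trade-off is exactly what the LinkedIn/Facebook papers and Chapter 22 of the book dig into.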