This guide^{1} helps you think through how to design and analyze experiments when there is a risk of “interference” between units. This has been an important area of research in recent years and there have been real gains in our understanding of how to detect spillover effects. Spillovers arise whenever one unit is affected by the treatment status of another unit. Spillovers make it difficult to work out causal effects (we say why below). Experimentalists worry a lot about them, but the complications that spillovers create are not unique to randomized experiments.

Spillovers refer to a broad class of instances in which a given subject is influenced by whether other subjects are treated.

Here are some examples of how spillovers (or “interference”) might occur:

- **Public Health:** Providing an infectious disease vaccine to some individuals may decrease the probability that nearby individuals become ill.
- **Criminology:** Increased enforcement may displace crime to nearby areas.
- **Education:** Students may share newly acquired knowledge with friends.
- **Marketing:** Advertisements displayed to one person may increase product recognition among her work colleagues.
- **Politics:** Election monitoring at some polling stations may displace fraud to neighboring polling stations.
- **Economics:** Lowering the cost of production for one firm may change the market price faced by other firms.
- **Within-subjects experiments across many domains:** The possibility that treatment effects persist or that treatments are anticipated can be modeled as a kind of spillover.

These examples share some features:

- An *intervention*: the vaccine, increased enforcement, election monitoring;
- An *outcome*: incidence of disease, crime rates, electoral fraud; and
- A “*network*” that links units together: face-to-face social interaction, geographic proximity within a city, road distance between polling stations.

The network is a crucial feature of any spillover analysis. For each unit, it describes the set of other units whose treatment assignments “matter.” To take the education example: it may matter to me if you treat another student in my classroom, but it probably doesn’t matter if you treat a student in a different city. I’m connected to the other students in my classroom but not to students in other cities.

If unaddressed, spillovers “bias” standard estimates of treatment effects (e.g., differences-in-means). “Bias” is in scare quotes because those estimators will return unbiased estimates of causal effects, just not the causal effects that most researchers are interested in.

Imagine an experiment in which there are 50 villages. A treatment (such as a vaccination program) is randomly assigned to some villages but not others. Let’s assume that a village receives spillovers if another village within a 5km radius is treated. Imagine the outcome is some measure of health (such as the prevalence of an infectious disease). If we naively compare treated villages to untreated villages, we may not recover an unbiased estimate of the direct effect of treating a village. The reason is that each village’s outcome is affected not only by whether that village is treated, but also by whether neighboring villages are treated.

In order to see how spillovers can distort estimated treatment effects, consider the graph below:

The graph considers a situation in which the true direct effect of treating a village is 1, and shows how estimated treatment effects can be higher or lower than 1 depending on the direction and size of spillovers as well as the number of villages treated.

In this case, positive spillovers cause a negative bias and vice-versa. This is because when spillovers are positive, the control group mean is inflated, so the difference-in-means is smaller than it otherwise would have been.^{2} The extent of the bias, however, depends on the number of villages treated as well as the magnitude of the spillover effect. In this example, the more villages are treated, the smaller the bias resulting from spillovers. This is because when more villages are treated, both the treatment and control group means are similarly inflated by positive spillovers and deflated by negative spillovers.
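The intuition can be checked with a small simulation sketch (all numbers invented for illustration: 50 villages on a circle, each village’s neighbors are the two adjacent villages, a true direct effect of 1, and a positive spillover of 1 for any village with a treated neighbor):

```r
set.seed(123)
n_sims <- 4000
ests <- replicate(n_sims, {
  Z <- sample(rep(c(1, 0), times = c(10, 40)))  # treat 10 of 50 villages
  # A village is exposed to spillover if either neighbor on the circle is treated
  neighbor_treated <- as.numeric((Z[c(2:50, 1)] + Z[c(50, 1:49)]) > 0)
  Y <- 1 * Z + 1 * neighbor_treated             # direct effect 1, positive spillover 1
  mean(Y[Z == 1]) - mean(Y[Z == 0])             # naive difference-in-means
})
mean(ests)  # falls short of the true direct effect of 1: negative bias
```

The bias arises because a treated village “uses up” one of the ten treatment slots, so control villages are slightly more likely than treated villages to have a treated neighbor; positive spillovers therefore inflate the control mean more, pushing the naive estimate downward.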

Often, evaluators are trying to estimate what would happen if a program were rolled out to everyone. Evidence from an RCT that ignores spillovers could greatly over- or underestimate the total effects of the intervention.

The assumption that there are no spillovers is known as the **non-interference assumption**; it is part of a somewhat more elaborate assumption sometimes referred to as the **Stable Unit Treatment Value Assumption** (or SUTVA) that is usually invoked in causal inference.

What does the non-interference assumption mean? Subjects can only reveal one of two “potential outcomes”: either their treated outcome or their untreated outcome. Which of these they reveal depends on their own treatment status only. The treatment status of all the other subjects in the experiment doesn’t matter at all.

We can state the non-interference assumption more formally using potential outcomes notation: \(y_i(z_i, Z) = y_i(z'_i, Z')\) if \(z_i = z'_i\), where \(Z\) and \(Z'\) represent any two possible random assignment vectors. In words: subject \(i\)'s potential outcome depends only on subject \(i\)'s own treatment assignment, not on the assignments of the other subjects.

How reasonable is the non-interference assumption? The answer depends on the domain. Every study that finds a statistically significant impact of spillovers is providing evidence that the assumption is incorrect in that particular application. Most papers discussing spillovers tend to focus on examples in which the non-interference assumption is false. But other studies suggest that spillovers are sometimes surprisingly weak. Sinclair, McConnell, and Green (2012) for example find no evidence of within-zip code spillovers of experimental encouragements to vote, bolstering the non-interference claims made by the dozens of previous turnout experiments.

The usual non-interference assumption is very strong: it says that there are no spillover effects. When you try to estimate spillovers, you are replacing this strong assumption with a (slightly) weaker one. Perhaps you think that spillovers take place in geographic space — the treatment status of one location may influence the outcomes of nearby units. Allowing spillovers to take place in geographic space requires the assumption that they do not also occur in, for example, social space. This assumption would be violated if the treatment status of, say, Facebook friends in faraway places affects which potential outcome is revealed. To restate this point more generally: When you relax the non-interference assumption, you replace it with a new assumption: no unmodeled spillovers. The modeling of spillovers itself requires strong, often untestable assumptions about how spillovers can and cannot occur.

Suppose we were to model spillovers in the following way. Every unit has four potential outcomes, which we’ll write as \(Y(Z_i,Z_j)\), where \(Z_i\) refers to a unit’s own treatment assignment, and \(Z_j\) refers to the treatment assignment of neighboring units (i.e., other units within a specified radius). \(Z_j=1\) when any neighboring units are treated and \(Z_j=0\) otherwise.

- \(Y_{00} \equiv Y(Z_i=0,Z_j=0)\): Pure Control
- \(Y_{10} \equiv Y(Z_i=1,Z_j=0)\): Directly treated, no spillover
- \(Y_{01} \equiv Y(Z_i=0,Z_j=1)\): Untreated, with spillover
- \(Y_{11} \equiv Y(Z_i=1,Z_j=1)\): Directly treated, with spillover

What assumptions are we invoking here? First, we are stipulating that the treatment assignments of non-neighboring units do not alter a unit’s potential outcomes. Second, we are modeling spillovers as a binary event: either some neighboring unit is treated, or not — we are ignoring the *number* of neighboring units that are treated, and indeed, their relative proximity.

This potential outcome space is already twice as complex as the one allowed by the conventional non-interference assumption. However, it is important to bear in mind that this potential outcome space can be incorrect in the sense that it does not accurately reflect the underlying social process at work in the experiment.

The beauty of randomized experiments is that treatment assignments are directly under the control of the researcher. Interestingly, in an experiment, spillovers are also randomly determined by the treatment assignment: after all, each unit's neighbors are themselves assigned to treatment or control at random. The trouble is that the probability that a unit is in a spillover condition is no longer directly under the control of the experimenter. Units that are close to many other units, for example, might be more likely to be in the spillover condition than units that are off on their own.

Take a look at the graph below of 50 units arrayed in geographic space. The 10 red units (both filled and unfilled) were randomly selected for direct treatment, and the yellow units serve as controls. A filled point represents a unit in a spillover condition, whereas an unfilled point represents a unit that has no treated neighbors within the 5km radius. Notice that the units closer to the center of the graph have a much higher chance of being in a spillover condition than do units towards the edges.

When we estimate causal effects, we have to take account of the probability with which units are assigned to a given treatment condition. Sometimes this is done through matching; sometimes it is done using inverse probability weighting (IPW).
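As a toy illustration of the weighting logic (all numbers invented): units that rarely land in a given condition are up-weighted, so the weighted sample represents all units rather than just the ones most likely to be observed in that condition.

```r
# Outcomes for three units observed in the spillover condition, together with
# each unit's (simulated or analytic) probability of landing in that condition
y    <- c(10, 12, 20)
prob <- c(0.8, 0.5, 0.2)

mean(y)                       # naive mean ignores unequal assignment probabilities
weighted.mean(y, w = 1/prob)  # IPW mean up-weights the low-probability unit
```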

Sometimes, the only practical way to calculate assignment probabilities is through computer simulation (though analytic probabilities can be calculated for some designs). For example, you could conduct 10,000 simulated random assignments and count up how often each unit is in each of the four conditions described in the previous section. In R:

```
# Define two helper functions
# complete_ra: randomly assign exactly m of N units to treatment
complete_ra <- function(N, m){
  assign <- ifelse(1:N %in% sample(1:N, m), 1, 0)
  return(assign)
}
# get_condition: classify each unit into one of the four conditions,
# given an assignment vector and an adjacency matrix
get_condition <- function(assign, adjmat){
  exposure <- adjmat %*% assign # number of treated neighbors
  condition <- rep("00", length(assign))
  condition[assign == 1 & exposure == 0] <- "10"
  condition[assign == 0 & exposure > 0] <- "01"
  condition[assign == 1 & exposure > 0] <- "11"
  return(condition)
}

N <- 50 # Total units
m <- 20 # Number to be treated

# Generate adjacency matrix from random geographic coordinates
set.seed(343)
coords <- matrix(rnorm(N * 2) * 10, ncol = 2)
distmat <- as.matrix(dist(coords))
true_adjmat <- 1 * (distmat <= 5) # true radius = 5
diag(true_adjmat) <- 0 # units are not their own neighbors

# Run simulation 10,000 times
Z_mat <- replicate(10000, complete_ra(N = N, m = m))
cond_mat <- apply(Z_mat, 2, get_condition, adjmat = true_adjmat)

# Calculate each unit's probability of landing in each condition
prob00 <- rowMeans(cond_mat == "00")
prob01 <- rowMeans(cond_mat == "01")
prob10 <- rowMeans(cond_mat == "10")
prob11 <- rowMeans(cond_mat == "11")
```

The resulting probabilities are plotted below against the number of units within the 5km radius. The further a unit is from the center, the higher its probability of not being in the spillover condition.
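One useful sanity check on simulated probabilities of this kind: the four conditions are mutually exclusive and exhaustive, so each unit's probabilities must sum to one. Here is a self-contained miniature (invented: 4 units on a line, 2 treated) that enumerates every possible assignment exactly rather than simulating:

```r
# 4 units on a line: unit i's neighbors are i-1 and i+1
adjmat <- matrix(0, 4, 4)
adjmat[cbind(1:3, 2:4)] <- 1
adjmat <- adjmat + t(adjmat)

# Enumerate all choose(4, 2) = 6 assignments of 2 treated units
assignments <- combn(4, 2)
conds <- apply(assignments, 2, function(treated){
  Z <- as.numeric(1:4 %in% treated)
  exposure <- as.vector(adjmat %*% Z)
  paste0(Z, as.numeric(exposure > 0))  # "00", "01", "10", or "11"
})

# Exact assignment probabilities for each unit
probs <- sapply(c("00", "01", "10", "11"), function(k) rowMeans(conds == k))
rowSums(probs)  # each unit's four probabilities sum to 1
```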

We must account for these differential probabilities of assignment using IPW. Below is a block of R code with a helper for the first step: looking up, for each unit, the probability of the condition it actually landed in. (These probabilities then serve as inverse probability weights in a regression.)

```
# Define helper function: for each unit, return the probability of its
# realized condition
get_prob <- function(condition, prob00, prob01, prob10, prob11){
  probs <- cbind("00" = prob00, "01" = prob01, "10" = prob10, "11" = prob11)
  probs[cbind(seq_along(condition), match(condition, colnames(probs)))]
}
```
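To see the weights in action in a regression, here is a self-contained sketch of the mechanics (all data invented: a direct effect of 2 and a spillover effect of 1, with each unit's condition probability drawn at random for illustration):

```r
set.seed(1)
n <- 400
# Invented data: each unit's realized condition and the probability of it
condition <- sample(c("00", "01", "10", "11"), n, replace = TRUE)
prob_obs  <- runif(n, 0.1, 0.9)
# Invented outcomes: direct effect 2, spillover effect 1
Y <- 2 * (condition %in% c("10", "11")) +
     1 * (condition %in% c("01", "11")) + rnorm(n)

# Weighted least squares with inverse probability weights
fit <- lm(Y ~ condition, weights = 1 / prob_obs)
coef(fit)  # coefficients relative to the pure control ("00") condition
```

In a real application, `prob_obs` would come from the simulated probabilities computed above rather than being drawn at random.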