Causal Simulation Experiments: Lessons from Bias Amplification
Research Summary
The paper tackles a subtle but important problem in causal inference: the phenomenon of bias amplification, where conditioning on certain observed variables, called bias-amplifying variables (BAVs), can increase, rather than decrease, the bias caused by unmeasured confounding. The authors begin by formalizing a data-generating process that includes a treatment A, an outcome Y, an unmeasured confounder U, and ten measured variables BAV₁, …, BAV₁₀. Each BAV influences both A and U but has no direct effect on Y. Consequently, there is an unblocked path A ← U → Y and ten potentially blockable paths A ← BAVᵢ → U → Y. Intuitively, researchers would include the BAVs to block the latter paths, yet theory and prior simulations suggest that doing so may amplify bias from the former path.
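The data-generating process above can be sketched numerically. This is a minimal illustration, not the paper's simulation design: all coefficient values (0.3, 0.6, 0.5, and unit-variance noise) are assumptions chosen for readability.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 100_000, 10

bav = rng.standard_normal((n, k))                      # BAV_1, ..., BAV_10
u = 0.3 * bav.sum(axis=1) + rng.standard_normal(n)     # BAV_i -> U (coefficients assumed)
a = 0.6 * bav.sum(axis=1) + 0.5 * u + rng.standard_normal(n)  # BAV_i -> A and U -> A
y = 1.0 * a + 1.0 * u + rng.standard_normal(n)         # A -> Y and U -> Y; no direct BAV -> Y

# Each BAV_i is correlated with both A and U, but affects Y only through them,
# matching the paths A <- BAV_i -> U -> Y and the unblocked path A <- U -> Y.
print("corr(BAV_1, A):", round(np.corrcoef(bav[:, 0], a)[0, 1], 3))
print("corr(BAV_1, U):", round(np.corrcoef(bav[:, 0], u)[0, 1], 3))
```

Note that Y receives no direct BAV term, which is exactly what makes the BAVs look like safe covariates to include.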
To understand why, the authors recast ordinary least-squares (OLS) estimators in matrix form and apply the Frisch-Waugh-Lovell (FWL) theorem. They derive the expectation of the naïve estimator (regressing Y on A alone) and of the estimator that also includes the BAVs. The naïve estimator's bias is proportional to β_U γ_U σ²_U / σ²_A, while the BAV-adjusted estimator's bias contains an additional term γ²_BAV σ²_BAV / (σ²_A − γ²_BAV σ²_BAV). The denominator σ²_A − γ²_BAV σ²_BAV is the residual variance of A after removing the linear contribution of the BAVs. When the BAVs explain a large share of A's variance, this denominator shrinks, causing the bias term to blow up. In other words, the more the BAVs "explain" the treatment, the larger the amplification of bias from the unmeasured confounder.
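A quick numerical check of this amplification can be run with plain OLS. Again, this is a sketch under assumed coefficient values (true effect τ = 1, β_U = γ_U = 1, and illustrative BAV coefficients), not the authors' experiment: comparing the regression of Y on A alone against the regression that also adjusts for the BAVs should show the adjusted estimate drifting further from the truth.

```python
import numpy as np

rng = np.random.default_rng(42)
n, k = 200_000, 10
tau, beta_u = 1.0, 1.0                 # true effect of A on Y; effect of U on Y (assumed)

bav = rng.standard_normal((n, k))
u = 0.3 * bav.sum(axis=1) + rng.standard_normal(n)             # BAV_i -> U
treat = 0.6 * bav.sum(axis=1) + 0.5 * u + 0.5 * rng.standard_normal(n)  # BAVs explain much of A
y = tau * treat + beta_u * u + rng.standard_normal(n)          # no direct BAV -> Y effect

def ols(X, y):
    """OLS with an intercept; returns [intercept, slope coefficients...]."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

naive = ols(treat, y)[1]                               # Y ~ A
adjusted = ols(np.column_stack([treat, bav]), y)[1]    # Y ~ A + BAV_1..BAV_10

# Adjusting for the BAVs shrinks the residual variance of A, so the bias
# from the still-open path A <- U -> Y is amplified rather than removed.
print(f"true effect: {tau}, naive: {naive:.3f}, adjusted: {adjusted:.3f}")
```

With these parameter choices, both estimators are biased upward, but the BAV-adjusted one noticeably more so, matching the residual-variance intuition in the derivation.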
The authors point out that Pearl's original derivation of bias amplification assumes a linear conditional expectation.