Causal Reasoning in Graphical Time Series Models
We propose a definition of causality for time series in terms of the effect of an intervention in one component of a multivariate time series on another component at some later point in time. Conditions for identifiability, comparable to the back-door and front-door criteria, are presented and can also be verified graphically. Computation of the causal effect is derived and illustrated for the linear case.
💡 Research Summary
The paper “Causal Reasoning in Graphical Time Series Models” develops a rigorous framework for defining and identifying causal effects in multivariate time‑series settings. The authors begin by formalising causality as the effect of an intervention on one component of a stationary multivariate series X = {X(t)} on another component at a later time point. An intervention is represented by a strategy variable σₐ(t) that takes either the value “idle” (no intervention) or one of several forms of active manipulation: atomic (forcing Xₐ(t) to a fixed value), conditional (forcing Xₐ(t) to a function of past variables), and random (forcing Xₐ(t) to follow a known distribution). The strategy variable is not random; it indexes different regimes under which the system may evolve.
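The three active regimes can be illustrated with a small simulation. The following is a minimal sketch, not the paper's model: the bivariate VAR(1) coefficient matrix `A`, the intervened component (index 0), and the particular conditional and random rules are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bivariate VAR(1): X(t) = A @ X(t-1) + noise.
# The matrix A is an assumption chosen so component 0 drives component 1.
A = np.array([[0.5, 0.0],
              [0.3, 0.4]])

def simulate(T, regime="idle", s=2.0):
    """Simulate the series under a strategy for component a = 0.

    regime: "idle" (no intervention), "atomic" (force X_0(t) = s),
    "conditional" (force X_0(t) to a function of past variables), or
    "random" (draw X_0(t) from a known distribution).
    """
    X = np.zeros((T, 2))
    for t in range(1, T):
        X[t] = A @ X[t - 1] + rng.normal(size=2)
        if regime == "atomic":
            X[t, 0] = s                      # fixed value
        elif regime == "conditional":
            X[t, 0] = 0.5 * X[t - 1, 1]      # depends only on the past
        elif regime == "random":
            X[t, 0] = rng.normal(loc=s)      # known distribution
    return X
```

Note that every regime overwrites only Xₐ(t) and consults only past values, matching the requirement that the strategy not depend on contemporaneous variables other than Xₐ(t) itself.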
Two key assumptions about interventions are imposed. First, the intervention at time t does not depend on any past variables or on contemporaneous variables other than Xₐ(t) itself, which rules out instantaneous causality. Second, future variables are independent of the intervention given the present state X(t), ensuring that the intervention’s influence propagates only through the system’s dynamics. These assumptions are justified by embedding all potentially relevant (including unobserved) variables in the full system V, thereby treating the observed multivariate series as a de‑confounded subsystem.
The causal effect is then quantified as the average causal effect (ACE): the difference between the expected value of the target component under the intervention regime and under the idle regime,

ACEₛ = E_{σₐ(t)=s}[X_b(t+h)] − E_{σₐ(t)=∅}[X_b(t+h)],

where X_b is the target component, h > 0 the time lag, and ∅ denotes the idle regime.
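The abstract notes that computation of the causal effect is derived for the linear case. A minimal sketch of that idea, under assumptions of my own choosing (a zero-mean, stable VAR(1) with an illustrative coefficient matrix `A`): for an atomic intervention Xₐ(t) = s, the ACE on X_b(t+h) reduces to the (b, a) entry of Aʰ times s, which a Monte Carlo simulation can confirm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical zero-mean, stable VAR(1): X(t) = A @ X(t-1) + noise.
# A, the component indices a and b, the horizon h, and the value s are
# all assumptions chosen for illustration.
A = np.array([[0.5, 0.0],
              [0.3, 0.4]])
a, b, h, s = 0, 1, 3, 2.0   # intervene X_a(t) = s, target X_b(t + h)

# Linear case: the idle expectation is zero for a zero-mean series, so the
# ACE of the atomic intervention is the (b, a) entry of A**h times s.
ace_analytic = np.linalg.matrix_power(A, h)[b, a] * s

def mean_target(do_value=None, n=100_000, burn=50):
    """Monte Carlo estimate of E[X_b(t + h)] under the atomic or idle regime."""
    X = np.zeros((n, 2))
    for _ in range(burn):                 # burn-in toward the stationary law
        X = X @ A.T + rng.normal(size=(n, 2))
    if do_value is not None:
        X[:, a] = do_value                # atomic intervention at time t
    for _ in range(h):                    # let the dynamics propagate
        X = X @ A.T + rng.normal(size=(n, 2))
    return X[:, b].mean()

ace_mc = mean_target(s) - mean_target(None)
```

With these numbers `ace_mc` agrees with `ace_analytic` up to Monte Carlo error, illustrating how, in the linear case, the interventional expectation propagates only through the autoregressive dynamics.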