## Baby's first graphical causal models

We can describe causal models with directed graphs. The graph perspective allows us to specify precise procedures for determining when variables (vertices) are independent (d-separated) and dependent (d-connected).

### Causal graphs

We can represent causal models as directed graphs. The vertices in the graph represent different random variables—causes and effects—and the edges represent causal relationships. If two vertices do not have an edge between them, there is no direct causal relationship between them.

Some technical details:

- These graphs must be acyclic. In a strict sense, something can’t be both a cause and an effect of something else. Thing A at time 1 can affect thing B at time 2, which affects thing A at time 3. Causation only flows forward in time, and time is acyclic.
- A path on a directed graph is a sequence of edges joining a sequence of vertices. We can ignore direction of the edges when forming a path.
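The two points above can be sketched in a few lines of code: a directed graph as a mapping from each vertex to the vertices it points to, and path-finding that ignores edge direction. (This is a sketch of my own, not code from the widget below; the function and variable names are mine. The example graph is the widget’s starting graph: a points to b, b points to nothing, c points to a and b, d points to b.)

```python
# A directed causal graph: each vertex maps to the vertices it points to.
graph = {"a": {"b"}, "b": set(), "c": {"a", "b"}, "d": {"b"}}

def undirected_paths(graph, start, end):
    """All simple paths from start to end, ignoring edge direction."""
    # Build the undirected skeleton: neighbors in either direction.
    nbrs = {v: set() for v in graph}
    for v, targets in graph.items():
        for t in targets:
            nbrs[v].add(t)
            nbrs[t].add(v)

    paths = []

    def walk(v, path):
        if v == end:
            paths.append(path)
            return
        for n in nbrs[v]:
            if n not in path:  # simple paths only: no repeated vertices
                walk(n, path + [n])

    walk(start, [start])
    return paths

# Two undirected paths join c and d: c-a-b-d and c-b-d.
print(undirected_paths(graph, "c", "d"))
```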

### Causal triplets

Now that we’ve presented the basic idea of modeling causal systems with graphs, we can start to use graphs as a tool to analyze causal models. We’ll start by looking at the smallest interesting part of a graph—a triplet consisting of three vertices and two edges. Such a triplet can be configured in one of three ways^{1}. We give a name to each triplet and to the center vertex in each triplet.

- **Chains**: Chains are the most straightforward. If A causes B and B causes C (A → B → C), then A causes C^{2}. We call the central vertex B a mediator or a traverse. For example, if smoking causes (increased risk of) cancer and cancer causes (increased risk of) death, then smoking causes (increased risk of) death.
- **Forks**: The next possible triplet configuration is what we call a fork. If B causes both A and C (A ← B → C), then A and C will not be independent, in light of their common cause. For example, if smoking causes both yellowed fingers and lung cancer, we’d expect lung cancer and yellowed fingers to be correlated.
- **Inverted forks**: The final possible triplet configuration is what we call an inverted fork. If A causes B and C causes B (A → B ← C), then A and C will be independent. We call the central vertex B a collider. For example, if smoking causes lung cancer and exposure to high doses of radiation also causes lung cancer, we wouldn’t expect smoking and exposure to high doses of radiation to be correlated.

| Name of triplet | Name of central vertex | Diagram | Ends (A and C) dependent? |
|---|---|---|---|
| Chain | Mediator/Traverse | A → B → C | Causally (probably) |
| Fork | Confounder/Common cause | A ← B → C | Noncausally |
| Inverted fork | Collider/Common effect | A → B ← C | No |

So we can determine the causal and non-causal dependence between three factors by turning them into a causal graph and looking at the configuration of the edges.
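The table above can be turned into a tiny classifier: given the direction of the two edges around the central vertex B, name the triplet. (A sketch; the function name and the encoding of edge directions as booleans are my own choices.)

```python
def classify_triplet(a_to_b, b_to_c):
    """Classify the triplet A ?-? B ?-? C from its two edge directions.

    a_to_b: True if the edge is A -> B, False if it is B -> A.
    b_to_c: True if the edge is B -> C, False if it is C -> B.
    """
    if a_to_b and not b_to_c:
        return "inverted fork (B is a collider)"   # A -> B <- C
    if not a_to_b and b_to_c:
        return "fork (B is a confounder)"          # A <- B -> C
    # A -> B -> C and A <- B <- C are both chains (they're symmetrical).
    return "chain (B is a mediator)"

print(classify_triplet(True, True))   # A -> B -> C
```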

### d-separation and d-connection along a path

But that’s not terribly useful in a world overflowing with causes, effects and connections. We need to be able to work with bigger graphs. Our next step on that route is to look at arbitrary paths and determine their dependence. The terms used for this are d-separation and d-connection. “When we say that a pair of nodes are d-separated, we mean that the variables they represent are definitely independent; when we say that a pair of nodes are d-connected, we mean that they are possibly, or most likely, dependent^{3}.” (Pearl, Glymour, and Jewell 2016)

Two vertices on a path are d-connected if they have no colliders between them. So A and D are d-connected in each of:

- A → B → C → D
- A ← B → C → D
- A ← B ← C → D
- A ← B ← C ← D

A and D are d-separated in each of:

- A → B ← C ← D
- A → B ← C → D
- A → B → C ← D
- A ← B → C ← D

I hope the intuition here is clear: it’s a fairly straightforward extension of the logic explained with causal triplets.

We also call a path with a collider on it a blocked path.
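The blocking rule for a single path can be written directly: walk along the path and look for any A → B ← C pattern. (A sketch of my own; edges are encoded as a set of directed (tail, head) pairs.)

```python
def path_blocked(edges, path):
    """True if the path contains a collider (... -> v <- ...).

    edges: set of (tail, head) directed edges.
    path:  sequence of vertices, with consecutive pairs joined by
           an edge in either direction.
    """
    for i in range(1, len(path) - 1):
        into_v_from_left = (path[i - 1], path[i]) in edges   # previous -> v ?
        into_v_from_right = (path[i + 1], path[i]) in edges  # next -> v ?
        if into_v_from_left and into_v_from_right:
            return True  # v is a collider on this path
    return False

edges = {("a", "b"), ("c", "b"), ("c", "d")}
print(path_blocked(edges, ["a", "b", "c"]))  # a -> b <- c: blocked
print(path_blocked(edges, ["b", "c", "d"]))  # b <- c -> d: a fork, not blocked
```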

### d-separation and d-connection on graphs

But linear paths still aren’t that useful. It’s only when we get to full arbitrary directed acyclic graphs that we start to be able to make interesting claims.

In arbitrary graphs, we say that any two vertices are d-connected if they have an undirected path between them which is not blocked (i.e. does not have a collider). If there are no such unblocked paths (i.e. there are no paths at all or all paths have a collider), the two vertices are d-separated.
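Putting the pieces together, a brute-force d-separation check enumerates every undirected path between two vertices and asks whether each one contains a collider. (A sketch under my own naming, and not how the widget below necessarily does it; fine for small graphs, but not an efficient algorithm. The example is the widget’s starting graph.)

```python
def d_separated(edges, x, y):
    """True if every undirected path between x and y contains a collider.

    edges: set of (tail, head) directed edges.
    Vertices with no paths between them at all are d-separated too.
    """
    vertices = {v for e in edges for v in e}
    nbrs = {v: set() for v in vertices}
    for tail, head in edges:
        nbrs[tail].add(head)
        nbrs[head].add(tail)

    def blocked(path):
        # A path is blocked iff some interior vertex is a collider.
        return any(
            (path[i - 1], path[i]) in edges and (path[i + 1], path[i]) in edges
            for i in range(1, len(path) - 1)
        )

    def all_paths_blocked(v, path):
        if v == y:
            return blocked(path)
        # all() over no paths is True: no paths means d-separated.
        return all(
            all_paths_blocked(n, path + [n])
            for n in nbrs[v]
            if n not in path
        )

    return all_paths_blocked(x, [x])

# "a points to b. b points to nothing. c points to a and b. d points to b."
edges = {("a", "b"), ("c", "a"), ("c", "b"), ("d", "b")}
print(d_separated(edges, "a", "d"))  # True: both paths pass through the collider b
print(d_separated(edges, "a", "c"))  # False: the direct edge c -> a connects them
```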

#### Interactive

To get more of a feeling for these terms, you can fiddle with the widget below.

In the top text area, you can specify a graph as a series of vertices along with the edges they point to. So the starting text should be read as “a points to b. b points to nothing. c points to a and b. d points to b”. The graph rendered next to it (to help you visualize the structure) should update once you defocus the text area.

Below the text area, you can ask whether and how any two nodes are d-connected. If they are d-connected, the connecting paths will be highlighted and the paths will be listed.

Finally, the full list of d-separations is always displayed for the current graph.

### Aside

There are several other fairly effective, fairly short introductions to causal graphical models if this one isn’t doing it for you:

Pearl, Judea, Madelyn Glymour, and Nicholas P Jewell. 2016. *Causal Inference in Statistics: A Primer*. John Wiley & Sons.

Rohrer, Julia M. 2018. “Thinking Clearly About Correlations and Causation: Graphical Causal Models for Observational Data.” *Advances in Methods and Practices in Psychological Science* 1 (1). SAGE Publications Sage CA: Los Angeles, CA: 27–42.

Why three? We have two slots for a directed edge and in each slot an edge can point one of two ways. That creates four options (A ← B ← C; A → B → C; A ← B → C; A → B ← C), but the first two are symmetrical so we don’t bother to distinguish between them.↩

See (Pearl, Glymour, and Jewell 2016) and “intransitive dependence” for the rare cases where this doesn’t apply.↩

The ‘d’ in d-separated and d-connected stands for “directional” according to most people. But at least one place says it stands for “dependence” which I think is much more intuitive.↩