You’ve found the online causal inference course page. Although the course text is written from a machine learning perspective, this course is meant for anyone with the necessary prerequisites who is interested in learning the basics of causality. I do my best to integrate insights from the many different fields that use causal inference, such as epidemiology, economics, political science, and machine learning. You can see the tentative course schedule below.
You can join the course Slack workspace where you can easily start discussions with other people who are interested in causal inference. For information about office hours, see the office hours section below. If you’re interested in leading a reading group discussion, check out the suggested reading group papers to see if one piques your interest. When emailing me about this course, please include “[Causal Course]” at the beginning of your email subject to help make sure I see your email. If you want to receive course updates, sign up for the course mailing list. The main textbook we’ll use for this course is Introduction to Causal Inference (ICI), which is a book draft that I’ll continually update throughout this course.
Course Schedule (tentative)
Note about slides: they currently don’t work well with Adobe Acrobat, though they seem to work with other PDF viewers.
| Week | Topics | Lecture | Readings | Reading Group Paper |
| --- | --- | --- | --- | --- |
|  | Motivation and Preview |  | Chapter 1 of ICI | None |
| September 7 | Potential Outcomes<br>A Complete Example with Estimation |  | Chapter 2 of ICI | Does obesity shorten life? The importance of well-defined interventions to answer causal questions (Hernán & Taubman, 2008) |
| September 14 | Graphical Models |  | Chapter 3 of ICI | Does Obesity Shorten Life? Or is it the Soda? On Non-manipulable Causes (Pearl, 2018) |
| September 21 | Backdoor Adjustment<br>Structural Causal Models |  | Chapter 4 of ICI | Single World Intervention Graphs: A Primer (Richardson & Robins, 2013) |
| September 28 | Randomized Experiments |  | Chapters 5-6 of ICI |  |
| October 5 | Estimation and CATEs<br>Susan Athey Guest Talk - Causal Trees and Forests (Oct 8th at 3-4 pm EDT) |  | Chapter 7 of ICI |  |
| October 12 | Unobserved Confounding |  |  |  |
| October 19 | Instrumental Variables<br>Alberto Abadie Guest Talk |  |  |  |
| November 2 | Causal Discovery without Experiments |  |  |  |
| November 9 | Causal Discovery with Experiments |  |  |  |
|  | Mediation and Path-Specific Effects |  |  |  |
| November 30 | Yoshua Bengio Guest Talk - Causal Representation Learning (Dec 1st at 1-2:30 pm EST) |  |  |  |
Course Mailing List
Sign up for the course mailing list to receive updates about the course:
Textbook
Draft of the first 7 chapters (will be continually updated with new chapters throughout the course):
This is a book draft, so I greatly appreciate any feedback you’re willing to send my way. If you’re unsure whether I’ll be receptive to it or not, don’t be. Please send any feedback to me using the “Book” option of the feedback form. Feedback can be at the word level, sentence level, section level, chapter level, etc. Here’s a non-exhaustive list of useful kinds of feedback:
- Some part is confusing.
- You notice your mind start to wander or you don’t feel motivated to read some part.
- Some part seems like it can be cut.
- You feel strongly that some part absolutely should not be cut.
- Some parts are not connected well.
- When moving from one part to the next, you notice that there isn’t a natural flow.
- A new active reading exercise you thought of.
Office Hours
Date/time: Thursdays 10-11 am Eastern (EST/EDT) (subject to change)
Plan: I will go through the most upvoted questions on YouTube and do a deeper response and/or a bit of interactive Q&A with the askers of those questions.
Recording: I plan to record these and upload them to YouTube. If you end up speaking in the meeting and you prefer to not be in the recording, let me know, and I’ll make sure to edit you out before I upload it.
Prerequisites
There is one main prerequisite: basic probability. This course assumes you’ve taken an introduction to probability course at the undergraduate level or have had equivalent experience. Topics from statistics and machine learning will pop up in the course from time to time, so some familiarity with those will be helpful, but is not necessary. For example, if cross-validation is a new concept to you, you can learn it relatively quickly at the point in the course that it pops up. And in Section 2.4 of the book, we give a primer on some statistics terminology that we’ll use.
FAQ
Q: Where should I ask questions about a given lecture?
A: Use the YouTube comment section below the relevant video. I check it 2-3 times per day.
Q: Is this course for credit?
A: No.
Q: Is this course free?
A: Yes.
Q: What time is the course?
A: Only the guest talks will have specific times (listed in the schedule). The regular lecture videos won’t be live and will usually be uploaded to YouTube on Mondays. The time for office hours is to be determined on the course Slack.
Q: I’m not receiving course emails.
A: Email me with “[Causal Course]” at the beginning of your email subject, and I’ll fix it.
If you have any feedback about the course to send my way, I welcome it! Please send it here. You can include your name or remain anonymous; either works.
Potential Reading Group Papers by Week
We will have a small weekly reading group that runs in parallel to the course. Before any given week’s reading group meeting, 1-3 people will have read the week’s paper in detail and already thought about discussion topics. These 1-3 people will then lead a discussion among a small number of people who have all made themselves familiar with the paper. The discussion group will be kept small (at most 15) in order to facilitate quality discussion. You can ensure that you have a place in the discussion group every week you’d like by signing up to be a discussion leader for at least one week. Below, I give a list of potential reading group papers, organized by week/topic, just like the course schedule. You can email me at firstname.lastname@example.org to let me know that you’d like to lead a certain week’s discussion and which paper(s) you’re considering, or to suggest other papers you’d like to discuss that are not on the list.
- Motivation and Preview - No reading group
- Potential Outcomes
- Graphical Models and SCMs
- Randomized Experiments, Frontdoor Adjustment, and do-calculus
- Estimation and Conditional Average Treatment Effects
- Estimating individual treatment effect: generalization bounds and algorithms (Shalit, Johansson, & Sontag, 2017)
- Adapting Neural Networks for the Estimation of Treatment Effects (Shi, Blei, Veitch, 2019)
- Generalized Random Forests (Athey, Tibshirani, Wager, 2019)
- Meta-learners for Estimating Heterogeneous Treatment Effects using Machine Learning (Künzel et al., 2017) (caution: not about meta-learning in the ML sense)
- Sensitivity Analysis
- Making sense of sensitivity: extending omitted variable bias (Cinelli & Hazlett, 2019)
- Sense and Sensitivity Analysis: Simple Post-Hoc Analysis of Bias Due to Unobserved Confounding (Veitch & Zaveri, 2020)
- An Introduction to Sensitivity Analysis for Unobserved Confounding in Non-Experimental Prevention Research (Liu, Kuramoto, & Stuart, 2013)
- Sensitivity Analysis of Linear Structural Causal Models (Cinelli et al., 2019)
- Instrumental Variables, Regression Discontinuity, Difference-in-Differences, and Synthetic Control
- Improving Causal Inference: Strengths and Limitations of Natural Experiments (Dunning, 2007)
- Alternative Causal Inference Methods in Population Health Research: Evaluating Tradeoffs and Triangulating Evidence (Matthay et al., 2019)
- Deep IV: A Flexible Approach for Counterfactual Prediction (Hartford et al., 2017)
- Regression Discontinuity Designs in Economics (Lee & Lemieux, 2010)
- Synthetic Controls (there are several different Abadie papers; message me if you’re interested in this topic)
- Causal Discovery without Experiments
- Inferring causation from time series in Earth system sciences (Runge et al., 2019)
- Distinguishing Cause from Effect Using Observational Data: Methods and Benchmarks (Mooij et al., 2016)
- Do-calculus when the True Graph Is Unknown (Hyttinen, Eberhardt, Jarvisalo, 2015)
- Review of Causal Discovery Methods Based on Graphical Models (Glymour, Zhang, & Spirtes, 2019)
- Causal inference by using invariant prediction: identification and confidence intervals (Peters, Bühlmann & Meinshausen, 2016)
- Nonlinear causal discovery with additive noise models (Hoyer et al., 2008)
- Causal Discovery from Heterogeneous/Nonstationary Data with Independent Changes (Huang et al., 2020)
- Causal Discovery with Experiments
- Experiment Selection for Causal Discovery (Hyttinen, Eberhardt, Hoyer, 2013)
- Characterization and Greedy Learning of Interventional Markov Equivalence Classes of Directed Acyclic Graphs (Hauser & Bühlmann, 2012)
- Characterizing and Learning Equivalence Classes of Causal DAGs under Interventions (Yang, Katcoff, & Uhler, 2018)
- Joint Causal Inference from Multiple Contexts (Mooij, Magliacane, & Claassen, 2020)
- Transportability and Transfer Learning
- External Validity: From Do-Calculus to Transportability Across Populations (Pearl & Bareinboim, 2014)
- Causal inference and the data-fusion problem (Bareinboim & Pearl, 2016)
- On Causal and Anticausal Learning (Schölkopf et al., 2012)
- Domain Adaptation under Target and Conditional Shift (Zhang et al., 2013)
- Multi-Source Domain Adaptation: A Causal View (Zhang, Gong, & Schölkopf., 2015)
- Invariant Models for Causal Transfer Learning (Rojas-Carulla et al., 2016)
- Domain Adaptation As a Problem of Inference on Graphical Models (Zhang et al., 2020)
- Domain Adaptation by Using Causal Inference to Predict Invariant Conditional Distributions (Magliacane et al., 2018)
- Counterfactuals, Mediation, and Path-Specific Effects
- TBD - Overflow Week
- Causal Representation Learning