causalCalibration

Overview

causalCalibration is a workflow-oriented package for calibrating heterogeneous treatment effect predictions after you have already trained a treatment-effect model.

It is for applied researchers who:

  • already have treatment-effect predictions from some learner,
  • want calibrated effect scores they can interpret more directly,
  • want to avoid spending a separate holdout split on calibration when cross-fitted predictions are available,
  • want diagnostics that quantify how much miscalibration remains,
  • want guidance about overlap, target population, and whether loss="dr" or loss="r" is the better fit.

Main workflows

Standard calibration

Use fit_calibrator() when you have one prediction per observation and want a single calibration map. It requires:

  • a vector of effect predictions,
  • treatment and outcome data,
  • nuisance estimates for the chosen loss.
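As a sketch of how these ingredients combine (the simulated data, the oracle nuisance values, and the isotonic calibration map below are illustrative assumptions, not causalCalibration's actual API), calibrating effect predictions against doubly robust pseudo-outcomes might look like:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
n = 2000

# Simulated inputs: a treatment indicator, uncalibrated effect
# predictions from some learner, and an observed outcome.
w = rng.binomial(1, 0.5, n)                   # treatment
tau_pred = rng.normal(0.5, 1.0, n)            # effect predictions
y = 0.5 * tau_pred.clip(0, None) * w + rng.normal(0.0, 1.0, n)

# Nuisance estimates (oracle values here; in practice cross-fitted models).
e_hat = np.full(n, 0.5)                       # propensity score
mu0_hat = np.zeros(n)                         # outcome model under control
mu1_hat = 0.5 * tau_pred.clip(0, None)        # outcome model under treatment

# Doubly robust (AIPW-style) pseudo-outcome for the treatment effect.
psi = (mu1_hat - mu0_hat
       + w * (y - mu1_hat) / e_hat
       - (1 - w) * (y - mu0_hat) / (1 - e_hat))

# Calibration map: monotone regression of pseudo-outcomes on predictions.
cal = IsotonicRegression(out_of_bounds="clip").fit(tau_pred, psi)
tau_cal = cal.predict(tau_pred)               # calibrated effect scores
```

The isotonic map here is one common choice of calibrator; the package's own calibrators may differ.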

Cross-calibration

Use fit_cross_calibrator() when your underlying HTE model was trained with cross-fitting and you want to fit and apply the calibration map in-sample, without carving out a separate calibration set.

Cross-calibration uses:

  • pooled out-of-fold predictions to fit the calibration map,
  • an n × K matrix of fold-specific predictions to produce calibrated predictions.
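A minimal numpy/scikit-learn sketch of these mechanics (the variable names, the stand-in pseudo-outcomes, and the average-across-folds step are assumptions for illustration, not the package's implementation):

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(1)
n, K = 1000, 5

# Stand-ins for the two inputs cross-calibration consumes:
oof_pred = rng.normal(0.0, 1.0, n)            # pooled out-of-fold predictions
pred_matrix = oof_pred[:, None] + rng.normal(0.0, 0.1, (n, K))  # n x K fold-specific predictions
psi = 0.8 * oof_pred + rng.normal(0.0, 1.0, n)  # pseudo-outcomes (e.g. DR scores)

# 1) Fit the calibration map on the pooled out-of-fold predictions.
cal = IsotonicRegression(out_of_bounds="clip").fit(oof_pred, psi)

# 2) Push each column of fold-specific predictions through the map,
#    then average across folds to get one calibrated value per observation.
tau_cal = cal.predict(pred_matrix.ravel()).reshape(n, K).mean(axis=1)
```

Because the map is fit only on out-of-fold predictions, no observation's calibration target is predicted by a model that saw it during training.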

Documentation map

  • Getting Started: installation, mental model, and the package workflow at a glance
  • Standard Calibration: when to use ordinary calibration and how to fit it
  • Cross-Calibration: the central in-sample workflow for cross-fitted predictors
  • Diagnostics: how to quantify remaining miscalibration, report original-population vs overlap-targeted error, and read the BLP slope diagnostic for effect heterogeneity
  • Losses and Methods: how to choose dr vs r and which calibrator to use
  • API Reference: function-by-function contracts, argument requirements, and common mistakes
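To make the BLP slope diagnostic concrete, here is an illustrative computation (not the package's implementation), assuming doubly robust scores act as a noisy but unbiased signal of the true effect: regressing them on the calibrated predictions gives a slope near 1 for a well-calibrated heterogeneous predictor and near 0 when no real heterogeneity is captured.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
tau_cal = rng.normal(0.0, 1.0, n)             # calibrated effect predictions
psi = tau_cal + rng.normal(0.0, 2.0, n)       # DR scores (noisy effect signal)

# Best linear predictor (BLP) slope: OLS of DR scores on the predictions.
X = np.column_stack([np.ones(n), tau_cal])
intercept, slope = np.linalg.lstsq(X, psi, rcond=None)[0]
# In this simulation the slope should land close to 1, since psi was
# generated with a unit loading on tau_cal.
```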
