Calibration

Real numbers measuring how well our predicted probabilities track actual market outcomes.

Latest model

Honest caveat: v1 was trained on post-hoc snapshots, so its Brier score is likely optimistic. We'll retrain once enough forward-looking snapshots accumulate.
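For reference, the Brier score mentioned above is the mean squared error between predicted probabilities and resolved 0/1 outcomes (lower is better). A minimal stdlib-only sketch, with made-up toy numbers:

```python
def brier_score(predicted, outcomes):
    """Mean squared error between probabilities and binary outcomes (lower is better)."""
    assert len(predicted) == len(outcomes) and predicted, "need matched, non-empty lists"
    return sum((p - y) ** 2 for p, y in zip(predicted, outcomes)) / len(predicted)

# toy example: three predictions against resolved NO/YES/YES outcomes
print(brier_score([0.27, 0.45, 0.74], [0, 1, 1]))
```

A model that always predicts 0.5 scores 0.25, which is a useful baseline when reading reported values.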

Coefficients (raw-feature space)

Sign of each coefficient shows the feature's effect on YES probability. Magnitudes are in the raw feature scale, not standardized.
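To make the sign/magnitude point concrete, here is a sketch of how a raw-space logistic-regression coefficient maps a feature change to a shift in YES probability. The intercept, feature name, and coefficient value are illustrative, not the model's actual numbers:

```python
import math

def sigmoid(z):
    """Logistic function: maps a linear score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# hypothetical model: intercept plus one raw-scale coefficient
intercept = -1.0
coef_days_to_close = -0.02  # negative sign: more days to close lowers P(YES)

for days in (10, 60):
    p = sigmoid(intercept + coef_days_to_close * days)
    print(f"days_to_close={days}: P(YES)={p:.3f}")
```

Because magnitudes are in raw units, a coefficient of 0.02 on a feature measured in days is not comparable to 0.02 on a feature measured in dollars; standardizing features first would make magnitudes comparable.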

Calibration curve

[Calibration curve: x-axis is predicted probability (0 to 1), y-axis is empirical YES frequency (0 to 1); five binned points, whose values are tabulated below.]
Each circle is a bin of predictions; the x-axis is the mean predicted probability in the bin, the y-axis is the mean actual outcome (YES = 1). Circle size scales with sample count. The dashed diagonal is perfect calibration: points above it are underconfident, points below are overconfident.

Bins

Range Mean predicted Mean observed n samples
0.10 – 0.20 0.194 0.000 3
0.20 – 0.30 0.272 0.270 2131
0.30 – 0.40 0.336 0.424 66
0.40 – 0.50 0.450 0.442 86
0.70 – 0.80 0.736 0.556 9
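A table like the one above can be reproduced from raw (predicted probability, outcome) pairs with a fixed-width binning pass. A stdlib-only sketch with toy data (the real pipeline's code may differ):

```python
from collections import defaultdict

def calibration_bins(pairs, width=0.1):
    """Group (predicted, outcome) pairs into fixed-width probability bins
    and return (lo, hi, mean_predicted, mean_observed, n) rows."""
    n_bins = int(round(1 / width))
    bins = defaultdict(list)
    for p, y in pairs:
        idx = min(int(p / width), n_bins - 1)  # clamp p=1.0 into the last bin
        bins[round(idx * width, 10)].append((p, y))
    rows = []
    for lo in sorted(bins):
        members = bins[lo]
        n = len(members)
        mean_pred = sum(p for p, _ in members) / n
        mean_obs = sum(y for _, y in members) / n
        rows.append((lo, lo + width, mean_pred, mean_obs, n))
    return rows

# toy data: four predictions with their resolved outcomes
for lo, hi, mp, mo, n in calibration_bins([(0.27, 0), (0.29, 1), (0.45, 0), (0.74, 1)]):
    print(f"{lo:.2f} - {hi:.2f}  predicted={mp:.3f}  observed={mo:.3f}  n={n}")
```

Empty bins simply don't appear, which is why the table above jumps from 0.40–0.50 to 0.70–0.80.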