The Right Amount of Wrong: Calibrating Predictive Models

Figure: predicted versus actual win percentage.

How do we evaluate a predictive model? The most intuitive answer is accuracy: a simple measure of what percent of the time your model's prediction is correct. Accuracy is a widely used evaluation metric, largely because it's so easy to understand. However, accuracy alone doesn't tell the whole story, which is where calibration comes in.
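To make the definition concrete, here is a minimal sketch of the accuracy metric as described above: the percent of predictions that match the true labels. The function name and example data are illustrative, not from the original post.

```python
def accuracy(predictions, labels):
    """Percent of predictions that exactly match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return 100.0 * correct / len(labels)

# Toy example: 3 of 5 predictions are correct.
preds = [1, 0, 1, 1, 0]
truth = [1, 0, 0, 1, 1]
print(accuracy(preds, truth))  # 60.0
```

Note that this collapses each prediction to right or wrong; it says nothing about how confident the model was, which is exactly the gap calibration addresses.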