scikit-survival 0.25.0 with improved documentation released
Briefly

scikit-survival 0.25.0 adds support for scikit-learn 1.7, while maintaining compatibility with 1.6, and delivers a complete API documentation overhaul to improve clarity and consistency. The user guide now summarizes the performance metrics for survival models, grouping them into the concordance index (C-index), the cumulative/dynamic AUC, and the Brier score. The functions cumulative_dynamic_auc(), brier_score(), and integrated_brier_score() cover time-dependent AUC and Brier score computations, including measures integrated over time. Survival estimators provide a predict() method that returns either unit-less risk scores or predicted event times; higher risk scores indicate a higher risk of experiencing an event and are primarily meaningful for ranking samples.
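As a brief illustration of the predict() contract, the following sketch fits a Cox model on the whas500 dataset that ships with scikit-survival; the choice of dataset and the three covariates are assumptions made for brevity, not part of the release notes.

```python
from sksurv.datasets import load_whas500
from sksurv.linear_model import CoxPHSurvivalAnalysis

# Load the Worcester Heart Attack Study data bundled with scikit-survival.
X, y = load_whas500()
# Keep a small, arbitrary subset of numeric covariates for brevity.
X = X.loc[:, ["age", "bmi", "sysbp"]].astype(float)

est = CoxPHSurvivalAnalysis().fit(X, y)

# For a Cox model, predict() returns unit-less risk scores:
# higher values indicate higher risk, so they are suited for ranking samples.
risk_scores = est.predict(X)
```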
One of the biggest pain points for users seems to be understanding which metric can be used to evaluate the performance of a given estimator. The user guide now summarizes the available options.

Which Performance Metrics Exist?

The performance metrics for evaluating survival models can be broadly divided into three groups:

Concordance Index (C-index): Measures the rank correlation between predicted risk scores and observed event times. Two implementations are available in scikit-survival: concordance_index_censored() (Harrell's estimator) and concordance_index_ipcw() (Uno's estimator, which uses inverse probability of censoring weights); both are compared in the sketch below.
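A minimal sketch comparing the two C-index estimators, assuming the same illustrative dataset and covariates as above; the train/test split is likewise an assumption for demonstration purposes.

```python
from sklearn.model_selection import train_test_split

from sksurv.datasets import load_whas500
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.metrics import concordance_index_censored, concordance_index_ipcw

X, y = load_whas500()
X = X.loc[:, ["age", "bmi", "sysbp"]].astype(float)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

risk_scores = CoxPHSurvivalAnalysis().fit(X_train, y_train).predict(X_test)

# Harrell's estimator only requires the test data
# (event indicator, observed time, and predicted risk scores).
cindex_harrell = concordance_index_censored(
    y_test["fstat"], y_test["lenfol"], risk_scores
)[0]

# Uno's estimator re-weights comparable pairs by the inverse probability of
# censoring, which is estimated from the training data.
cindex_uno = concordance_index_ipcw(y_train, y_test, risk_scores)[0]
```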
Cumulative/Dynamic Area Under the ROC Curve (AUC): Extends the AUC to survival data, quantifying how well a model distinguishes subjects who experience an event by a given time from those who do not. It can handle time-dependent risk scores and is implemented in cumulative_dynamic_auc().

Brier Score: An extension of the mean squared error to right-censored data. The Brier score assesses both discrimination and calibration based on a model's estimated survival functions; it is implemented in brier_score() and integrated_brier_score(). Both time-dependent metrics are demonstrated in the sketch below.
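Under the same illustrative setup as the previous sketches, the example below evaluates the time-dependent AUC from risk scores and the (integrated) Brier score from predicted survival functions. The evaluation time grid is an assumption, chosen to stay within the interior of the observed follow-up period.

```python
import numpy as np
from sklearn.model_selection import train_test_split

from sksurv.datasets import load_whas500
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.metrics import brier_score, cumulative_dynamic_auc, integrated_brier_score

X, y = load_whas500()
X = X.loc[:, ["age", "bmi", "sysbp"]].astype(float)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

est = CoxPHSurvivalAnalysis().fit(X_train, y_train)

# Illustrative evaluation times, restricted to the 10th-80th percentile of
# observed follow-up so all times lie within the range of the data.
times = np.percentile(y["lenfol"], np.linspace(10, 80, 15))

# Time-dependent AUC computed directly from the unit-less risk scores.
auc, mean_auc = cumulative_dynamic_auc(y_train, y_test, est.predict(X_test), times)

# The Brier score requires predicted survival probabilities at each
# evaluation time, obtained here from the model's survival functions.
surv_prob = np.vstack([fn(times) for fn in est.predict_survival_function(X_test)])
_, bs = brier_score(y_train, y_test, surv_prob, times)
ibs = integrated_brier_score(y_train, y_test, surv_prob, times)
```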