Low outcome prevalence, often observed with opioid-related outcomes, poses an underappreciated challenge to accurate predictive modeling. Outcome class imbalance, in which non-events (i.e., negative class observations) outnumber events (i.e., positive class observations) to a moderate or extreme degree, can distort measures of predictive accuracy, making a model's overall accuracy and discriminatory ability appear spuriously high. We conducted a simulation study to measure the impact of outcome class imbalance on the predictive performance of a simple SuperLearner ensemble model and to suggest strategies for reducing that impact.
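As a minimal illustration of the modeling approach, the sketch below uses scikit-learn's StackingClassifier as a stand-in for a SuperLearner-style cross-validated ensemble; the candidate learners and settings shown are illustrative assumptions, not the library actually used in the study.

```python
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression

# Cross-validated stacking of candidate learners, in the spirit of a
# SuperLearner: base learners are fit on CV folds, and a meta-learner
# combines their out-of-fold predicted probabilities.
# NOTE: this candidate library is an assumption for exposition only.
ensemble = StackingClassifier(
    estimators=[
        ("logit", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ],
    final_estimator=LogisticRegression(),  # meta-learner on CV predictions
    cv=5,
    stack_method="predict_proba",
)
# Usage: ensemble.fit(X_train, y_train); ensemble.predict_proba(X_test)
```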
Using a Monte Carlo design with 250 repetitions, we trained and evaluated these models on four simulated data sets of 100,000 observations each: one with perfect balance between events and non-events, and three in which non-events outnumbered events by approximate ratios of 10:1, 100:1, and 1000:1.
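A sketch of how such data sets could be generated, assuming a logistic data-generating mechanism whose intercept is shifted to hit each target prevalence; the covariates and coefficients below are illustrative assumptions, not the paper's actual simulation design.

```python
import numpy as np

rng = np.random.default_rng(2023)
N = 100_000

def simulate_data(prevalence, n=N):
    """Simulate covariates and a binary outcome with roughly the given
    marginal event prevalence. The five standard-normal covariates, the
    coefficients, and the logit link are illustrative assumptions."""
    X = rng.normal(size=(n, 5))
    lp = X @ np.array([0.8, -0.5, 0.3, 0.0, 0.0])
    intercept = np.log(prevalence / (1 - prevalence))  # sets the event rate
    p = 1.0 / (1.0 + np.exp(-(intercept + lp)))
    y = rng.binomial(1, p)
    return X, y

# One data set per scenario: perfect balance, then ~10:1, ~100:1, ~1000:1
# non-event:event ratios (prevalence = 1 / (ratio + 1)).
scenarios = {"1:1": 1 / 2, "10:1": 1 / 11, "100:1": 1 / 101, "1000:1": 1 / 1001}
datasets = {label: simulate_data(prev) for label, prev in scenarios.items()}
```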
We evaluated the performance of these models using a comprehensive suite of measures, including metrics better suited to imbalanced data.
Increasing imbalance tended to inflate overall accuracy spuriously: using a high threshold to classify events versus non-events, overall accuracy improved from 0.45 under perfect balance to 0.99 under the most severe outcome class imbalance. Other metrics, however, revealed diminished predictive performance: the corresponding positive predictive value decreased from 0.99 to 0.14.
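The sketch below shows how these two threshold-based metrics can diverge under imbalance; the degenerate classifier and 0.5 threshold are illustrative assumptions, not the study's models or results.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score

def threshold_metrics(y_true, y_prob, threshold=0.5):
    """Overall accuracy and positive predictive value (precision) for
    predicted probabilities dichotomized at a fixed threshold."""
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        # PPV is undefined when no positives are predicted; report 0.0.
        "ppv": precision_score(y_true, y_pred, zero_division=0),
    }

# Degenerate example: under ~1000:1 imbalance, a model that never flags an
# event still achieves near-perfect accuracy while its PPV collapses.
y_true = np.r_[np.ones(100, dtype=int), np.zeros(99_900, dtype=int)]
y_prob = np.zeros(100_000)
print(threshold_metrics(y_true, y_prob))  # accuracy = 0.999, ppv = 0.0
```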
Increasing reliance on algorithmic risk scores in consequential decision-making raises critical fairness and ethical concerns. This paper provides broad guidance on analytic strategies that clinical investigators can use to mitigate the impact of outcome class imbalance on risk prediction tools.