Learning to Safely Approve Updates to Machine Learning Algorithms

Jean Feng (University of California, San Francisco)


Abstract: Machine learning algorithms in healthcare have the potential to continually learn from real-world data generated during healthcare delivery and adapt to dataset shifts. As such, regulatory bodies like the US FDA have begun discussing how to autonomously approve modifications to these algorithms. Current proposals evaluate algorithmic modifications via hypothesis testing. However, these methods can only define and control the online error rate if the data are stationary over time, which is unlikely to hold in practice. In this manuscript, we investigate how to design approval policies for modifications to ML algorithms in the presence of distributional shifts. Our key observation is that the approval policy that is most efficient at identifying and approving beneficial modifications varies across problem settings. So rather than selecting a fixed approval policy a priori, we propose learning the best approval policy by searching over a family of approval strategies. We define a family of strategies that range in their level of optimism when approving modifications. This family includes a pessimistic strategy that rescinds approval, which is necessary when no version of the ML algorithm performs well. We use the exponentially weighted averaging forecaster (EWAF) to learn the most appropriate strategy and derive tighter regret bounds when the distributional shifts are bounded. In simulation studies and empirical analyses, we find that wrapping approval strategies within the EWAF algorithm is a simple yet effective approach that can protect against distributional shifts without significantly slowing the approval of beneficial modifications.
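The core mechanism of EWAF is standard: maintain a weight on each candidate strategy, exponentially downweight strategies that have accrued higher loss so far, and follow strategies in proportion to those weights. The sketch below is a minimal illustration of this weighting scheme under assumed inputs, not the paper's implementation; the `losses` matrix, the learning rate `eta`, and the expert indexing are hypothetical and chosen for demonstration.

```python
import numpy as np

def ewaf_select(losses, eta=0.5, rng=None):
    """Exponentially weighted average forecaster over K expert strategies.

    losses: (T, K) array of per-round losses in [0, 1], one column per
            expert. At round t, an expert is sampled using weights built
            from the cumulative losses of rounds < t.
    Returns the sequence of chosen expert indices.
    """
    rng = rng or np.random.default_rng(0)
    T, K = losses.shape
    cum_loss = np.zeros(K)
    choices = []
    for t in range(T):
        # Weight each expert by exp(-eta * cumulative loss); subtracting
        # the minimum is a standard shift for numerical stability.
        w = np.exp(-eta * (cum_loss - cum_loss.min()))
        p = w / w.sum()
        choices.append(int(rng.choice(K, p=p)))
        cum_loss += losses[t]
    return choices
```

In the paper's setting, each "expert" would correspond to an approval strategy at a given level of optimism, and its per-round loss to the observed performance of the algorithm version that strategy approves.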
