Learning Representations for Prediction of Next Patient State
Taylor Killian, Jayakumar Subramanian, Mehdi Fatemi, Marzyeh Ghassemi
Abstract: Reinforcement Learning (RL) has recently been applied to several problems in healthcare, with a particular focus on offline learning from observational data. RL relies on latent states that embed sequential observations such that the embedding is sufficient to approximately predict the next observation; however, the appropriate construction of such states in healthcare settings is an open question, as the variation in steady-state human physiology is poorly understood. In this work, we evaluate several information encoding schemes for offline RL using data from electronic health records (EHR). We use observations from septic patients in the MIMIC-III intensive care unit dataset, and evaluate the predictive performance of four embedding approaches on two tasks: predicting the next observation, and predicting a ``k-step'' look-ahead or rollout. Our experiments highlight that the best performing state representation learning approaches utilize higher-dimensional recurrent neural architectures, and demonstrate that incorporating additional context with the state representation improves performance when predicting the next observation.