CHIL CFP Track 4, Practice: Deployments, Systems, and Datasets

Track Chairs: Dr. Leo Celi, Dr. Stephanie Hyland, Dr. Danielle Belgrave, Dr. Katherine Heller, Dr. Alistair Johnson

Description

The transformation of healthcare through computational approaches depends on understanding how to empirically evaluate these systems, on widely shared tools for conducting research, and on publicly accessible data that allows fair comparison of methods. This track seeks descriptions of the implementation or evaluation of informatics-based studies, computer software with direct utility for medical researchers, and new datasets that support healthcare research.

Informatics-based studies should primarily focus on evaluating computational systems in clinical practice. Examples include applications of predictive modeling [1], deployment of a clinical decision support system [2], or evaluation of the impact of digital user interface modifications on routine practice [3].
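
To make evaluation in clinical practice concrete, the sketch below scores a hypothetical deployed risk model's retrospective predictions for discrimination and calibration. It is an illustrative sketch only: the file name and column names are invented, and the snippet is not drawn from any of the cited studies.

    # Illustrative sketch: scoring a deployed risk model's predictions
    # against observed outcomes. File and column names are hypothetical.
    import pandas as pd
    from sklearn.calibration import calibration_curve
    from sklearn.metrics import roc_auc_score

    preds = pd.read_csv("deployment_predictions.csv")  # hypothetical export

    # Discrimination: area under the ROC curve.
    auroc = roc_auc_score(preds["outcome"], preds["risk_score"])
    print(f"AUROC: {auroc:.3f}")

    # Calibration: observed event rate within each bin of predicted risk.
    observed, predicted = calibration_curve(
        preds["outcome"], preds["risk_score"], n_bins=10)
    for obs, pred in zip(observed, predicted):
        print(f"predicted risk {pred:.2f} -> observed rate {obs:.2f}")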

Computer software submissions should describe the intended use of the software, justify the need for it, and provide executable examples for other researchers. Software submissions should directly support a healthcare application. Examples include code for summarizing the demographics of a study cohort [4], deriving meaningful clinical concepts from electronic health records [5], and natural language processing tools designed specifically for clinical text [6]. All computer software submissions must be open source and released under a suitable open-source license. Computer software should adhere to best practices in software development where possible, including the use of unit tests, continuous integration, and diligent documentation of component design and purpose [7].
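
As a sketch of what an executable example with an accompanying unit test might look like, the snippet below exercises the tableone package cited above [4]; the toy cohort is invented for illustration, and the pytest-style test is one possible check rather than a prescribed template.

    # Sketch of an executable example plus a unit test for a software
    # submission, using the tableone package [4]; the cohort is a toy.
    import pandas as pd
    from tableone import TableOne

    def make_cohort():
        # A real example would load study data; this toy frame stands in.
        return pd.DataFrame({
            "age": [54, 67, 45, 72, 60, 58],
            "sex": ["F", "M", "F", "M", "F", "M"],
            "died": [0, 1, 0, 1, 0, 0],
        })

    def test_summary_mentions_every_variable():
        # pytest-style check: the rendered table should name each variable.
        table = TableOne(make_cohort(), columns=["age", "sex"],
                         categorical=["sex"], groupby="died")
        rendered = table.tabulate(tablefmt="github")
        assert "age" in rendered and "sex" in rendered

    if __name__ == "__main__":
        table = TableOne(make_cohort(), columns=["age", "sex"],
                         categorical=["sex"], groupby="died")
        print(table.tabulate(tablefmt="github"))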

Descriptions of databases to support biomedical or health research are welcome. Dataset publications should focus on helping others reuse the data, rather than on demonstrating new insights or techniques. Each dataset should be accompanied by a detailed description covering the methods used to collect the data, the structure of the records, technical analyses supporting the quality of the data, and executable code demonstrating its use. In terms of scope, we welcome datasets both large [8] and small, so long as there is potential for a direct healthcare application. Datasets should be publicly available in an appropriate data repository with reasonable mechanisms for granting external researchers access. Examples of suitable data repositories include, but are not limited to, Dryad, FigShare, PhysioNet, Synapse, or a university-established data repository.
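
To make the executable-code requirement concrete, the snippet below loads a hypothetical record file, runs basic integrity checks, and prints an orienting summary; the file and column names are placeholders rather than references to any specific dataset.

    # Sketch of a usage example that might accompany a dataset submission.
    # The CSV name and columns are hypothetical placeholders.
    import pandas as pd

    records = pd.read_csv("cohort_records.csv", parse_dates=["admit_time"])

    # Basic integrity checks a data descriptor might document.
    assert records["subject_id"].notna().all(), "every record needs a subject"
    assert records["age"].between(0, 120).all(), "ages should be plausible"

    # A first look that helps a new user orient themselves in the data.
    print(records.describe(include="all"))
    print(records.groupby(records["admit_time"].dt.year).size())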

Examples

Evaluation of deployed applications, systems and software

1. Corey KM, Kashyap S, Lorenzi E, Lagoo-Deenadayalan SA, Heller K, Whalen K, Balu S, Heflin MT, McDonald SR, Swaminathan M, Sendak M. Development and validation of machine learning models to identify high-risk surgical patients using automatically curated electronic health record data (Pythia): A retrospective, single-site study. PLoS Medicine. 2018 Nov 27;15(11):e1002701.

2. Henry K, et al. Can septic shock be identified early? Evaluating performance of a targeted real-time early warning score (TREWScore) for septic shock in a community hospital: global and subpopulation performance. American Thoracic Society International Conference; 2017. A7016.

Openly available computer software that supports healthcare research

3. Ghassemi M, Pushkarna M, Wexler J, Johnson J, Varghese P. ClinicalVis: Supporting Clinical Task-Focused Design Evaluation. arXiv preprint arXiv:1810.05798. 2018 Oct 13.

4. Pollard TJ, Johnson AE, Raffa JD, Mark RG. tableone: An open source Python package for producing summary statistics for research papers. JAMIA Open. 2018 May 23;1(1):26-31.

5. Johnson AE, Stone DJ, Celi LA, Pollard TJ. The MIMIC Code Repository: enabling reproducibility in critical care research. Journal of the American Medical Informatics Association. 2017 Sep 27;25(1):32-9.

6. Peng Y, Wang X, Lu L, Bagheri M, Summers R, Lu Z. NegBio: a high-performance tool for negation and uncertainty detection in radiology reports. AMIA Summits on Translational Science Proceedings. 2018;2018:188.

7. Wilson G, Aruliah DA, Brown CT, Hong NP, Davis M, Guy RT, Haddock SH, Huff KD, Mitchell IM, Plumbley MD, Waugh B. Best practices for scientific computing. PLoS Biology. 2014 Jan 7;12(1):e1001745.

Publicly available medical, clinical, or otherwise health-related datasets

8. Irvin J, Rajpurkar P, Ko M, Yu Y, Ciurea-Ilcus S, Chute C, Marklund H, Haghgoo B, Ball R, Shpanskaya K, Seekins J. CheXpert: A large chest radiograph dataset with uncertainty labels and expert comparison. arXiv preprint arXiv:1901.07031. 2019 Jan 21.