The ACM Conference on Health, Inference, and Learning (CHIL) solicits work across a variety of disciplines, including machine learning, statistics, epidemiology, health policy, operations, and economics. ACM-CHIL 2021 invites submissions on topics relevant to problems affecting health. Authors can submit papers to any of the three tracks. See below for specific guidelines on formatting and submitting your paper.
To ensure that all submissions to ACM-CHIL are reviewed by a knowledgeable and appropriate set of reviewers, the conference is divided into tracks and areas of interest. Authors will select exactly one primary track and area of interest when they register their submissions, in addition to one or more sub-disciplines.
Track Chairs will oversee the reviewing process. If you are not sure which track your submission fits under, feel free to contact the Track or Proceedings Chairs for clarification. The Proceedings Chairs reserve the right to move submissions between tracks and/or areas of interest if they believe that a submission has been misclassified.
UPDATE: We are extending the abstract and paper submission deadlines by 3 days. Abstracts are recommended by January 10th, and submissions are due by January 14th, both 11:59 pm AoE. In addition, we have changed the abstract deadline to be a recommendation rather than a requirement -- submitting an abstract by this deadline is no longer required to submit a full paper.
- Abstracts due (recommended) – January 10, 2021 (11:59 pm AoE)
- Submissions due – January 14, 2021 (11:59 pm AoE)
- Notification of Acceptance – February 15, 2021 (11:59 pm AoE)
- Camera Ready Due – March 5, 2021 (11:59 pm AoE)
- Conference Date – April 8-10, 2021
- Track 1: Models and Methods
- Track 2: Applications and Practice
- Track 3: Policy: Impact and Society
These are called topics in the submission form. Authors should select one or more discipline(s) in machine learning for health (ML4H) from the following list when submitting their paper: benchmark datasets, distribution shift, transfer learning, population health, social networks, scalable ML4H systems, natural language processing (NLP), computer vision, time series, bias/fairness, causality, *-omics, wearable-data, etc. Peer reviewers are assigned according to expertise in the sub-discipline(s) selected, so please choose your relevant topics carefully.
Works submitted to ACM-CHIL will be reviewed by three reviewers within the broader field of machine learning for healthcare. Reviewers will be asked to judge the work primarily according to five criteria:
Relevance: All submissions to ACM-CHIL are expected to be relevant to health. Concretely, this means that the problem is well-placed into the relevant themes for the conference. We will instruct reviewers to gauge whether submissions are best suited for this track, or should be moved elsewhere. Track chairs reserve the right to change a paper’s track at any point if they feel that it is a more suitable submission in a different track.
Quality: Is the submission technically sound? Are claims well supported by theoretical analysis or experimental results? Is this a complete piece of work or work in progress? Are the authors careful and honest about evaluating both the strengths and weaknesses of their work?
Originality: Are the tasks or methods new? Is the work a novel combination of well-known techniques? Is it clear how this work differs from previous contributions? Is related work adequately cited?
Clarity: Is the submission clearly written? Is it well organized? (If not, please make constructive suggestions for improving its clarity.) Does it adequately inform the reader? (Note: a superbly written paper provides enough information for an expert reader to reproduce its results.)
Significance: Are the results important? Are others (researchers or practitioners) likely to use the ideas or build on them? Does the submission address a difficult task in a better way than previous work? Does it advance the state of the art in a demonstrable way? Does it provide unique data, unique conclusions about existing data, or a unique theoretical or experimental approach?
Authors will be given an opportunity to respond to reviews with a brief comment (maximum of 250 words for a paper; violations of this word limit may result in the paper being desk-rejected). Final paper acceptance/rejection decisions will be made accounting for the reviewers' overall judgment, their subjective ratings of confidence/expertise, the author response (if there is one), and our own editorial judgment.
Submission Format and Guidelines
Submission Site
Submissions should be made via the online submission system. At least one author of each accepted paper is required to register for, attend, and present the work at the conference in order for the paper to appear in the conference proceedings in the ACM Digital Library.
Length and Formatting
Submitted papers must be no longer than 10 pages (excluding references and appendices); the recommended length is 7-10 pages. Additional supplementary materials (e.g., appendices) can be submitted with the main manuscript. Reviewers will not be required to read the supplementary materials.
Papers should be formatted using the ACM Master Article Template and the reference format indicated therein. For LaTeX users, choose format=sigconf. An Overleaf template is included here. ACM also makes a Word template available. Authors do not need to include terms, keywords, or other front matter in their submissions. Papers that are not in ACM format or that exceed the specified page length may be rejected without review.
Archival Submissions
Submissions to the main conference are considered archival and will appear in the published proceedings of the conference if accepted. Authors will be notified of acceptance in mid-February 2021.
The review process is double-blind. Please submit completely anonymized drafts. Do not include any identifying information, and refer to the authors' own prior work only in the third person. Violations of this policy may result in rejection without review.
Conference organizers and reviewers are required to maintain confidentiality of submitted material. Upon acceptance, the titles, authorship, and abstracts of papers will be released prior to the conference.
For accepted papers, authors will need to provide the following camera-ready materials by March 5:
- Metadata for the eRights system
- Submit forms for approval
- Final versions of papers
Dual Submission Policy
You may not submit papers that are identical or substantially similar to versions that are currently under review at another conference or journal, have been previously published, or have been accepted for publication.
An exception to this rule is papers that have previously appeared in non-archival venues without formal proceedings, such as workshops or arXiv. These works may be submitted as-is or in extended form. ACM-CHIL also welcomes full paper submissions that extend previously published short papers or abstracts, so long as the previously published version does not exceed 4 pages in length. Note that the submission should not cite the workshop paper/report and should preserve anonymity in the submitted manuscript.
ACM-CHIL is committed to open science and ensuring our proceedings are freely available. The conference will make use of the ‘ACM Authorizer “Open Access” Service’ and ‘ACM OpenTOC Service’, allowing unrestricted access to individual papers as well as the overall proceedings; see here for more details.
ACM-CHIL abides by the ethics guidelines provided here: ACM Ethics guidelines.
- Dr. Michael Hughes
- Dr. Shalmali Joshi
- Dr. Rajesh Ranganath
- Dr. Rahul G. Krishnan
Advances in machine learning are critical for a better understanding of health. This track seeks contributions in modeling, inference, and estimation in health-focused or health-inspired settings. We welcome submissions that develop novel methods and algorithms, introduce relevant machine learning tasks and baselines, identify challenges with prevalent approaches, or suggest new evaluation metrics for assessing algorithmic advances. In addition, we welcome new algorithmic techniques for combining non-clinical and clinical data for public and population health applications, algorithms for public health goals, and causal inference in public health settings.
While submissions should address problems relevant to health, the contributions themselves are not required to be directly applied to health. For example, authors may use synthetic datasets and experiments to demonstrate the properties of algorithms.
We welcome submissions that address research questions within any subfield of machine learning, broadly construed. The list of subfields below represents some areas of relevance we have identified in advance, but we also welcome submissions from other subfields.
- Supervised learning
- Semi-supervised learning
- Few-shot learning
- Federated learning
- Unsupervised learning
- Transfer learning
- Domain adaptation and generalization
- Representation learning
- Causal inference
- Survival analysis
- Reinforcement learning
- Algorithmic fairness
- Deep learning
- Bayesian methods
- Structured learning
- Adversarial learning
- Robust statistics
- Distribution shift
- Computer vision
- Natural language processing
- Electronic health record data
- Spatio-temporal data
- Claims data
- Social determinants of health
- Knowledge graphs
- Mobile health
Shalit, Uri, Fredrik D. Johansson, and David Sontag. "Estimating individual treatment effect: generalization bounds and algorithms." Proceedings of the 34th International Conference on Machine Learning, Volume 70. JMLR.org, 2017.
Choi, Edward, et al. "MiME: Multilevel medical embedding of electronic health records for predictive healthcare." Advances in Neural Information Processing Systems. 2018.
McDermott, Matthew BA, et al. "Semi-supervised biomedical translation with cycle Wasserstein regression GANs." Thirty-Second AAAI Conference on Artificial Intelligence. 2018.
Futoma, Joseph, Sanjay Hariharan, and Katherine Heller. "Learning to detect sepsis with a multitask Gaussian process RNN classifier." Proceedings of the 34th International Conference on Machine Learning, Volume 70. JMLR.org, 2017.
Janizek, Joseph D., et al. "An adversarial approach for the robust classification of pneumonia from chest radiographs." Proceedings of the ACM Conference on Health, Inference, and Learning 2020.
Mate, Aditya, et al. "Collapsing Bandits and Their Application to Public Health Interventions." Advances in Neural Information Processing Systems 2020.
- Dr. Tom Pollard
- Dr. Bobak Mortazavi
- Dr. Andrew Beam
- Dr. Uri Shalit
The goal of this track is to highlight works applying robust methods, models, or practices to identify, characterize, audit, evaluate, or benchmark machine learning systems in applied healthcare settings. These include examples of systems deployed in practice, and datasets used to empirically evaluate these systems.
Areas of Interest
All areas of machine learning and all types of data within healthcare are relevant to this track. An example set of topics of interest and exemplar papers are shown below. These examples are by no means exhaustive and are meant as illustration and motivation. Submit your work here if the contribution is one of the following:
- Focused on solving a carefully motivated problem grounded in an application;
- Focused on a deployment of a system; or
- Focused on describing data or software packages.
Introducing a new method is not prohibited for this track, but the focus should be on methods designed to work robustly in real-world applications (e.g., fail gracefully in practice), work that highlights approaches that scale particularly well in terms of computational efficiency or data requirements, and work that succeeds across real-world data modalities and systems. In other words, we want compelling demonstrations of systems that address real-world problems in healthcare. These include careful examinations of ML systems on real-world data, comparisons of performance in cohort analyses, challenges in application development, tools for dataset shift, adversarial shift, personalization, and models for remote and wearable health.
This track also welcomes submissions of significant computer software which support healthcare research and applications. Submissions should describe the intended use for the software, justify the need for the software, provide executable examples for other researchers, and adhere to best practices in software development where possible, including the use of unit tests, continuous integration, and diligent documentation of component design and purpose. Software submissions should directly support a healthcare application. All computer software submissions must be open source and released under a suitable open source license.
Careful examinations of the robustness of ML systems to real-world dataset shift or adversarial shift, or of their performance on minority subpopulations.
- Nestor, Bret, et al. “Feature Robustness in Non-Stationary Health Records: Caveats to Deployable Model Performance in Common Clinical Machine Learning Tasks.” Proceedings of Machine Learning for Healthcare 2019 (MLHC ’19), 2019, https://www.mlforhc.org/s/Nestor.pdf.
- Finlayson, Samuel G., et al. "Adversarial attacks on medical machine learning." Science 363.6433 (2019): 1287-1289.
Investigations into model performance on minority subpopulations, and the implications thereof.
- Boag, Willie, et al. "Racial Disparities and Mistrust in End-of-Life Care." Machine Learning for Healthcare Conference. 2018. https://www.mlforhc.org/s/2.pdf
- Chen, Irene Y., Peter Szolovits, and Marzyeh Ghassemi. "Can AI Help Reduce Disparities in General Medical and Mental Health Care?." AMA journal of ethics 21.2 (2019): 167-179. https://journalofethics.ama-assn.org/article/can-ai-help-reduce-disparities-general-medical-and-mental-health-care/2019-02
Scalable, safe machine learning / inference in clinical environments
- Henderson, Jette, et al. "Phenotype instance verification and evaluation tool (PIVET): A scaled phenotype evidence generation framework using web-based medical literature." Journal of medical Internet research 20.5 (2018): e164.
New tools or comprehensive benchmarks for machine learning for healthcare.
- Wang, Shirly, et al. "MIMIC-Extract: A Data Extraction, Preprocessing, and Representation Pipeline for MIMIC-III." Machine Learning for Healthcare, 2019.
Development of scalable systems for processing data in practice (demonstrating, e.g., concern for multi-modality, runtime, robustness, etc., as guided by a clinical use case):
- Xu, Yanbo, et al. "Raim: Recurrent attentive and intensive model of multimodal patient monitoring data." Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, 2018.
Bridging the deployment gap
- Tonekaboni, Sana, et al. "What Clinicians Want: Contextualizing Explainable Machine Learning for Clinical End Use." Machine Learning for Healthcare (2019)
Remote, wearable, telehealth, and public health
- Wei Q, Wang Z, Hong H, Chi Z, Feng DD, Grunstein R, Gordon C. A Residual based Attention Model for EEG based Sleep Staging. IEEE Journal of Biomedical and Health Informatics. 2020 Mar 3.
- Pollard TJ, Johnson AE, Raffa JD, Mark RG. tableone: An open source Python package for producing summary statistics for research papers. JAMIA Open. 2018 May 23;1(1):26-31.
- Johnson AE, Stone DJ, Celi LA, Pollard TJ. The MIMIC Code Repository: enabling reproducibility in critical care research. Journal of the American Medical Informatics Association. 2017 Sep 27;25(1):32-9.
- Peng Y, Wang X, Lu L, Bagheri M, Summers R, Lu Z. NegBio: a high-performance tool for negation and uncertainty detection in radiology reports. AMIA Summits on Translational Science Proceedings. 2018;2018:188.
- Dr. Alistair Johnson
- Dr. Rumi Chunara
- Dr. George Chen
Algorithms do not exist in the digital world alone: indeed, they often explicitly take aim at important social outcomes. This track considers issues at the intersection of algorithms and the societies they seek to impact, specifically with respect to health. Submissions could include methodological contributions such as algorithmic development and performance evaluation for policy and public health applications, combining clinical and non-clinical data, as well as detecting and measuring bias. Submissions could also include impact-oriented research such as determining how algorithmic systems for health may introduce, exacerbate, or reduce inequities and inequalities, discrimination, and unjust outcomes, as well as evaluating the economic implications of these systems. Submissions related to understanding barriers to deployment and adoption of algorithmic systems for societal-level health applications are also of interest. In addressing these problems, insights from social sciences, law, clinical medicine, and the humanities can be crucial.
We welcome papers from various areas of interest (see list below); authors should select at least one sub-discipline upon abstract registration.
Areas of Interest
- Fairness, equity, ethics and justice
- Model implementation, deployment, and adoption
- Policy, public health, and societal impact of algorithms
- System design for implementation of ML at scale
- Regulatory frameworks
- Tools for adoption of ML
- Evaluation of bias in legal and/or health contexts
- Human-algorithm interaction
Bolukbasi, Tolga, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. "Quantifying and reducing stereotypes in word embeddings." arXiv:1606.06121 [cs.CL], 2016.
Kleinberg, Jon, and Sendhil Mullainathan. "Simplicity creates inequity: implications for fairness, stereotypes, and interpretability." Proceedings of the 2019 ACM Conference on Economics and Computation. 2019.
Yang, Wanqian, Lars Lorch, Moritz Gaule, Himabindu Lakkaraju, and Finale Doshi-Velez. "Incorporating Interpretable Output Constraints in Bayesian Neural Networks." Advances in Neural Information Processing Systems (NeurIPS), 2020.
Bhatt, Umang, Alice Xiang, Shubham Sharma, Adrian Weller, Ankur Taly, Yunhan Jia, Joydeep Ghosh, Ruchir Puri, José MF Moura, and Peter Eckersley. "Explainable machine learning in deployment." In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 648-657. 2020.