CFP Track 3: Policy

CHIL CFP Track 3, Policy: Impact, Economics, and Society

Track Chairs: Dr. Laura Rosella, Dr. Ziad Obermeyer, Dr. Avi Goldfarb, Dr. Tom Pollard, Dr. Rajesh Ranganath, Dr. Rumi Chunara


Algorithms do not exist in the digital world alone: indeed, they often explicitly take aim at important social outcomes. This track considers issues at the intersection of algorithms and the societies they seek to impact. It welcomes theoretical, methodological, and applied contributions on understanding and accounting for the fairness, accountability, and transparency of algorithmic systems, and on societal applications including mitigating discrimination and inequality, public health, health systems, and policy, as well as other societal impacts of deploying such systems in real-world contexts. Given the societal implications of this focus, the track includes work using data that fall outside traditional clinical data streams, spanning health and non-health sources such as demographic data, online data streams, and environmental and climate data. It also includes the development of machine learning methods relevant to policy and public health, and new methods for working with data related to broader societal applications.

We welcome papers from various sub-disciplines. Upon abstract registration, paper submissions must indicate at least one area of interest and at least one sub-discipline (see lists below).

Understanding includes detecting and measuring how and which forms of bias are manifested in datasets and models; determining how algorithmic systems may introduce, exacerbate, or reduce inequities, discrimination, and unjust outcomes; measuring the efficacy of existing techniques for explaining and interpreting automated decisions; and evaluating perceptions of fairness and algorithmic bias. Accounting includes the governance of the design, development, and deployment of algorithmic systems, taking into consideration all stakeholders and interactions with socio-technical systems. Mitigating includes introducing techniques for data collection, analysis, and processing that measure, incorporate, and acknowledge the selection bias and discrimination that may be present in datasets and models; formalizing fairness objectives based on notions from the social sciences, law, and humanistic studies; building socio-technical systems that incorporate these insights to minimize harm to historically disadvantaged communities and empower them; and introducing methods for decision validation, correction, and participation in co-designing algorithmic systems.

Relevant methodological topics include: methods for combining non-clinical and clinical data; development of multi-level machine learning models (e.g., combining individual- and population-level information); and methods for generating spatial or temporal features relevant to health from noisy point observations.
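To make the multi-level modeling topic above concrete, here is a minimal sketch (with hypothetical field names and toy data) of combining individual-level records with a population-level aggregate, so a downstream model can draw on both levels of information:

```python
# Sketch of multi-level feature construction (hypothetical fields):
# each individual record is augmented with an aggregate computed over
# everyone in the same region, giving a model access to both
# individual- and population-level information.

from collections import defaultdict

individuals = [
    {"id": 1, "region": "north", "age": 34},
    {"id": 2, "region": "north", "age": 58},
    {"id": 3, "region": "south", "age": 41},
]

# Population-level feature: mean age per region.
totals = defaultdict(lambda: [0, 0])  # region -> [sum of ages, count]
for row in individuals:
    totals[row["region"]][0] += row["age"]
    totals[row["region"]][1] += 1
region_mean_age = {r: s / n for r, (s, n) in totals.items()}

# Multi-level rows: individual features plus the regional aggregate.
features = [
    {**row, "region_mean_age": region_mean_age[row["region"]]}
    for row in individuals
]
print(features[0])
# {'id': 1, 'region': 'north', 'age': 34, 'region_mean_age': 46.0}
```

Real submissions would of course use richer aggregates (area-level deprivation indices, environmental exposures, etc.); the point is only the join of the two levels.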


  1. System design for implementation of ML at scale in healthcare: methods and techniques for evaluating computer systems within an existing regulatory framework, methods for establishing new regulatory guidelines, and tools for enabling adoption of ML within large healthcare organizations. Examples include evaluating black-box models and bias against a legal standard, and complementary intangible capital, including training and processes.
  2. Methods to audit, measure, and evaluate fairness and bias: methods and techniques to check and measure the fairness (or unfairness) of existing computing systems and to assess associated risks. Examples include metrics and formal testing procedures to evaluate fairness, quantify the risk of fairness violations, or explicitly show tradeoffs.
  3. Examination of the public health and policy implications of machine learning in existing computing systems. Examples include the explanation of black boxes and counterfactual and what-if reasoning.
  4. Methods for combining non-clinical and clinical data for population health applications.
  5. Methods involving human factors and humans-in-the-loop: methods and techniques that center on the human-machine relationship. Examples include visual analytics for fairness exploration, cognitive evaluation of explanations, and systems that combine human and algorithmic elements.
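As a toy illustration of the kind of fairness metric referenced in area 2 above, the following sketch (hypothetical predictions and group labels; demographic parity is just one of many possible criteria) computes the gap in positive-prediction rates between two demographic groups:

```python
# Minimal fairness-audit sketch (hypothetical data):
# demographic parity difference = |P(yhat=1 | group A) - P(yhat=1 | group B)|.

def positive_rate(predictions, groups, group):
    """Fraction of positive predictions within one demographic group."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between two groups."""
    labels = sorted(set(groups))
    rates = [positive_rate(predictions, groups, g) for g in labels]
    return abs(rates[0] - rates[1])

# Toy example: binary predictions for six patients in two groups.
preds  = [1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "B", "B", "B"]
gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity difference: {gap:.2f}")  # |2/3 - 1/3| = 0.33
```

Submissions in this area would typically go further: formal testing procedures, confidence bounds on such gaps, or explicit trade-off analyses across multiple fairness criteria.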

Example papers

Bolukbasi T, Chang K-W, Zou J, Saligrama V, Kalai A. 2016. Quantifying and reducing stereotypes in word embeddings. arXiv:1606.06121 [cs.CL]

Chu KH, Colditz J, Malik M, Yates T, Primack B. 2019. Identifying Key Target Audiences for Public Health Campaigns: Leveraging Machine Learning in the Case of Hookah Tobacco Smoking. J Med Internet Res 21(7):e12443.

Dranove D, Forman C, Goldfarb A, Greenstein S. 2014. The Trillion Dollar Conundrum: Complementarities and Health Information Technology. American Economic Journal: Economic Policy 6(4):239-70.

Neill DB, Moore AW, Sabhnani M, Daniel K. 2005. Detection of emerging space-time clusters. In Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery in Data Mining, pp. 218-227. ACM.