VisualCheXbert: Addressing the Discrepancy Between Radiology Report Labels and Image Labels

Saahil Jain and Akshay Smit (Stanford University), Steven QH Truong, Chanh DT Nguyen, and Minh-Thanh Huynh (VinBrain), Mudit Jain (unaffiliated), Victoria A. Young, Andrew Y. Ng, Matthew P. Lungren, and Pranav Rajpurkar (Stanford University)


Abstract: Automatic extraction of medical conditions from free-text radiology reports is critical for supervising computer vision models to interpret medical images. In this work, we show that radiologists labeling reports significantly disagree with radiologists labeling corresponding chest X-ray images, which reduces the quality of report labels as proxies for image labels. We develop and evaluate methods to produce labels from radiology reports that have better agreement with radiologists labeling images. Our best-performing method, called VisualCheXbert, uses a biomedically-pretrained BERT model to directly map from a radiology report to the image labels, with a supervisory signal determined by a computer vision model trained to detect medical conditions from chest X-ray images. We find that VisualCheXbert outperforms an approach using an existing radiology report labeler by an average F1 score of 0.14 (95% CI 0.12, 0.17). We also find that VisualCheXbert better agrees with radiologists labeling chest X-ray images than do radiologists labeling the corresponding radiology reports, by an average F1 score across several medical conditions of between 0.12 (95% CI 0.09, 0.15) and 0.21 (95% CI 0.18, 0.24).
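
To illustrate the core idea, the sketch below (not the authors' released code) shows how a report-to-image-label mapper of this kind could be set up: a biomedically pretrained BERT encoder produces per-condition logits from report text, and the training targets are the predictions of a vision model on the paired chest X-ray rather than labels extracted from the report. The checkpoint name, the condition subset, and the `vision_model` and `training_step` interfaces are illustrative assumptions.

```python
# Minimal sketch of a report-to-image-label mapper supervised by a vision model.
# Assumptions (not from the paper): the Bio_ClinicalBERT checkpoint, the
# four-condition subset, and the `vision_model` callable that returns one
# logit per condition for a batch of chest X-ray images.

import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

CONDITIONS = ["Atelectasis", "Cardiomegaly", "Edema", "Pleural Effusion"]  # illustrative subset

class ReportToImageLabeler(nn.Module):
    def __init__(self, encoder_name="emilyalsentzer/Bio_ClinicalBERT",
                 num_labels=len(CONDITIONS)):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        self.head = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]   # [CLS] token representation
        return self.head(cls)               # one logit per condition

tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
model = ReportToImageLabeler()
criterion = nn.BCEWithLogitsLoss()          # multi-label objective
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def training_step(report_texts, paired_images, vision_model):
    """One step: supervise the report labeler with the vision model's
    predictions on the corresponding chest X-ray images."""
    with torch.no_grad():
        targets = torch.sigmoid(vision_model(paired_images))  # image-derived soft labels
    batch = tokenizer(report_texts, padding=True, truncation=True,
                      max_length=512, return_tensors="pt")
    logits = model(batch["input_ids"], batch["attention_mask"])
    loss = criterion(logits, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The key design choice this sketch captures is that the text model is never trained against report-derived labels; its supervisory signal comes entirely from what a trained image classifier predicts for the paired X-ray, which is what pushes the report labels toward agreement with radiologists reading images.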
