Federated Multilingual Models for Medical Transcript Analysis

Andre Manoel* (Microsoft), Mirian Del Carmen Hipolito Garcia (Microsoft), Tal Baumel (Microsoft), Shize Su (Microsoft), Jialei Chen (Microsoft), Robert Sim (Microsoft), Dan Miller (Airbnb), Danny Karmon (Google), Dimitrios Dimitriadis (Amazon)

Abstract: Federated Learning (FL) is a machine learning approach that allows the model trainer to access more data samples by training across multiple decentralized data sources while enforcing data access constraints. Models trained this way can achieve significantly higher performance than models trained on any single data source. In an FL setting, none of the training data is ever transmitted to a central location; i.e., sensitive data remains local and private. These characteristics make FL well suited to applications in healthcare, where a variety of compliance constraints restrict how data may be handled. Despite these apparent benefits in compliance and privacy, certain scenarios, such as heterogeneity of the local data distributions, pose significant challenges for FL. Such challenges are even more pronounced in a multilingual setting. This paper presents an FL system for pre-training a large-scale multilingual model suitable for fine-tuning on downstream tasks such as medical entity tagging. Our work represents one of the first such production-scale systems, capable of training across multiple highly heterogeneous data providers and achieving levels of accuracy that could not otherwise be achieved with centralized training on public data alone. We also show that the global model's performance can be further improved by an additional local training step.
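The training loop the abstract describes — clients train locally on data that never leaves their site, and a server aggregates only the resulting weights, optionally followed by a local personalization step — can be sketched as a minimal federated averaging (FedAvg) loop. This is an illustrative toy, not the paper's implementation: the model is a two-parameter linear regressor, and `local_update` and the client datasets are hypothetical stand-ins.

```python
# Minimal FedAvg sketch: broadcast global weights, run local SGD on each
# client, then take a data-size-weighted average of the returned weights.
# Only weight vectors cross the client/server boundary; raw data stays local.
# Toy example, not the paper's actual system.

def local_update(weights, data, lr=0.1, epochs=1):
    """One client's local SGD on a least-squares objective.

    Each data point is (x, y); the model predicts w0 + w1 * x.
    """
    w0, w1 = weights
    for _ in range(epochs):
        for x, y in data:
            err = (w0 + w1 * x) - y
            w0 -= lr * err          # gradient of 0.5*err^2 w.r.t. w0
            w1 -= lr * err * x      # gradient of 0.5*err^2 w.r.t. w1
    return [w0, w1]

def fedavg(global_weights, client_datasets, rounds=200):
    """Server loop: broadcast, local training, weighted averaging."""
    for _ in range(rounds):
        updates, sizes = [], []
        for data in client_datasets:
            updates.append(local_update(list(global_weights), data))
            sizes.append(len(data))
        total = sum(sizes)
        global_weights = [
            sum(n / total * u[i] for u, n in zip(updates, sizes))
            for i in range(len(global_weights))
        ]
    return global_weights

# Two heterogeneous clients whose data follow the same line y = 1 + 2x.
clients = [
    [(0.0, 1.0), (1.0, 3.0)],
    [(2.0, 5.0), (3.0, 7.0), (4.0, 9.0)],
]
global_w = fedavg([0.0, 0.0], clients)      # approaches [1.0, 2.0]
# A client can then personalize the global model with extra local steps,
# mirroring the local training step mentioned in the abstract.
personal_w = local_update(list(global_w), clients[0], epochs=5)
```

In a realistic deployment the averaged object would be the full parameter set of a large multilingual transformer rather than two scalars, but the communication pattern is the same.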