News
New Paper Published in the Journal ACM Transactions on Computing for Healthcare: Reward Systems for Trustworthy Medical Federated Learning
Federated learning (FL) enables institutions to collaboratively train machine learning models without sharing raw patient data, an approach that is especially valuable in sensitive domains such as medicine. However, while FL helps protect privacy, the resulting models can still suffer from bias, leading to unfair treatment of specific patient subgroups. Existing incentive systems in FL typically reward only predictive performance, which can unintentionally reinforce these biases at an institutional level.
Title: “Reward Systems for Trustworthy Medical Federated Learning”
In the paper, Konstantin Pandl, Florian Leiser, Scott Thiebes, and Ali Sunyaev propose novel reward systems that go beyond performance alone by also measuring and incentivizing contributions to fairness. They evaluate the approach on multiple chest X-ray datasets, with a particular focus on sex- and age-related subgroups. The findings demonstrate that institutions can indeed be rewarded for reducing bias while maintaining high predictive performance. Moreover, label flip experiments illustrate that such reward mechanisms can also encourage better data quality.
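To give a flavor of the general idea, here is a minimal sketch of a fairness-aware reward scheme based on leave-one-out contributions. The scoring rule, the fairness weight, and all numbers below are illustrative assumptions for this post, not the paper's exact formulation: each institution's reward is the drop in a combined utility (accuracy minus a weighted subgroup accuracy gap) when that institution is removed from training.

```python
# Illustrative sketch (not the paper's exact method): reward each
# institution by how much it improves a combined accuracy/fairness
# utility, measured via leave-one-out coalition comparisons.

def fairness_gap(acc_by_group):
    """Absolute accuracy gap between subgroups (e.g., sex or age groups)."""
    return max(acc_by_group.values()) - min(acc_by_group.values())

def utility(metrics, fairness_weight=0.5):
    """Combine predictive performance and fairness into a single score."""
    return metrics["accuracy"] - fairness_weight * fairness_gap(metrics["group_acc"])

def leave_one_out_rewards(coalition_metrics, institutions):
    """Reward = utility drop when an institution is left out of training.

    coalition_metrics maps a frozenset of institutions to the evaluation
    metrics of a model trained on that coalition (assumed measured offline).
    """
    full = frozenset(institutions)
    base = utility(coalition_metrics[full])
    return {
        inst: base - utility(coalition_metrics[full - {inst}])
        for inst in institutions
    }

# Toy example: institution "C" contributes data that widens the subgroup
# gap, so its leave-one-out reward comes out negative.
metrics = {
    frozenset({"A", "B", "C"}): {"accuracy": 0.86, "group_acc": {"male": 0.90, "female": 0.78}},
    frozenset({"B", "C"}):      {"accuracy": 0.82, "group_acc": {"male": 0.86, "female": 0.76}},
    frozenset({"A", "C"}):      {"accuracy": 0.83, "group_acc": {"male": 0.88, "female": 0.76}},
    frozenset({"A", "B"}):      {"accuracy": 0.84, "group_acc": {"male": 0.85, "female": 0.82}},
}

rewards = leave_one_out_rewards(metrics, ["A", "B", "C"])
```

Under these made-up numbers, institutions A and B receive positive rewards, while C, whose data increases the sex-related gap, is penalized despite adding raw accuracy.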
This research offers a concrete step toward developing more trustworthy, fair, and effective federated learning models in healthcare.
Pandl, K. D., Leiser, F., Thiebes, S., & Sunyaev, A. (2023). Reward systems for trustworthy medical federated learning. ACM Transactions on Computing for Healthcare.
https://doi.org/10.1145/3761821
