News
New Article on Privacy in Collaborative Deep Learning Systems accepted for publication in ACM Computing Surveys

How can multiple parties train and use neural networks together without exposing their private data? Collaborative deep learning systems allow organizations and individuals to pool resources for training and inference, but they also introduce serious privacy risks. Privacy-preserving techniques such as differential privacy, homomorphic encryption, secure multi-party computation, and trusted execution environments can help, yet the landscape of system designs integrating these techniques is vast and fragmented across research communities. In this article, we bring order to this complexity: based on 122 publications describing 149 systems, we present a privacy-focused taxonomy with 16 dimensions and 49 characteristics, along with four system archetypes (autonomy, delegation, dyadic, and supervision) that capture the most prevalent designs and their use of privacy-preserving techniques. Our work provides researchers and practitioners with a foundation for consistently describing, comparing, and selecting collaborative deep learning systems for their use cases.
The article is available via open access at: https://dl.acm.org/doi/10.1145/3801094