paper: Efficient Differentially Private Secure Aggregation for Federated Learning via Hardness of Learning with Errors
abstract: Federated machine learning leverages edge computing to develop models from network user data, but privacy in federated learning remains a major challenge. Techniques using differential privacy have been proposed to address this, but they bring their own challenges: many require a trusted third party or else add too much noise to produce useful models. Recent advances in \emph{secure aggregation} using multiparty computation eliminate the need for a third party, but are computationally expensive, especially at scale. We present a new federated learning protocol that leverages a novel differentially private, malicious-secure aggregation protocol based on techniques from Learning With Errors. Our protocol outperforms current state-of-the-art techniques, and empirical results show that it scales to a large number of parties, with optimal accuracy for any differentially private federated learning scheme.
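To make the core idea concrete, below is a minimal, illustrative sketch of LWE-style additive masking for secure aggregation. This is not the paper's actual protocol: the modulus `q`, dimensions, noise range, and the step where the server learns the sum of the secrets (which a real protocol would obtain via secret sharing among clients, never in the clear) are all simplifying assumptions made for the example. Each client hides its update `x_i` behind an LWE sample `A s_i + e_i`; summing the masked updates and subtracting `A * (sum of secrets)` leaves the aggregate plus the summed small noise, which is what can double as differential-privacy noise.

```python
import numpy as np

rng = np.random.default_rng(0)

q = 2**20          # modulus (toy value, chosen for the example)
n = 64             # LWE secret dimension (toy value)
d = 8              # length of each client's update vector
num_clients = 5

# Public random matrix known to everyone (assumed derived from a common seed).
A = rng.integers(0, q, size=(d, n))

secrets, ciphertexts = [], []
true_sum = np.zeros(d, dtype=np.int64)
for _ in range(num_clients):
    x = rng.integers(0, 100, size=d)   # toy "gradient" entries
    s = rng.integers(0, q, size=n)     # per-client LWE secret
    e = rng.integers(-2, 3, size=d)    # small noise; the *sum* of these acts as aggregate noise
    c = (A @ s + x + e) % q            # LWE-masked update sent to the server
    secrets.append(s)
    ciphertexts.append(c)
    true_sum += x

# Server side: sum the masked updates, then strip the aggregate mask.
# (Here the summed secret is computed in the clear purely for illustration;
# a real protocol reveals only the sum, e.g. via secret sharing.)
agg_c = np.sum(ciphertexts, axis=0) % q
agg_s = np.sum(secrets, axis=0) % q
recovered = (agg_c - A @ agg_s) % q    # = sum(x_i) + sum(e_i) mod q
```

The key property the sketch shows is that individual updates stay hidden behind their LWE masks, while the server recovers the aggregate up to the small summed noise term.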