【Group Meeting Share】《Gaussian Differential Privacy》

《Gaussian Differential Privacy》: a new hypothesis-testing-based definition of differential privacy.

Abstract: Differential privacy has seen remarkable success as a rigorous and practical formalization of data privacy in the past decade. This privacy definition and its divergence-based relaxations, however, have several acknowledged weaknesses, either in handling composition of private algorithms or in analyzing important primitives like privacy amplification by subsampling. Inspired by the hypothesis testing formulation of privacy, this paper proposes a new relaxation, which we term "f-differential privacy" (f-DP). This notion of privacy has a number of appealing properties and, in particular, avoids difficulties associated with divergence-based relaxations. First, f-DP preserves the hypothesis testing interpretation. In addition, f-DP allows for lossless reasoning about composition in an algebraic fashion. Moreover, we provide a powerful technique to import existing results proven for original DP to f-DP and, as an application, obtain a simple subsampling theorem for f-DP. In addition to the above findings, we introduce a canonical single-parameter family of privacy notions within the f-DP class that is referred to as "Gaussian differential privacy" (GDP), defined based on testing two shifted Gaussians. GDP is focal among the f-DP class because of a central limit theorem we prove. More precisely, the privacy guarantees of any hypothesis-testing-based definition of privacy (including original DP) converge to GDP in the limit under composition. The CLT also yields a computationally inexpensive tool for analyzing the exact composition of private algorithms. Taken together, this collection of attractive properties renders f-DP a mathematically coherent, analytically tractable, and versatile framework for private data analysis. Finally, we demonstrate the use of the tools we develop by giving an improved privacy analysis of noisy stochastic gradient descent.
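To make the "testing two shifted Gaussians" idea above concrete, here is a minimal sketch of the trade-off-function formulation behind f-DP and GDP; the notation below is reconstructed from memory and should be read as an assumption rather than a quotation of the paper:

\[
T(P, Q)(\alpha) \;=\; \inf_{\phi}\bigl\{\, 1 - \mathbb{E}_{Q}[\phi] \;:\; \mathbb{E}_{P}[\phi] \le \alpha \,\bigr\},
\qquad
G_\mu(\alpha) \;=\; \Phi\!\bigl(\Phi^{-1}(1-\alpha) - \mu\bigr),
\]
\[
M \text{ is } \mu\text{-GDP} \;\Longleftrightarrow\;
T\bigl(M(S),\, M(S')\bigr) \;\ge\; G_\mu \quad \text{for all neighboring datasets } S, S',
\]

where \(\phi\) ranges over rejection rules (tests), \(\Phi\) is the standard normal CDF, and \(G_\mu = T\bigl(\mathcal{N}(0,1), \mathcal{N}(\mu,1)\bigr)\) is the trade-off function for testing two unit-variance Gaussians shifted by \(\mu\). A larger trade-off function means the two output distributions are harder to tell apart, i.e. stronger privacy.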
CS PhD at Stevens. https://jefffffffu.github.io/
【Paper Share】《Deep Learning with Differential Privacy》~ key ideas of the Moments Accountant
52:00
【Textbook Share & Discussion】《Differential Privacy: From Theory to Practice》- Chapter 1, Chapter 2
01:02:55
【Group Meeting Paper Notes】《User-Level Privacy-Preserving Federated Learning: Analysis and Perform》
42:42
【Paper Share】《Rényi Differential Privacy》- Rényi differential privacy
27:59
【Group Meeting Report】Differential privacy - 《SEMI-SUPERVISED KNOWLEDGE TRANSFER FOR DEEP LEARNING》(PATE)
47:16
【Group Meeting Share】Adaptive differentially private deep learning - 《An Adaptive and Fast Convergent Approach to DP DL》
50:42
【Textbook Share】Laplace mechanism? Gaussian mechanism? Pure differential privacy? Relaxed differential privacy?
18:20
【Group Meeting Report】《Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy》
01:15:37
【Paper Share】Invited talk by a Xidian University PhD student: 《Local and Central DP for Robustness and Privacy in FL》
01:10:51
【Paper Discussion】《Towards Security Threats of Deep Learning Systems: A Survey》
30:50
【Group Meeting Share】《The Privacy Blanket of the Shuffle Model》~ the privacy blanket
34:55
【Paper Discussion】《Learning Differentially Private Language Models》~ client-level FL-DP
45:19
【Paper Discussion】《Distributed Gaussian Differential Privacy Via Shuffling》
22:32
【Group Meeting Share】《Gaussian Differential Privacy》
37:31
【Paper Share】《Locally Differentially Private Protocols for Frequency Estimation》
55:37
【Study Share】《本地化差分隐私综述》(a survey of local differential privacy) - LDP
02:14:32
【Paper Report】Invited talk by Zhejiang University PhD student Feng Haozhe (Zhihu influencer "捡到一束光"): 《KD3A: 一种满足隐私保护要求的去中心化无监督域适应范式》(KD3A: a decentralized unsupervised domain adaptation paradigm meeting privacy-protection requirements) [ICML 2021]
31:25