The Impact of Differential Privacy on Group Disparity Mitigation
Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer-reviewed
Standard
The Impact of Differential Privacy on Group Disparity Mitigation. / Petren Bach Hansen, Victor; Tejaswi Neerkaje, Atula; Sawhney, Ramit; Flek, Lucie; Sogaard, Anders.
Proceedings of the Fourth Workshop on Privacy in Natural Language Processing. Association for Computational Linguistics, 2022.
RIS
TY - GEN
T1 - The Impact of Differential Privacy on Group Disparity Mitigation
AU - Petren Bach Hansen, Victor
AU - Tejaswi Neerkaje, Atula
AU - Sawhney, Ramit
AU - Flek, Lucie
AU - Sogaard, Anders
PY - 2022
Y1 - 2022
N2 - The performance cost of differential privacy has, for some applications, been shown to be higher for minority groups; fairness, conversely, has been shown to disproportionally compromise the privacy of members of such groups. Most work in this area has been restricted to computer vision and risk assessment. In this paper, we evaluate the impact of differential privacy on fairness across four tasks, focusing on how attempts to mitigate privacy violations and between-group performance differences interact: does privacy inhibit attempts to ensure fairness? To this end, we train (epsilon, delta)-differentially private models with empirical risk minimization and group distributionally robust training objectives. Consistent with previous findings, we find that differential privacy increases between-group performance differences in the baseline setting, but, more interestingly, differential privacy reduces between-group performance differences in the robust setting. We explain this by reinterpreting differential privacy as regularization.
AB - The performance cost of differential privacy has, for some applications, been shown to be higher for minority groups; fairness, conversely, has been shown to disproportionally compromise the privacy of members of such groups. Most work in this area has been restricted to computer vision and risk assessment. In this paper, we evaluate the impact of differential privacy on fairness across four tasks, focusing on how attempts to mitigate privacy violations and between-group performance differences interact: does privacy inhibit attempts to ensure fairness? To this end, we train (epsilon, delta)-differentially private models with empirical risk minimization and group distributionally robust training objectives. Consistent with previous findings, we find that differential privacy increases between-group performance differences in the baseline setting, but, more interestingly, differential privacy reduces between-group performance differences in the robust setting. We explain this by reinterpreting differential privacy as regularization.
U2 - 10.18653/v1/2022.privatenlp-1.2
DO - 10.18653/v1/2022.privatenlp-1.2
M3 - Article in proceedings
BT - Proceedings of the Fourth Workshop on Privacy in Natural Language Processing
PB - Association for Computational Linguistics
T2 - 4th Workshop on Privacy in Natural Language Processing
Y2 - 1 July 2022 through 1 July 2022
ER -
ID: 341493148
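The abstract describes training (epsilon, delta)-differentially private models under both empirical risk minimization and a group distributionally robust objective. The sketch below is a minimal, hypothetical illustration of how such a combination can look in PyTorch: DP-SGD-style per-example gradient clipping with Gaussian noise, plus exponentiated-gradient group weights in the group-DRO style. It is not the authors' implementation; the toy model, hyperparameters, and function name are assumptions made for the example.

```python
# Illustrative sketch only (not the paper's code): one DP-SGD step with
# group-DRO reweighting. Hyperparameters and the toy linear model are
# placeholders; the (epsilon, delta) guarantee would come from a separate
# privacy accountant, which is omitted here.
import torch
import torch.nn as nn

torch.manual_seed(0)

n_features, n_groups = 8, 2
model = nn.Linear(n_features, 2)                  # toy classifier
loss_fn = nn.CrossEntropyLoss(reduction="none")   # keep per-example losses
opt = torch.optim.SGD(model.parameters(), lr=0.1)

clip_norm = 1.0          # per-example gradient clipping bound C
noise_multiplier = 1.1   # noise scale sigma relative to C
group_lr = 0.05          # step size for the group-DRO weights
q = torch.ones(n_groups) / n_groups  # distributionally robust group weights

def dp_group_dro_step(x, y, g):
    """One noisy, clipped SGD step on a batch with group labels g."""
    global q
    # Group-DRO: exponentiated-gradient update of the group weights from
    # the current per-group average losses, then renormalize.
    with torch.no_grad():
        losses = loss_fn(model(x), y)
        group_loss = torch.stack([
            losses[g == k].mean() if (g == k).any() else torch.tensor(0.0)
            for k in range(n_groups)
        ])
        q = q * torch.exp(group_lr * group_loss)
        q = q / q.sum()

    # DP-SGD: clip each example's gradient to norm C, sum, then add noise.
    accum = [torch.zeros_like(p) for p in model.parameters()]
    for i in range(len(x)):
        opt.zero_grad()
        li = q[g[i]] * n_groups * loss_fn(model(x[i:i+1]), y[i:i+1]).mean()
        li.backward()
        total_norm = torch.sqrt(sum(p.grad.norm() ** 2 for p in model.parameters()))
        scale = min(1.0, clip_norm / (total_norm + 1e-12))
        for a, p in zip(accum, model.parameters()):
            a += p.grad * scale
    opt.zero_grad()
    for a, p in zip(accum, model.parameters()):
        noise = torch.normal(0.0, noise_multiplier * clip_norm, size=a.shape)
        p.grad = (a + noise) / len(x)
    opt.step()

# Toy batch: random features, binary labels, and binary group membership.
x = torch.randn(16, n_features)
y = torch.randint(0, 2, (16,))
g = torch.randint(0, n_groups, (16,))
dp_group_dro_step(x, y, g)
```

In this reading, the clipping and noise act much like a regularizer on the update, which is consistent with the abstract's point that differential privacy can be reinterpreted as regularization and can shrink between-group performance differences under the robust objective.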