Algorithmic fairness and privacy are essential pillars of trustworthy machine
learning. Fair machine learning aims to minimize discrimination against
protected groups by, for example, imposing a constraint on models to equalize
their behavior across different groups. Such a constraint can, in turn, change
the influence of individual training data points on the fair model in a
disproportionate way.
We study how this can change the information leakage of the model about its
training data. We analyze the privacy risks of group fairness (e.g., equalized
odds) through the lens of membership inference attacks: inferring whether a
data point is used for training a model. We show that fairness comes at the
cost of privacy, and this cost is not distributed equally: the information
leakage of fair models increases significantly on the unprivileged subgroups,
which are precisely the groups that fair learning is meant to protect. We
further show that the more biased the training data is, the higher the privacy
cost of achieving fairness for the unprivileged subgroups. We provide a
comprehensive empirical analysis for general machine learning algorithms.
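
The abstract does not spell out the fairness criterion it names; in its standard formulation, equalized odds for a binary classifier requires the prediction $\hat{Y}$ to be independent of the protected attribute $A$ conditional on the true label $Y$:

$$\Pr[\hat{Y} = 1 \mid A = 0, Y = y] \;=\; \Pr[\hat{Y} = 1 \mid A = 1, Y = y], \qquad y \in \{0, 1\}.$$

As a concrete illustration of the kind of membership inference attack referenced above, the sketch below implements a simple loss-threshold attack and measures its accuracy per protected subgroup. The model interface, the `threshold`, and the helper names are hypothetical placeholders for illustration, not the paper's actual attack.

```python
import numpy as np

def loss_threshold_attack(model, x, y, threshold):
    """Guess membership: examples whose per-example loss falls below
    the threshold are predicted to be training members.

    `model` (assumed to expose a scikit-learn-style predict_proba)
    and `threshold` are placeholders, not the paper's exact setup.
    """
    probs = model.predict_proba(x)  # shape: (n_samples, n_classes)
    # Cross-entropy loss of each example's true label.
    losses = -np.log(probs[np.arange(len(y)), y] + 1e-12)
    return losses < threshold  # True => predicted "member"

def per_group_attack_accuracy(guess, is_member, group):
    """Attack accuracy within each protected subgroup, so the leakage
    on privileged vs. unprivileged groups can be compared directly."""
    return {g: float(np.mean(guess[group == g] == is_member[group == g]))
            for g in np.unique(group)}
```

Running the same attack against models trained with and without the fairness constraint, and comparing the per-group accuracies, is one way to surface the disparity the abstract describes: under the paper's thesis, attack accuracy on the unprivileged subgroup should rise more for the fair model.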
