In a backdoor attack on a machine learning model, an adversary produces a
model that performs well on normal inputs but outputs targeted
misclassifications on inputs containing a small trigger pattern. Model
compression is a widely used approach for reducing the size of deep learning
models without much accuracy loss, enabling resource-hungry models to be
compressed for use on resource-constrained devices. In this paper, we study the
risk that model compression could provide an opportunity for adversaries to
inject stealthy backdoors. We design stealthy backdoor attacks such that the
full-sized model released by adversaries appears to be free from backdoors
(even when tested using state-of-the-art techniques), but when the model is
compressed it exhibits highly effective backdoors. We show this can be done for
two common model compression techniques — model pruning and model
quantization. Our findings demonstrate how an adversary may be able to hide a
backdoor as a compression artifact, and show the importance of performing
security tests on the models that will actually be deployed, not on their
pre-compressed versions.
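
To make the two compression settings concrete, the following is a minimal sketch of how a downstream user might compress a released full-precision model before deployment, using standard PyTorch pruning and post-training dynamic quantization. The `model` here is a hypothetical stand-in, not the paper's architecture; the point is that the attack described above would only surface in the `pruned` or `quantized` variants, which is why those are the models that should be tested.

```python
# Sketch of the two compression steps the paper considers (pruning and
# quantization), applied by a downstream user. Assumes PyTorch; the model
# below is a hypothetical placeholder for an adversary-released network.
import copy

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Stand-in for the full-sized model released by the adversary.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# 1) Model pruning: zero out 50% of the smallest-magnitude weights per layer.
pruned = copy.deepcopy(model)
for module in pruned.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the pruning permanent

# 2) Model quantization: post-training dynamic quantization to int8 weights.
quantized = torch.quantization.quantize_dynamic(
    copy.deepcopy(model), {nn.Linear}, dtype=torch.qint8
)

# Backdoor/security tests should be run on `pruned` and `quantized` (the
# models actually deployed), not only on the full-precision `model`.
```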
