June 1, 2023



Can’t Steal? Cont-Steal! Contrastive Stealing Attacks Against Image Encoders. (arXiv:2201.07513v2 [cs.CR] UPDATED)

Self-supervised representation learning techniques have been advancing rapidly
to make full use of unlabeled images. They encode images into rich features
that are agnostic to downstream tasks. Behind their revolutionary
representation power, however, the requirements for dedicated model designs and
massive amounts of computational resources expose image encoders to the risk of
model stealing attacks, a cheap way to mimic a well-trained encoder's
performance while circumventing these demanding requirements. Yet conventional
attacks target only supervised classifiers, relying on their predicted labels
and/or posteriors, which leaves the vulnerability of unsupervised encoders
unexplored.
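The conventional attack mentioned above works by querying a black-box victim model and training a surrogate on the returned labels. A minimal NumPy sketch under simplified assumptions (the linear victim, query budget, and training loop are all illustrative, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in black-box victim (hypothetical): the attacker only sees its
# predicted labels, never its parameters.
def victim_classifier(x):
    return (x @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)

# 1. Query the victim on attacker-chosen inputs to build a transfer set.
queries = rng.normal(size=(500, 3))
stolen_labels = victim_classifier(queries)

# 2. Train a surrogate on the stolen (input, label) pairs with an
#    ordinary supervised objective (here: logistic loss, gradient descent).
w = np.zeros(3)
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(queries @ w)))                    # surrogate posteriors
    w -= 0.5 * queries.T @ (p - stolen_labels) / len(queries)   # gradient step

# 3. The surrogate now mimics the victim's decisions on fresh inputs.
test_x = rng.normal(size=(200, 3))
agreement = np.mean((test_x @ w > 0) == (victim_classifier(test_x) > 0))
```

Note that the attacker never needs the victim's architecture or training data, only query access, which is what makes stealing cheap relative to training from scratch.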

In this paper, we first instantiate the conventional stealing attacks against
encoders and demonstrate that encoders are even more vulnerable than downstream
classifiers. To better leverage the rich representations of encoders, we
further propose Cont-Steal, a contrastive-learning-based attack, and validate
its improved stealing effectiveness across various experimental settings. As a
takeaway, we call the community's attention to intellectual property protection
for representation learning techniques, and especially to defenses against
encoder stealing attacks like ours.
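At its core, a contrastive stealing objective of the kind described above can be sketched as an InfoNCE-style loss: the surrogate encoder's embedding of an image is pulled toward the target encoder's embedding of the same image and pushed away from the target's embeddings of other images in the batch. The function below is an illustrative sketch in NumPy, not the paper's implementation; the name and temperature value are assumptions:

```python
import numpy as np

def info_nce_stealing_loss(surrogate_emb, target_emb, temperature=0.1):
    """InfoNCE loss aligning surrogate embeddings with target embeddings.

    Row i of each array is the embedding of image i; the (i, i) pair is
    the positive, all other target rows in the batch act as negatives.
    """
    # Cosine similarity via L2-normalized embeddings.
    s = surrogate_emb / np.linalg.norm(surrogate_emb, axis=1, keepdims=True)
    t = target_emb / np.linalg.norm(target_emb, axis=1, keepdims=True)
    logits = s @ t.T / temperature          # (batch, batch) similarity matrix
    # Log-softmax over each row, then take the diagonal (positive pairs).
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(s))
    return -log_probs[idx, idx].mean()
```

Minimizing this loss over the surrogate's parameters drives its embedding space toward the target encoder's, which is how the rich representations (rather than just labels) are exploited. A surrogate whose embeddings already match the target's incurs a much lower loss than a random one.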