26 June 2023 ECOPY: Data-free model stealing for deep neural network
Weilun Qin, Zijiao Zhang, Yufei Xie, Lin Zhu
Proceedings Volume 12714, International Conference on Computer Network Security and Software Engineering (CNSSE 2023); 127140L (2023) https://doi.org/10.1117/12.2683207
Event: Third International Conference on Computer Network Security and Software Engineering (CNSSE 2023), 2023, Sanya, China
Abstract
Deep neural networks are increasingly widely deployed in the real world, for example through machine learning as a service (MLaaS), which lets model owners host their fully trained models in the cloud for others to use. By paying a fee and submitting an input to the model, customers receive its predictions. Unfortunately, such models are often threatened by model stealing attacks. These attacks use the predictions returned by the original model to train a surrogate model with similar functionality, infringing the intellectual property of the model owner; moreover, white-box adversarial examples crafted against the clone transfer to the original model to a certain extent, causing further, more far-reaching damage. However, model stealing is often ineffective in practice because the original model's training data, or related proxy datasets, cannot be obtained: such data is typically withheld by the model owner for privacy reasons. Against this background, data-free model stealing attacks have emerged, in which a data generator synthesizes inputs to query the model for the purpose of stealing it. In this paper, a data-free model stealing attack framework based on gradient prediction and uniform sample generation is proposed. Experiments show that this method achieves high imitation accuracy within a given query budget. The performance of the surrogate model (0.94x-0.99x) is better than that of other MS attacks that rely on part of the original dataset (such as JBDA, which trains the clone on a subset of the original data plus synthetic data produced by Jacobian-based data augmentation; clone accuracy 0.13x-0.69x) or on a proxy dataset (such as KnockoffNets, which trains the clone on surrogate data drawn from a distribution similar to the original data; clone accuracy 0.52x-0.97x).
Compared with recently proposed data-free MS attacks (such as DFME, the SOTA method that trains the clone on data randomly produced by a generator model; clone accuracy 0.92x-0.99x), it also achieves equal or higher performance. In addition, the success rate of adversarial-example attacks mounted through the surrogate model is higher than that of other data-free black-box attacks.
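The core loop the abstract describes — querying the black-box victim with synthetic inputs and fitting a clone to its soft predictions — can be sketched minimally as follows. This is an illustrative reconstruction, not the authors' ECOPY implementation: the linear softmax victim and clone, the Gaussian-noise "generator," and all hyperparameters are assumptions made to keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 5, 3  # input dimension, number of classes

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Black-box victim: the attacker can only observe its output probabilities.
W_victim = rng.normal(size=(d, k))
def victim(x):
    return softmax(x @ W_victim)

# "Data-free" queries: synthetic inputs drawn from a trivial generator
# (Gaussian noise standing in for a learned generator network).
X_syn = rng.normal(size=(2000, d))
P_victim = victim(X_syn)  # soft labels returned by the victim

# Train the clone to match the victim's soft predictions
# (cross-entropy against soft labels, plain gradient descent).
W_clone = np.zeros((d, k))
lr = 0.5
for _ in range(500):
    P_clone = softmax(X_syn @ W_clone)
    grad = X_syn.T @ (P_clone - P_victim) / len(X_syn)
    W_clone -= lr * grad

# Functional agreement on held-out inputs approximates "clone accuracy".
X_test = rng.normal(size=(1000, d))
agree = np.mean(victim(X_test).argmax(1) == softmax(X_test @ W_clone).argmax(1))
print(f"clone/victim agreement: {agree:.2f}")
```

A full data-free attack additionally trains the generator itself; since the victim is a black box, its gradients with respect to generated inputs are unavailable and must be approximated, which is the role played by the gradient prediction component named in the title.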
© (2023) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Weilun Qin, Zijiao Zhang, Yufei Xie, and Lin Zhu "ECOPY: Data-free model stealing for deep neural network", Proc. SPIE 12714, International Conference on Computer Network Security and Software Engineering (CNSSE 2023), 127140L (26 June 2023); https://doi.org/10.1117/12.2683207
KEYWORDS: Data modeling, Education and training, Statistical modeling, Performance modeling, Homogenization, Adversarial training, Gallium nitride
