Presentation + Paper
7 June 2024 A security analysis of compressed communication in distributed neural networks
Tejas Kannan, Sabrina Dimassimo, Rachel Haga
Abstract
Deep distributed neural networks (DDNNs) use partitioning and data compression to perform neural network inference under the tight resource constraints of edge computing systems. Existing DDNN applications focus on efficient execution without accounting for how these features impact data privacy. In this work, we develop a side-channel attack that exploits the use of compressed communication in DDNN systems. We demonstrate how the size of compressed messages provides information about the DDNN’s results, even when the system uses data encryption. We realize this side-channel as a probabilistic attack that uses message sizes to infer the DDNN’s results with over 2.3× the accuracy of random guessing. In the worst case, our attack discovers over 90% of the DDNN’s outputs. We mitigate this side-channel through a novel defense called dropout stable compression (DRSC), which guarantees fixed-length messages for DDNNs. DRSC acts as a wrapper around lossless compression and delivers an overall compression ratio equal to that of the underlying lossless method. To achieve this behavior, DRSC controls the compressed size through Dropout. DRSC limits the impact of Dropout on the DDNN’s accuracy by dropping values with the smallest magnitude. This design allows DRSC to eliminate the discovered side-channel while incurring negligible overhead and an inference accuracy within 0.3 percentage points of existing systems. With this behavior, DRSC enables resource-constrained systems to gain the benefits of DDNNs without suffering from the privacy issues stemming from data compression.
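The abstract's description of DRSC suggests a simple mechanism: drop the smallest-magnitude activations until the losslessly compressed payload fits a fixed byte budget, then pad so every message an eavesdropper observes has identical length. The following is a minimal sketch of that idea under stated assumptions; the function name `drsc_compress`, the use of `zlib` as the lossless compressor, and the fixed-size parameter are illustrative choices, not the authors' actual implementation.

```python
import zlib
import numpy as np

def drsc_compress(values: np.ndarray, fixed_size: int) -> bytes:
    """Illustrative DRSC-style compression: zero out the smallest-magnitude
    activations until the losslessly compressed payload fits in
    `fixed_size` bytes, then pad so the observable message length is fixed."""
    vals = values.astype(np.float32).copy()
    # Candidate indices ordered from smallest to largest magnitude,
    # so accuracy-critical (large) values are dropped last.
    order = np.argsort(np.abs(vals))
    dropped = 0
    while True:
        payload = zlib.compress(vals.tobytes())
        if len(payload) <= fixed_size:
            # Pad to the fixed length; a zlib stream is self-terminating,
            # so a decompressor can ignore the trailing padding.
            return payload + b"\x00" * (fixed_size - len(payload))
        # Drop the next smallest-magnitude value and retry.
        vals[order[dropped]] = 0.0
        dropped += 1

# Every message has the same on-the-wire size regardless of content.
msg = drsc_compress(np.random.default_rng(0).normal(size=256), fixed_size=600)
assert len(msg) == 600
```

Because zeroed entries compress into long runs, each drop shrinks the payload, so the loop terminates; the fixed observable length is what removes the message-size side-channel.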
Conference Presentation
(2024) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Tejas Kannan, Sabrina Dimassimo, and Rachel Haga "A security analysis of compressed communication in distributed neural networks", Proc. SPIE 13054, Assurance and Security for AI-enabled Systems, 130540F (7 June 2024); https://doi.org/10.1117/12.3022435
KEYWORDS: Education and training, Neural networks, Defense and security, Design, Telecommunications, Data compression, Computing systems
