浅井 裕希
With the development of AI technology in recent years, computationally demanding workloads have been increasing. Split computing (SC), in which a neural network is divided into a lightweight part processed on the device and a heavyweight part processed in the cloud, has therefore been attracting attention. SC offers not only the benefit of distributed processing but also a degree of privacy protection, because intermediate features are transmitted instead of raw data. On the other hand, considering model inversion attacks (MIA), in which a model's input is reconstructed from its output, we speculate that reconstructing the input from intermediate features may pose a greater privacy risk than conventional MIA, which reconstructs the input from the output layer of the trained model. Existing MIAs targeting SC environments merely show that the attack succeeds under restrictive conditions, such as white-box access to the target model, and do not discuss the privacy vulnerability introduced by using SC itself. This study investigates that vulnerability and discusses the danger of MIA in an SC environment. Assuming a black-box target model and the availability of an auxiliary dataset, we examine whether the privacy vulnerability increases by comparing the accuracy of reconstructing the input from the intermediate features at each split layer with the accuracy of reconstructing it from the output layer.
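The sketch below illustrates the setting described above, purely as an assumption-laden example rather than the study's actual implementation: a network split into a device-side head and a cloud-side tail, and a black-box inversion attack that trains an inverse network on an auxiliary dataset to reconstruct inputs from the intermediate features sent at the split point. The architecture, the 1×28×28 input shape, and the inverse network are all illustrative choices, not taken from the paper.

```python
# Minimal sketch (not the study's implementation) of split computing and a
# black-box model inversion attack trained on an auxiliary dataset.
import torch
import torch.nn as nn

# --- Split computing: the full model is divided at a chosen layer ----------
class Head(nn.Module):          # lightweight part run on the device
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.layers(x)   # intermediate features sent to the cloud

class Tail(nn.Module):          # heavyweight part run in the cloud
    def __init__(self, num_classes=10):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 28 * 28, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )
    def forward(self, z):
        return self.layers(z)

# --- Black-box model inversion attack ---------------------------------------
# The attacker cannot read the head's weights; it only observes the
# intermediate features z = head(x) for inputs x drawn from an auxiliary
# dataset assumed to resemble the victim's data distribution.
class InverseNet(nn.Module):    # maps intermediate features back to an input
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.layers(z)

def train_inverse(head, aux_loader, epochs=5, device="cpu"):
    """Train the inverse network on (feature, input) pairs from auxiliary data."""
    inv = InverseNet().to(device)
    opt = torch.optim.Adam(inv.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    head.eval()
    for _ in range(epochs):
        for x, _ in aux_loader:
            x = x.to(device)
            with torch.no_grad():
                z = head(x)      # black-box query: only the output is observed
            x_hat = inv(z)       # attempted reconstruction of the input
            loss = loss_fn(x_hat, x)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return inv
```

Repeating this procedure with the split placed at different layers, and once more with the output layer in place of the intermediate features, yields the layer-wise comparison of reconstruction accuracy that the study describes.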