The sound signals of power transformers are a convenient and reliable data source for fault diagnosis and can be used to monitor operating status in real time. Several deep learning methods for anomalous sound detection are trained on labeled data and can therefore identify only a limited number of fault types. In real-world scenarios, however, the large volume of unlabeled anomalous sounds makes accurate detection difficult for existing methods. To address these challenges, this paper designs a self-training algorithm for anomalous sound detection on supercomputing platforms, which expands the training set with pseudo-labeled data to enhance the detection algorithm's transferability to complex scenarios. Performance is evaluated on the MIMII DG dataset from DCASE 2022 Task 2 and on a local dataset. The results show that the baseline model combined with the self-training algorithm improves AUC significantly over the initial baseline model across scenarios. Moreover, the computing power of the supercomputing platform further compresses the iteration time of self-training, allowing the detection model to adapt quickly to new scenes.
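The core idea of expanding the labeled training set with confidently pseudo-labeled samples can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: the `model_fit`/`model_score` callables, the confidence threshold, and the round limit are all assumptions introduced here for clarity.

```python
# Minimal self-training sketch: iteratively pseudo-label confident unlabeled
# samples and fold them into the training set. All names are illustrative.
import numpy as np


def self_train(model_fit, model_score, X_labeled, y_labeled, X_unlabeled,
               threshold=0.9, max_rounds=3):
    """Return (final_model, X_train, y_train) after self-training rounds.

    model_fit(X, y)        -> trained model
    model_score(model, X)  -> anomaly scores in [0, 1]
    A sample is pseudo-labeled only if its score is confidently high
    (>= threshold, labeled 1) or confidently low (<= 1 - threshold, labeled 0).
    """
    X, y = X_labeled.copy(), y_labeled.copy()
    pool = X_unlabeled.copy()
    for _ in range(max_rounds):
        model = model_fit(X, y)
        if len(pool) == 0:
            break
        scores = model_score(model, pool)
        confident = (scores >= threshold) | (scores <= 1.0 - threshold)
        if not confident.any():
            break  # no confident predictions left; stop expanding
        pseudo_y = (scores[confident] >= threshold).astype(int)
        X = np.vstack([X, pool[confident]])
        y = np.concatenate([y, pseudo_y])
        pool = pool[~confident]  # remove newly labeled samples from the pool
    # retrain once more on the expanded set
    return model_fit(X, y), X, y
```

In practice the scorer would be a deep anomaly detector; here any callable returning calibrated scores in [0, 1] works, and the threshold trades pseudo-label quantity against label noise.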