Repairing highly corrupted speech and images with U-net autoencoders
21 April 2020
Abstract
Recovering data from severe loss and corruption would be useful in a wide variety of civilian and military applications. Highly corrupted data (e.g., speech and images) have been studied less than lightly corrupted data, yet recovering them would benefit applications such as low-light imagery and weak-signal reception in acoustic sensing and radio communication. Unlike milder signal corruptions, strong noise interference may require a more robust approach than simply removing predictable noise, namely actively searching for the expected signal, a problem well suited to machine learning. In this work, we evaluate a variant of the U-net autoencoder neural network topology on the difficult task of denoising highly corrupted images and English speech when the noise floor is 2-10x stronger than the clean signal. We test our methods on corruptions including additive white Gaussian noise and channel dropout.
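As a rough illustration of the setup the abstract describes, the sketch below builds a small U-net-style denoising autoencoder in PyTorch together with a corruption function that applies additive white Gaussian noise at several times the clean-signal level plus channel dropout. It is a minimal sketch under assumptions: the paper's actual variant of the U-net topology, its layer widths and depth, the corruption parameters, and the interpretation of "channel dropout" as zeroing random input channels are not specified in the abstract and are illustrative only.

# Minimal sketch of a U-net-style denoising autoencoder (assumed PyTorch
# implementation; not the authors' exact architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, as in a standard U-net stage."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class UNetDenoiser(nn.Module):
    """Encoder-decoder with skip connections mapping a corrupted image
    (or a speech spectrogram treated as an image) to a clean estimate."""

    def __init__(self, channels=3, base=32):
        super().__init__()
        self.enc1 = conv_block(channels, base)
        self.enc2 = conv_block(base, base * 2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.out = nn.Conv2d(base, channels, kernel_size=1)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        e1 = self.enc1(x)                    # full resolution
        e2 = self.enc2(self.pool(e1))        # 1/2 resolution
        b = self.bottleneck(self.pool(e2))   # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.out(d1)


def corrupt(clean, noise_factor=4.0, dropout_prob=0.5):
    """Apply the two corruption types named in the abstract: additive white
    Gaussian noise with a noise floor several times the signal level, and
    channel dropout (here assumed to mean zeroing random input channels).
    The paper's exact corruption parameters may differ."""
    signal_rms = clean.pow(2).mean().sqrt()
    noisy = clean + noise_factor * signal_rms * torch.randn_like(clean)
    drop = (torch.rand(clean.shape[0], clean.shape[1], 1, 1) > dropout_prob).float()
    return noisy * drop


# Toy training step on random tensors, standing in for image batches or
# spectrogram patches of English speech.
model = UNetDenoiser(channels=3)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.rand(8, 3, 64, 64)
optimizer.zero_grad()
loss = F.mse_loss(model(corrupt(clean)), clean)
loss.backward()
optimizer.step()

In this kind of setup the network is trained on (corrupted, clean) pairs with a reconstruction loss, so at inference it actively reconstructs the expected signal rather than only subtracting predictable noise, which is the distinction the abstract draws for heavily corrupted inputs.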
Conference Presentation
© (2020) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Michael S. Lee, John S. Hyatt, and Samuel N. Edwards "Repairing highly corrupted speech and images with U-net autoencoders", Proc. SPIE 11413, Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114130E (21 April 2020); https://doi.org/10.1117/12.2552796
KEYWORDS: Denoising, Interference (communication), Convolution, RGB color model, Machine learning, Image denoising, Image quality