In underwater exploration, Autonomous Underwater Vehicles (AUVs) face challenges because the aquatic environment degrades optical sensing, resulting in sub-optimal data acquisition. To overcome this, we propose a novel solution utilizing a Generative Adversarial Network (GAN) model. Rooted in the U-Net architecture, our model processes the low-quality AUV camera feed and generates enhanced representations of the underwater scene. The discriminator evaluates local image patches, capturing high-frequency properties with fewer parameters and achieving a 15% improvement in model accuracy. This approach facilitates real-time preprocessing in visually guided underwater robot autonomy pipelines, overcoming challenges associated with underwater visibility.
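As a rough illustration of why a patch-level discriminator uses fewer parameters than one that classifies the whole image, the sketch below compares parameter counts for a hypothetical PatchGAN-style convolutional stack against a dense whole-image discriminator; the layer widths, kernel size, and image resolution are assumptions for illustration, not values taken from this work.

```python
# Sketch (assumed layer sizes): parameter count of a patch-level
# convolutional discriminator vs. a dense whole-image discriminator.

def conv_params(k, c_in, c_out):
    # weights (k*k*c_in*c_out) plus one bias per output channel
    return k * k * c_in * c_out + c_out

# Hypothetical 70x70-receptive-field patch discriminator:
# kernel 4, channel widths 3 -> 64 -> 128 -> 256 -> 512 -> 1
widths = [3, 64, 128, 256, 512, 1]
patch_disc = sum(conv_params(4, a, b) for a, b in zip(widths, widths[1:]))

# Dense discriminator on a 256x256 RGB image with one 1024-unit hidden layer
h, w, c, hidden = 256, 256, 3, 1024
dense_disc = (h * w * c) * hidden + hidden + hidden * 1 + 1

print(f"patch discriminator params: {patch_disc:,}")   # ~2.8M
print(f"dense discriminator params: {dense_disc:,}")   # ~201M
```

Because each convolutional filter is reused across every spatial location, the patch discriminator's cost depends only on kernel size and channel widths, not on image resolution, which is what makes it attractive for capturing high-frequency texture cheaply.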