We extend single-photon LiDAR capabilities with a hardware-accelerated inpainting transformer that reconstructs unobserved regions of the image plane while communicating with the beam-steering hardware. We apply this to 3D time-of-flight (ToF) reconstruction in scenes where objects obstruct each other's line of sight: ToF histograms distinguish objects in the foreground from those in the background, and their overlap serves as the dynamic mask that the model reconstructs. We also apply the approach to unorthodox scanning patterns, such as Lissajous and spiral trajectories, which are inherently sparse. Lastly, we are developing an AI-driven MEMS system that intelligently downsamples the image plane based on foreground masks, reducing sampling redundancy. We believe this approach will benefit imaging and sensing of dynamic targets with sparse single-photon data across many domains.
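As a minimal illustrative sketch (not the authors' actual pipeline), the histogram-based foreground/background separation could work as follows: each pixel's ToF histogram is reduced to its dominant return bin, pixels are split into near and far layers by an assumed depth threshold, and pixels where a late return hides behind a foreground object are flagged as the dynamic mask for inpainting. All array shapes, thresholds, and the synthetic scene here are hypothetical.

```python
import numpy as np

# Hypothetical example: split a per-pixel time-of-flight histogram cube
# into foreground/background layers, then flag overlap pixels to inpaint.

rng = np.random.default_rng(0)

H, W, B = 32, 32, 64             # image plane height/width, histogram bins
hist = rng.poisson(0.1, (H, W, B)).astype(float)  # dark-count-like noise

# Synthetic scene: a near object (early bins) in front of a far wall (late bins).
hist[8:24, 8:24, 10] += 20       # foreground return around bin 10
hist[:, :, 50] += 15             # background return around bin 50

peak_bin = hist.argmax(axis=-1)  # per-pixel dominant return
split = B // 2                   # assumed depth threshold between layers
foreground = peak_bin < split

# Pixels where a secondary late return sits behind the foreground object:
late_counts = hist[:, :, split:].max(axis=-1)
dynamic_mask = foreground & (late_counts > 5)  # occluded region to reconstruct

print(dynamic_mask.sum(), "occluded pixels flagged for reconstruction")
```

In practice the threshold between layers would come from the scene (e.g. peak finding in the aggregate histogram) rather than a fixed midpoint, and the flagged mask would be handed to the inpainting model.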