Background objects occluded in some sub-apertures of a light-field (LF) camera can be seen from other sub-apertures; consequently, occluded surfaces can be reconstructed from LF images. Existing LF-based foreground occlusion removal approaches usually extract only the complementary background information among different sub-aperture images to obtain an occlusion-free center view, and therefore cannot reconstruct visually realistic and semantically plausible pixels for the occluded areas. In this paper, we propose a simple yet effective LF foreground occlusion removal method using a dual-pathway fusion network, a convolutional encoder-decoder architecture. In our method, we first stack all sub-aperture images (SAIs) into an input tensor and feed it to the encoder to aggregate information across SAIs. In particular, in addition to the pathway that synthesizes the center view, we set a second pathway to predict the foreground occlusion. By fusing the outputs of these two pathways, we not only preserve more information belonging to the occluded surfaces but also fill the occluded regions with better visual quality. Experimental results show that our method outperforms state-of-the-art approaches and produces more realistic occlusion-free views. Our source code will be made available.
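The fusion of the two pathway outputs can be sketched as a per-pixel blend: where the predicted occlusion probability is high, the result relies on background evidence aggregated from the other sub-apertures; elsewhere, it keeps the synthesized center view. The following NumPy sketch illustrates this idea only; the function and variable names (`fuse_pathways`, `occ_prob`, `warped_bg`) are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def fuse_pathways(center_view, occ_prob, warped_bg):
    """Blend the two pathway outputs (illustrative sketch, not the
    paper's network): where the foreground-occlusion probability is
    high, use background evidence from other sub-apertures; elsewhere,
    keep the synthesized center view."""
    occ_prob = occ_prob[..., np.newaxis]  # broadcast mask over channels
    return (1.0 - occ_prob) * center_view + occ_prob * warped_bg

# Toy example: 4x4 RGB views with the left half fully occluded.
H = W = 4
center_view = np.zeros((H, W, 3))   # pathway 1: synthesized center view
warped_bg = np.ones((H, W, 3))      # hypothetical background from other SAIs
occ_prob = np.zeros((H, W))         # pathway 2: occlusion probability map
occ_prob[:, : W // 2] = 1.0         # mark left half as occluded

fused = fuse_pathways(center_view, occ_prob, warped_bg)
```

In the occluded left half, the fused result takes the background values; the right half keeps the center-view values unchanged.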