The spatial-angular coupling of light field data is fundamental to scene disparity estimation. Occlusion, smooth regions, and noise are the major challenges in light field disparity estimation. Exploiting the geometric structure of two-plane-parameterized light field data, we propose a novel multi-cost-loss light field disparity estimation network consisting of three modules. First, to fully exploit the occlusion information contained in the light field, we divide the light field data into four subsets, and a weight-sharing network produces four initial disparity maps that capture different occlusion situations. Then, the gradient information of the center-view sub-aperture image is used to select the credible disparity from the initial disparity maps. Finally, a convolutional neural network further improves the robustness of the merged disparity map in smooth and noisy regions while preserving the structural information of the scene. Experiments on both synthetic and real datasets show that the proposed method yields higher-precision disparity.
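The gradient-guided merging of the four initial disparity maps can be sketched as follows. This is an illustrative NumPy sketch, not the paper's exact selection rule: it assumes the four subsets correspond to left/right/top/bottom view groups and that, near a strong edge in the center view, the disparity estimated from the subset on the un-occluded side of the edge is the more credible one. The function name, the subset ordering, and the sign convention for choosing a side are all assumptions for illustration.

```python
import numpy as np

def select_credible_disparity(disp_maps, center_view):
    """Illustrative sketch of gradient-guided disparity selection.

    disp_maps   : list of four HxW initial disparity maps, assumed ordered
                  [left, right, top, bottom] by view subset (assumption).
    center_view : HxW grayscale center sub-aperture image.
    Returns an HxW merged disparity map.
    """
    gy, gx = np.gradient(center_view.astype(np.float64))

    # Assumed rule: where the horizontal gradient dominates, choose between
    # the left/right subsets; otherwise choose between top/bottom subsets.
    horiz = np.abs(gx) >= np.abs(gy)

    # Within each pair, use the gradient sign to pick the side assumed to be
    # un-occluded (sign convention is an assumption of this sketch).
    left_or_top = np.where(horiz, gx, gy) >= 0
    idx = np.where(horiz,
                   np.where(left_or_top, 0, 1),
                   np.where(left_or_top, 2, 3))

    stacked = np.stack(disp_maps)                       # (4, H, W)
    return np.take_along_axis(stacked, idx[None], axis=0)[0]
```

In the paper's pipeline, the map produced by such a per-pixel selection would then be passed to the refinement CNN, which smooths flat and noisy regions while preserving scene structure.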