Light field (LF) imaging, which captures the spatial and angular information of light rays in a single shot, has received increasing attention. However, the well-known LF spatio-angular trade-off restricts many applications of LF imaging. To alleviate this problem, this paper puts forward a dual-level LF reconstruction network that improves LF angular resolution from sparsely sampled LF inputs. Instead of using a 2D or 3D LF representation in the reconstruction process, we propose an LF directional EPI volume representation to synthesize the full LF. The proposed representation encourages interaction between the spatial and angular dimensions during convolution, which benefits the recovery of lost texture details in synthesized sub-aperture images (SAIs). To extract the high-dimensional geometric features of the angular mapping from low-angular-resolution inputs to the high-angular-resolution full LF, a dual-level deep network is introduced. It consists of an SAI synthesis sub-network and a detail refinement sub-network, which constrain the LF reconstruction at two levels (i.e., from coarse to fine). The model is evaluated on several real-world LF datasets, and extensive experiments show that it outperforms state-of-the-art methods and achieves better perceptual quality in the reconstructed SAIs.
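The coarse-to-fine idea described above can be sketched in a minimal, hypothetical form. This is not the paper's actual network: the names (`coarse_synthesis`, `refine`) and the use of plain linear interpolation along one angular axis, followed by a stand-in residual correction, are illustrative assumptions only; the paper instead learns both stages with convolutional sub-networks operating on directional EPI volumes.

```python
import numpy as np

# Hypothetical sketch of a coarse-to-fine LF angular upsampling pipeline.
# The LF is stored as a 4D array (angular u, angular v, height, width).

def coarse_synthesis(sparse_lf, factor):
    """Coarse stage: synthesize new SAIs by linear interpolation along
    the angular u axis (a stand-in for the SAI synthesis sub-network)."""
    u, v, h, w = sparse_lf.shape
    u_dense = (u - 1) * factor + 1
    # Positions of the dense angular grid in the sparse grid's coordinates.
    pos = np.linspace(0.0, u - 1, u_dense)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, u - 1)
    frac = (pos - lo)[:, None, None, None]
    return (1 - frac) * sparse_lf[lo] + frac * sparse_lf[hi]

def refine(coarse_lf, residual_fn):
    """Fine stage: add a residual correction (a stand-in for the
    detail refinement sub-network, which would recover texture)."""
    return coarse_lf + residual_fn(coarse_lf)

# Usage: a 2x2 grid of 8x8 SAIs, upsampled 3x along u, then "refined"
# with a zero residual (a real model would predict this residual).
sparse = np.random.rand(2, 2, 8, 8)
dense = refine(coarse_synthesis(sparse, 3), lambda x: np.zeros_like(x))
```

The dual-level constraint in the paper corresponds to supervising both stages: the coarse output and the refined output are each compared against the ground-truth dense LF.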