A single light field (LF) imaging system tends to have a relatively small field of view (FoV). LF stitching can expand the imaging range and capture more scene information, which is especially valuable for computer vision applications that require global environmental perception. Existing methods that match 2D feature points across sub-aperture images (SAIs) to stitch LFs destroy the spatial and angular consistency of the stitched LF because they do not consider the connection between views. In this paper, a novel LF stitching method is proposed; its key innovation is to replace 2D feature points with LF feature points. The LF feature points we choose are Harris feature points detected in a scale-disparity space built from Fourier disparity layers (FDL) and described with a circular gradient histogram descriptor. Each selected LF feature point carries the coordinates of the feature point on the central SAI, its scale, and its disparity. Based on the disparity values and the feature points on the central SAI, the corresponding feature points on every SAI can be obtained through a disparity transformation followed by a color-difference threshold, as sketched below. For each pair of SAIs, a stitching method based on line-guided warping and line-point constraints is then applied to ensure correct alignment of the image content. Experimental results show that LF feature points can effectively stitch 4D LFs and preserve the spatial and angular consistency of the stitched LF, compared with stitching corresponding SAIs independently using 2D feature points.
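The disparity-transformation step can be illustrated with a minimal sketch. The function below is a hypothetical illustration, not the authors' code: it assumes a 4D LF stored as an array `lf[u, v, y, x, c]`, propagates a central-SAI feature point `(x, y)` with disparity `d` to the SAI at angular offset `(du, dv)`, and rejects the match when the color difference exceeds a threshold.

```python
import numpy as np

def propagate_feature(lf, u_c, v_c, x, y, d, du, dv, color_thresh=10.0):
    """Project a central-SAI feature point onto the SAI at angular
    offset (du, dv) using its disparity d, then verify the match
    with a color-difference threshold (hypothetical sketch)."""
    # Disparity transformation: a point with disparity d shifts
    # linearly with the angular offset between views.
    x2 = x + d * du
    y2 = y + d * dv
    xi, yi = int(round(x2)), int(round(y2))
    H, W = lf.shape[2], lf.shape[3]
    if not (0 <= yi < H and 0 <= xi < W):
        return None  # projected outside the target SAI
    # Color-difference check against the central SAI; a large
    # difference suggests occlusion or a mismatch.
    c_ref = lf[u_c, v_c, int(round(y)), int(round(x))].astype(np.float64)
    c_tgt = lf[u_c + du, v_c + dv, yi, xi].astype(np.float64)
    if np.linalg.norm(c_ref - c_tgt) > color_thresh:
        return None
    return (x2, y2)
```

Applying this to every detected LF feature point yields a dense set of per-view correspondences while discarding points that are occluded in a given SAI.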
KEYWORDS: Diffusion, Data modeling, RGB color model, Image processing, Mathematical optimization, Education and training, Visualization, Deep learning, Visual process modeling, Denoising
Disparity estimation is crucial for light field applications, yet improving its accuracy in occluded areas, textureless regions, and on non-Lambertian surfaces remains an open problem. Disparity estimation methods generally consist of two stages: initial disparity estimation and disparity optimization. Disparity optimization methods can be divided into guided-filtering-based and deep-learning-based methods. The first type locates disparity boundaries using the edge information of the RGB image and constrains the remaining regions with a piecewise-smooth prior. The second type learns the mapping from the light field to the ground-truth disparity from labeled datasets and can produce more accurate disparity maps. However, the prior the first type assumes for disparity maps is not accurate enough and is strongly influenced by the RGB image, while the second type depends on labeled datasets and generalizes poorly to data collected from different systems. In recent years, the ability of generative models to mine and represent prior information in data has become increasingly prominent. This paper proposes a disparity optimization method based on a conditional diffusion model. The method learns prior information about disparity maps from existing public datasets and uses a conditional diffusion model to generate a more accurate disparity map, conditioned on the initial disparity map estimated from the light field; a minimal sketch of such a conditioned sampling step follows. Experimental results show that the scene-disparity prior learned by this method is more comprehensive than a piecewise-smooth assumption, is not affected by the RGB image, and generalizes better.
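As a hedged illustration of how conditioning on the initial disparity might enter the sampling loop, the sketch below implements one standard DDPM reverse step in which the denoiser `eps_model` receives the noisy disparity concatenated channel-wise with the initial estimate. The network, noise schedule, and conditioning mechanism here are assumptions for illustration, not the paper's implementation.

```python
import torch

@torch.no_grad()
def ddpm_reverse_step(eps_model, x_t, cond, t, betas, alphas_cumprod):
    """One DDPM reverse step conditioned on the initial disparity map
    `cond` (minimal sketch; eps_model and the schedule are assumed)."""
    beta_t = betas[t]
    alpha_t = 1.0 - beta_t
    alpha_bar_t = alphas_cumprod[t]
    # Condition by channel-wise concatenation of the noisy sample
    # x_t and the initial disparity estimate.
    eps = eps_model(torch.cat([x_t, cond], dim=1), t)
    # Standard DDPM posterior mean with variance beta_t.
    mean = (x_t - beta_t / torch.sqrt(1.0 - alpha_bar_t) * eps) / torch.sqrt(alpha_t)
    if t == 0:
        return mean
    noise = torch.randn_like(x_t)
    return mean + torch.sqrt(beta_t) * noise
```

Iterating this step from pure noise down to t = 0, with the same initial disparity map supplied as `cond` throughout, produces the refined disparity sample.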
This paper presents a neural network designed for light field (LF) disparity estimation. We improve the network's ability to exploit spatial and geometric information in light field data by (1) incorporating positional encoding and (2) adding edge attention mechanisms (both components are sketched below). The positional encoding helps the network decipher the 3D structure of scenes, which is crucial for accurate LF disparity estimation, while edge attention directs the network to prioritize edge details, enabling more precise disparity maps. Edge attention also promotes global consistency of the disparity estimates, particularly in areas with prominent object edges and limited texture, where it reduces estimation uncertainty. The attention mechanism further refines the features from each view selectively, boosting the accuracy of disparity estimation. Experiments demonstrate our model's improved accuracy, underscoring the effectiveness of our approach in enhancing LF disparity estimation techniques. The proposed method not only enhances performance but also streamlines the network architecture, making it more scalable and suitable for diverse computer vision scenarios.
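To make the two additions concrete, here is a toy PyTorch sketch, an assumption-laden illustration rather than the paper's network: a 2D sinusoidal positional encoding added to feature maps, and an edge-attention gate derived from Sobel gradients of the center view that re-weights features near object boundaries.

```python
import torch
import torch.nn.functional as F

def positional_encoding_2d(h, w, channels):
    """Sinusoidal 2D positional encoding, shape (1, channels, h, w)."""
    assert channels % 4 == 0
    c4 = channels // 4
    freq = 1.0 / (10000 ** (torch.arange(c4) / c4))        # (c4,)
    ys = torch.arange(h).float()[:, None] * freq[None, :]  # (h, c4)
    xs = torch.arange(w).float()[:, None] * freq[None, :]  # (w, c4)
    pe_y = torch.cat([ys.sin(), ys.cos()], dim=1)          # (h, 2*c4)
    pe_x = torch.cat([xs.sin(), xs.cos()], dim=1)          # (w, 2*c4)
    pe = torch.cat([pe_y[:, None, :].expand(h, w, -1),
                    pe_x[None, :, :].expand(h, w, -1)], dim=2)
    return pe.permute(2, 0, 1).unsqueeze(0)                # (1, C, h, w)

def edge_attention(center_view):
    """Edge gate in [0, 1] from Sobel gradients of the center SAI
    (grayscale tensor of shape (B, 1, H, W))."""
    kx = torch.tensor([[[[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]]])
    ky = kx.transpose(2, 3)
    gx = F.conv2d(center_view, kx, padding=1)
    gy = F.conv2d(center_view, ky, padding=1)
    mag = torch.sqrt(gx ** 2 + gy ** 2)
    return mag / (mag.max() + 1e-8)  # normalized attention map

# Usage sketch inside a forward pass (names are hypothetical):
#   feats = feats + positional_encoding_2d(H, W, C)
#   feats = feats * (1 + edge_attention(center_gray))
```

Gating with `1 + attention` rather than the raw map leaves smooth regions untouched while amplifying features at edges, one simple way to realize the described behavior.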
The spatial-angular coupling of light field data is fundamental to scene disparity estimation, and occlusion, smoothing, and noise are its major challenges. Based on the special geometric structure of bi-plane-parameterized light field data, we propose a novel multi-cost-loss light field disparity estimation network consisting of three modules. First, to fully exploit the occlusion information contained in the light field, we divide the light field data into four subsets, and a weight-sharing network produces four initial disparity maps that embody different occlusion situations. Then, the gradient information of the center-view sub-aperture image is used to select credible disparity values from the initial maps (one plausible selection rule is sketched after this abstract). Finally, a convolutional neural network further improves the robustness of the merged disparity map in smooth and noisy regions while preserving the structural information of the scene. Experimental results on both synthetic and real datasets show that the proposed method obtains higher-precision disparity.
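The gradient-guided merging step could look like the following NumPy sketch. The fusion rule here is one plausible interpretation, not the paper's exact criterion: in smooth regions the four candidates are averaged, while at high-gradient (occlusion-prone) pixels a robust per-pixel median is taken.

```python
import numpy as np

def merge_disparities(candidates, center_gray, grad_thresh=0.05):
    """Fuse four subset disparity maps using center-view gradients
    (illustrative rule only; the paper's criterion may differ).

    candidates : array (4, H, W) of initial disparity maps
    center_gray: array (H, W), center SAI intensities in [0, 1]
    """
    gy, gx = np.gradient(center_gray)
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)
    edge = grad_mag > grad_thresh             # occlusion-prone pixels
    mean_d = candidates.mean(axis=0)          # smooth regions: average
    median_d = np.median(candidates, axis=0)  # edges: robust pick
    return np.where(edge, median_d, mean_d)
```

The merged map would then be passed to the final convolutional module for refinement in smooth and noisy regions.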