Obtaining accurate and noise-free three-dimensional (3D) reconstructions of real-world scenes has grown in importance in recent decades. In this paper, we propose a novel strategy for reconstructing a 3D point cloud of an object from a single 4D light field (LF) image based on the transformation of point-plane correspondences. Given a 4D LF image as input, we first estimate the depth map using point correspondences between sub-aperture images. We then apply histogram equalization and histogram stretching to enhance the separation between depth planes; the main aim of this step is to increase the distance between adjacent depth layers and thereby enhance the depth map. Next, we detect edge contours of the original image using fast Canny edge detection and linearly combine the result with that of the previous steps. Following this combination, by transforming the point-plane correspondences, we obtain the 3D structure of the point cloud. The proposed method avoids the feature extraction, segmentation, and occlusion-mask extraction required by other methods, and as a result it reliably mitigates noise. We tested our method on synthetic and real-world image databases. To verify its accuracy, we compared our results with two state-of-the-art algorithms, using the level of detail (LOD) to compare the number of points needed to describe an object. The results showed that our method achieved the highest level of detail among the methods compared.
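To make the depth-enhancement and edge-combination steps concrete, the following is a minimal sketch in Python using OpenCV. The function name, the Canny thresholds, and the blending weight alpha are illustrative assumptions, not the authors' exact implementation.

```python
import cv2
import numpy as np

def enhance_depth_and_edges(depth_map, image, alpha=0.7):
    """Hypothetical sketch of the depth-enhancement step described above.

    depth_map : float array in [0, 1], estimated from sub-aperture images
    image     : uint8 grayscale central sub-aperture view (same size)
    alpha     : assumed blending weight between enhanced depth and edges
    """
    # Rescale depth to 8 bits so OpenCV's histogram tools can be applied.
    depth_u8 = np.clip(depth_map * 255.0, 0, 255).astype(np.uint8)

    # Histogram equalization spreads depth values over the full range,
    # increasing the separation between adjacent depth layers.
    depth_eq = cv2.equalizeHist(depth_u8)

    # Histogram stretching to the full 0-255 range.
    lo, hi = depth_eq.min(), depth_eq.max()
    depth_stretched = ((depth_eq - lo) * (255.0 / max(hi - lo, 1))).astype(np.uint8)

    # Canny edge detection on the original image; thresholds are assumptions.
    edges = cv2.Canny(image, 50, 150)

    # Linear combination of the enhanced depth map and the edge map.
    return cv2.addWeighted(depth_stretched, alpha, edges, 1.0 - alpha, 0.0)
```

From the combined map, each pixel and its depth value can then be back-projected into 3D via the point-plane correspondence transformation described in the abstract.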
Visual experience of surface properties relies on accurately attributing encoded luminance variations (e.g., edges and contours) to one of several potential environmental causes. We examined the role of differences in local shading direction across sharp contours in (i) identifying occlusion boundaries and (ii) perceiving the depth layout of adjacent surfaces. We used graphical rendering to control the orientation of a simulated light source, and hence the shading direction, between adjacent surface regions that met at a common edge. We call the difference in shading direction across the edge the delta shading angle. We found that delta-shaded edges looked like occluding boundaries. We also found that the perceived figure-ground organisation of the adjacent surface regions depended on an assumed light-from-above prior: shaded regions experienced as convex surfaces illuminated from above were perceived as occluding surfaces in the foreground. We computed an image-based measure of delta shading using the difference in local shading direction (the orientation field) and found that this model could reliably account for observer judgments of surface occlusion, better than the local (in)coherence in the orientation of isophotes across the edge alone. However, additional information from the co-alignment of isophotes relative to the edge is necessary to explain figure-ground distinctions across a broad class of occlusion events. We conclude that both local and global measures of shading direction are needed to explain perceived scene organisation, and material appearance more generally.
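As a rough illustration of the image-based measure described above, the Python sketch below estimates the local shading direction (orientation field) from smoothed luminance gradients and computes a delta shading angle between two regions that meet at a common edge. The function names, the Gaussian smoothing scale, and the boolean region masks are assumptions for illustration; this is not the authors' exact model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def shading_orientation(luminance, sigma=2.0):
    """Estimate the local shading direction from smoothed luminance gradients.

    Returns an orientation field in [0, pi), since shading/isophote
    orientation is defined modulo 180 degrees.
    """
    gy = gaussian_filter(luminance, sigma, order=(1, 0))  # d/dy
    gx = gaussian_filter(luminance, sigma, order=(0, 1))  # d/dx
    return np.mod(np.arctan2(gy, gx), np.pi)

def delta_shading_angle(luminance, mask_a, mask_b, sigma=2.0):
    """Difference in mean shading direction between two regions (mask_a,
    mask_b are hypothetical boolean masks of the surfaces adjoining an edge).
    """
    theta = shading_orientation(luminance, sigma)
    # Circular means with period pi (angles doubled, averaged, then halved).
    mean_a = 0.5 * np.angle(np.mean(np.exp(2j * theta[mask_a])))
    mean_b = 0.5 * np.angle(np.mean(np.exp(2j * theta[mask_b])))
    # Smallest angular difference, again modulo 180 degrees.
    d = abs(mean_a - mean_b)
    return min(d, np.pi - d)
```

The delta shading angle returned here would then be compared against observer judgments of occlusion and figure-ground assignment, as in the study summarised above.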