KEYWORDS: Cameras, Thermal imaging cameras, 3D modeling, Profilometers, 3D projection, Projection systems, Visible radiation, Thermography, Point clouds, 3D image processing
Conventional fringe projection profilometers use cameras and projectors operating in the visible spectrum. However, some applications require profilometers with a complementary thermal camera for the infrared spectrum. Since the point cloud is computed from pixel correspondences between the visible camera-projector pair, the texture in the visible spectrum is obtained by directly assigning the color of each image pixel to its corresponding point in the cloud. Obtaining the texture from the thermal camera, however, is not straightforward because no such pixel-point correspondences exist. In this paper, a simple interpolation-based method for determining the texture of the reconstructed objects is proposed. The theoretical principles are reviewed, and an experimental verification is conducted using a visible-thermal fringe projection profilometer. This work provides a helpful framework for three-dimensional data fusion in advanced multi-modal profilometers.
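As a rough illustration of this kind of thermal texture mapping (not the authors' exact method), the sketch below projects each reconstructed 3D point into the thermal camera and bilinearly interpolates the thermal image at the resulting sub-pixel location. The thermal intrinsics K_t and extrinsics R_t, t_t are assumed to come from a prior calibration of the visible-thermal pair; all names are illustrative.

```python
import numpy as np

def thermal_texture(points, thermal_img, K_t, R_t, t_t):
    """Assign a thermal texture value to each 3D point.

    points      : (N, 3) cloud expressed in the visible-camera frame
    thermal_img : (H, W) thermal image
    K_t, R_t, t_t : thermal camera intrinsics and extrinsics (assumed known)
    """
    # Transform points into the thermal camera frame and apply the pinhole model.
    p_cam = points @ R_t.T + t_t            # (N, 3)
    uv = p_cam[:, :2] / p_cam[:, 2:3]        # normalized image coordinates
    pix = uv @ K_t[:2, :2].T + K_t[:2, 2]    # sub-pixel coordinates (u, v)

    h, w = thermal_img.shape
    u, v = pix[:, 0], pix[:, 1]
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    du, dv = u - u0, v - v0

    # Clamp so the four neighbors exist; in practice, points projecting
    # outside the thermal image should be masked out instead.
    u0 = np.clip(u0, 0, w - 2)
    v0 = np.clip(v0, 0, h - 2)

    # Bilinear interpolation of the thermal value at (u, v).
    return ((1 - du) * (1 - dv) * thermal_img[v0,     u0] +
            du       * (1 - dv) * thermal_img[v0,     u0 + 1] +
            (1 - du) * dv       * thermal_img[v0 + 1, u0] +
            du       * dv       * thermal_img[v0 + 1, u0 + 1])
```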
Color accuracy is of great importance in various fields, including biomedical applications, cosmetics, and multimedia. Achieving precise color measurements under diverse lighting sources is a persistent challenge. Recent advancements have led to the integration of LED-based digital light processing (DLP) technology into many scanning devices for three-dimensional (3D) imaging, often serving as the primary lighting source. However, such setups are susceptible to color-accuracy issues. Our study examines DLP-based 3D imaging, specifically focusing on the use of hybrid lighting to enhance color accuracy. We present an empirical dataset containing skin tone patches captured under various lighting conditions, including combinations and variations of indoor ambient light. A comprehensive qualitative and quantitative analysis of color differences (ΔE00) across the dataset was performed. Our results support the integration of DLP technology with supplementary light sources to achieve optimal color correction, particularly in skin tone reproduction, which has significant implications for biomedical image analysis and other color-critical applications.
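The color-difference metric used here, CIEDE2000 (ΔE00), can be computed for a captured patch against its reference as in the minimal sketch below, assuming sRGB inputs and using scikit-image's implementation; the function name and file handling are illustrative, not the study's actual pipeline.

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def mean_delta_e00(captured_rgb, reference_rgb):
    """Mean CIEDE2000 difference between a captured patch and its reference.

    Both inputs are float RGB arrays in [0, 1] with shape (H, W, 3).
    """
    lab_cap = rgb2lab(captured_rgb)
    lab_ref = rgb2lab(reference_rgb)
    return float(np.mean(deltaE_ciede2000(lab_ref, lab_cap)))

# A lower mean ΔE00 under a given lighting combination indicates better
# skin tone reproduction for that condition, e.g.:
#   mean_delta_e00(patch_dlp_plus_ambient, patch_reference)
```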
Many water bodies play a crucial role as receivers of several urban basins within a city's water system. These urban basins often face pollution and reduced water flow, as in the case of the Juan Angola channel in the city of Cartagena, Colombia. Current remote sensing strategies using Landsat and Sentinel-2 satellite imagery lack the spatial resolution needed to adequately study such water bodies. In contrast, higher-spatial-resolution data, such as PlanetScope imagery, provide better spatial and temporal detail. Nevertheless, PlanetScope does not offer the same spectral resolution as Landsat and Sentinel-2, requiring further processing to extract relevant information. In this paper, we used PlanetScope satellite images, processed through computer vision techniques, to analyze the evolution of the Juan Angola channel, Laguna del Cabrero, and Chambacú over time. Our approach involved extracting water areas from PlanetScope images and comparing them over different periods. Preliminary findings revealed noticeable variations in the area of the channel due to factors such as rainfall and possible illegal human encroachment, as well as an increase in the level of contamination observed by means of the Normalized Difference Turbidity Index (NDTI). The PlanetScope images enabled a more detailed time-series analysis of the different hydrographic areas, which is particularly pertinent to the Juan Angola channel.
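A minimal sketch of this kind of analysis is shown below, assuming a 4-band PlanetScope GeoTIFF with blue, green, red, and NIR in that band order (verify against the product metadata). It masks water with a simple NDWI threshold and averages NDTI over the masked pixels; the file name, threshold, and band order are assumptions, and the paper's water-extraction step may differ.

```python
import numpy as np
import rasterio

with rasterio.open("planetscope_scene.tif") as src:
    green = src.read(2).astype(float)
    red   = src.read(3).astype(float)
    nir   = src.read(4).astype(float)

eps = 1e-6                                    # avoid division by zero
ndwi = (green - nir) / (green + nir + eps)    # water index (McFeeters NDWI)
ndti = (red - green) / (red + green + eps)    # Normalized Difference Turbidity Index

water_mask = ndwi > 0.0                       # simple threshold; tune per scene
water_area_px = int(water_mask.sum())         # water extent in pixels
mean_ndti = float(ndti[water_mask].mean())    # average turbidity over the water body

print(f"Water pixels: {water_area_px}, mean NDTI: {mean_ndti:.3f}")
```

Repeating this over scenes from different dates yields the time series of water area and turbidity that the comparison between periods relies on.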
Calibrating large-range vision systems like UAV cameras is a complex task that often involves costly setups and the potential for errors due to inaccuracies in target fabrication. Traditional UAV surveying software typically estimates camera parameters alongside ground control points, but this method may lack optimal accuracy. Our study explores an alternative: using out-of-focus camera calibration to improve the reliability and accuracy of drone cameras for surveying. In our approach, the UAV camera is positioned several meters away from a low-cost target to ensure focus. We then calibrate the intrinsic camera parameters using an out-of-focus small calibration target, fixing these parameters before flight. For evaluation, we compare this method against the standard approach of estimating UAV camera parameters with survey imagery. Preliminary results suggest that this out-of-focus method offers a reliable and accurate solution for UAV surveying applications.
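For illustration only, the sketch below shows a pre-flight intrinsic calibration with a planar target using OpenCV, here a symmetric circle grid, whose blob centers are often still detectable under defocus; the paper's actual out-of-focus target and detector may differ, and the pattern size, spacing, and file paths are placeholders.

```python
import glob
import cv2
import numpy as np

PATTERN = (7, 6)        # circle centers per row/column (placeholder)
SPACING = 0.02          # center-to-center spacing in meters (placeholder)

# 3D coordinates of the circle centers on the planar target (z = 0).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SPACING

obj_pts, img_pts = [], []
for path in glob.glob("calib_images/*.jpg"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, centers = cv2.findCirclesGrid(
        gray, PATTERN, flags=cv2.CALIB_CB_SYMMETRIC_GRID)
    if found:
        obj_pts.append(objp)
        img_pts.append(centers)

# Intrinsics (camera matrix K and distortion coefficients) are estimated once
# here and then held fixed during the subsequent survey processing.
ret, K, dist, _, _ = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("RMS reprojection error:", ret)
print("K =\n", K)
```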