The study of land use and land cover (LULC) change is essential for understanding the impact of human activities on the environment. The north of Algeria experiences high rates of LULC change, making it a suitable study area. In this research, the potential of Sentinel-2 attributes for LULC classification in this region is evaluated using a deep learning-based approach. To improve the efficiency of the model, six reflectance-based indices are calculated to highlight the regions of interest. The results are compared with USGS land cover change data and show promising LULC change detection. To verify whether any classes were missed in our LULC classification results, we employed a CNN-based object detection method using high-resolution PlanetScope images. This study demonstrates the potential of Sentinel-2 attributes for accurate LULC classification and change detection in the north of Algeria, which can be useful for monitoring land use patterns and planning sustainable land management practices.
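As a rough illustration of the kind of reflectance-based index used as an extra input feature, the sketch below derives NDVI from two Sentinel-2 bands; the file names, the L2A reflectance scaling, and the choice of NDVI as one of the indices are assumptions for illustration, not the paper's exact pipeline.

```python
# Minimal sketch (not the authors' pipeline): deriving one reflectance-based
# index, NDVI, from Sentinel-2 bands as an extra input feature for a
# deep-learning LULC classifier. File paths and band choices are assumptions.
import numpy as np
import rasterio

def load_band(path):
    """Read a single-band Sentinel-2 GeoTIFF and rescale to reflectance."""
    with rasterio.open(path) as src:
        return src.read(1).astype("float32") / 10000.0  # assumed L2A scaling

red = load_band("T31SDA_B04_10m.tif")   # hypothetical tile/band file names
nir = load_band("T31SDA_B08_10m.tif")

# NDVI = (NIR - Red) / (NIR + Red); a small epsilon avoids division by zero.
ndvi = (nir - red) / (nir + red + 1e-6)

# Stack reflectances and the index into an (H, W, C) feature cube that a
# CNN or pixel-wise classifier could consume alongside the other indices.
features = np.dstack([red, nir, ndvi])
print(features.shape)
```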
Due to the complex and inhomogeneous structure of biological tissues, the analysis of imaging data collected with various optical biopsy methods is often complicated and time consuming. The major challenge is to understand the peculiarities of light propagation and link them with advanced image/data classification pipelines. This presentation considers the application of novel Artificial Intelligence (AI)-based methods to the inverse problem of light transport in scattering media such as human skin.
A spectral image classification pipeline based on Artificial Neural Networks (ANNs) has been developed by implementing and training several configurations of ANN classifiers fitted to the scattering and absorption properties of the tissues. The ANNs have been trained using a further-developed unified Monte Carlo-based computational framework for light transport in scattering media.
The hyperspectral data are acquired at each pixel as a function of time by varying the illumination/detection wavelength and the polarization of light. Nearly real-time chromophore mappings of parameters such as the distributions of melanin, blood vessels and oxygenation, simulations of BSSRDFs, reflectance spectra of human tissues, the corresponding colours and 3D rendering examples of human skin appearance will be presented and compared with exact analytical solutions, phantom studies, traditional diffuse reflectance spectroscopic point measurements and the advanced Spatial Frequency Domain Imaging (SFDI) technique.
Computer simulation and training are accelerated by parallel computing on Graphics Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) and a cloud-based environment. Open-source machine learning frameworks (e.g., TensorFlow) are used to measure and validate each ANN's performance.
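To give a concrete flavour of such a classifier, the sketch below defines and trains a small fully connected network in TensorFlow on stand-in spectral data. The layer sizes, number of classes, and synthetic training set are assumptions for illustration rather than the configuration described above.

```python
# Illustrative sketch only: a small fully connected ANN classifier for
# per-pixel spectral signatures, in the spirit of the pipeline described
# above. Layer sizes, class count and the synthetic training data are
# assumptions, not the authors' configuration.
import numpy as np
import tensorflow as tf

n_wavelengths, n_classes = 64, 4  # assumed spectral bins / tissue classes

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_wavelengths,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Stand-in for Monte Carlo-simulated reflectance spectra with known labels.
x_train = np.random.rand(1000, n_wavelengths).astype("float32")
y_train = np.random.randint(0, n_classes, size=1000)
model.fit(x_train, y_train, epochs=5, batch_size=32, verbose=0)
```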
In the current report, we present further developments of a unified Monte Carlo-based computational framework and explore the potential of emerging deep-learning neural networks for the determination of human skin optical properties. The hyperspectral data are acquired at each pixel as a function of time by varying the illumination/detection wavelength and the polarization of light. Subsequently, the signature of the signal detected within the tissues is estimated by a deep learning algorithm with supervised training based on Monte Carlo modelling and is then fitted to the scattering and absorption properties of the tissue. The algorithm provides estimates of parameters such as the distributions of melanin, blood vessels and oxygenation, as well as assessments of hypervascularization and metabolism, which are particularly critical for the assessment of darkly and lightly pigmented skin lesions including moles, freckles, vitiligo, etc. The results of the simulations are compared with exact analytical solutions, phantom studies and traditional diffuse reflectance spectroscopic point measurements. The computational solution is accelerated by graphics processing units (GPUs) in a cloud-computing environment, providing near-instant access to the results of the analysis.
In the current report we present further developments of a unified Monte Carlo-based computational model and explore hyperspectral modelling of light interaction with volumetrically inhomogeneous scattering tissue-like media. The developed framework utilizes a voxelized representation of the medium and considers spatial/volumetric variations in both structural (e.g., surface roughness) and wavelength-dependent optical properties. We present a detailed description of the algorithms for modelling light-medium interactions and the schemes used for voxel-to-voxel photon packet transitions. Results of the calculation of diffuse reflectance and the Bidirectional Scattering-Surface Reflectance Distribution Function (BSSRDF) are presented. The results of the simulations are compared with exact analytical solutions, phantom studies and measurements obtained by a low-cost experimental system developed in house for acquiring the shape and subsurface scattering properties of objects by means of projecting temporal sequences of binary patterns. The computational solution is accelerated by graphics processing units (GPUs) and is compatible with most standard graphics and computed tomography file formats.
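For readers unfamiliar with photon-packet Monte Carlo, the toy sketch below traces packets through a homogeneous slab and estimates diffuse reflectance. It is a drastically simplified stand-in for the voxelized framework described above: the optical properties are placeholder values and the isotropic scattering and weight-cutoff choices are assumptions made only to keep the illustration short.

```python
# Toy Monte Carlo photon-packet walk in a homogeneous slab -- a drastically
# simplified stand-in for the voxelized framework described above, included
# only to illustrate the basic sampling steps (free path, absorption,
# isotropic scattering). All optical properties are placeholder values.
import numpy as np

rng = np.random.default_rng(0)
mu_a, mu_s = 0.1, 10.0            # absorption / scattering coefficients [1/mm]
mu_t = mu_a + mu_s
thickness = 1.0                   # slab thickness [mm]

def run_photon():
    z, uz, weight = 0.0, 1.0, 1.0
    while True:
        step = -np.log(rng.random()) / mu_t      # sample free path length
        z += uz * step
        if z < 0.0:                              # escaped back: contributes to reflectance
            return weight
        if z > thickness:                        # transmitted through the slab
            return 0.0
        weight *= mu_s / mu_t                    # implicit absorption weighting
        uz = 2.0 * rng.random() - 1.0            # isotropic scattering (toy choice)
        if weight < 1e-4:                        # terminate negligible packets
            return 0.0

n_photons = 20000
reflectance = sum(run_photon() for _ in range(n_photons)) / n_photons
print(f"Estimated diffuse reflectance: {reflectance:.3f}")
```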
We propose a method to extract special objects, such as figures and capital letters, in images of medieval books. Instead of working at the single-pixel level, we use superpixels as the basic classification units for improved time efficiency. More specifically, we classify superpixels into different categories/objects using a bag-of-features approach, in which a superpixel category classifier is trained on the local features of the superpixels of the training images. With the trained classifier, we can assign category labels to the superpixels of a historical document image under test. Finally, special objects can easily be identified and extracted by analyzing the categorization results. Experimental results demonstrate that, compared to state-of-the-art algorithms, our method provides comparable performance for some historical books but greatly outperforms them in terms of generality and computational time.
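The sketch below outlines the general superpixel classification idea: segment a page into superpixels, describe each superpixel with simple features, and label it with a pre-trained classifier. The SLIC segmentation, mean-colour features, SVM classifier, and file names are assumptions standing in for the paper's bag-of-features descriptors and training data.

```python
# Rough sketch of the superpixel classification idea (not the authors' exact
# features or classifier): segment a page image into superpixels, describe
# each superpixel with simple colour statistics, and label it with a
# pre-trained classifier. Feature choice and model are assumptions.
import numpy as np
from skimage.segmentation import slic
from skimage.io import imread
from sklearn.svm import SVC

def superpixel_features(image, segments):
    """Mean RGB per superpixel as a minimal stand-in for local descriptors."""
    feats = []
    for label in np.unique(segments):
        mask = segments == label
        feats.append(image[mask].mean(axis=0))
    return np.array(feats)

# Hypothetical training data: per-superpixel features and category labels
# (e.g. 0 = text, 1 = figure, 2 = capital letter) prepared beforehand.
train_feats = np.random.rand(300, 3)
train_labels = np.random.randint(0, 3, size=300)
clf = SVC(kernel="rbf").fit(train_feats, train_labels)

page = imread("manuscript_page.png")[:, :, :3] / 255.0   # hypothetical file
segments = slic(page, n_segments=500, compactness=10.0)
labels = clf.predict(superpixel_features(page, segments))
```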
We present a new Monte Carlo-based approach to modelling the Bidirectional Scattering-Surface Reflectance Distribution Function (BSSRDF) for accurate rendering of human skin appearance. Variations in both skin tissue structure and the major chromophores are taken into account, corresponding to different ethnic and age groups. The computational solution utilizes HTML5 and is accelerated by graphics processing units (GPUs), and is therefore convenient for practical use on most modern computer-based devices and operating systems. Results of the simulation of human skin reflectance spectra, the corresponding skin colours and examples of 3D face rendering are presented and compared with the results of phantom studies.
The massive digitization of books and manuscripts has converted millions of works that were once only physical into electronic documents. This conversion has made it possible for scholars to study large bodies of work rather than just individual texts, offering new opportunities for scholarship in the humanities. Much previous work on digital collections has relied on optical character recognition and focused on the textual content of books. New work is emerging that analyzes the visual layout and content of books and manuscripts. We describe two digital humanities projects in progress that offer new opportunities for extracting data about the past, along with new challenges for designing systems that let scholars interact with this data. The first project concerns the layout and spectral content of thousands of pages from medieval manuscripts. We present the techniques used to study content variations in sets of similar manuscripts, and to study material variations that may indicate the location of manuscript production. The second project is the analysis of representations in the complete archive of Vogue magazine over 120 years. We present examples of applying computer vision techniques to understanding changes in the representation of women over time.
We consider the design of an inexpensive system for acquiring material models for computer graphics rendering applications in animation, games and conceptual design. To be useful in these applications a system must be able to model a rich range of appearances in a computationally tractable form. The range of appearance of interest in computer graphics includes materials that have spatially varying properties, directionality, small-scale geometric structure, and subsurface scattering. To be computationally tractable, material models for graphics must be compact, editable, and efficient to numerically evaluate for ray tracing importance sampling. To construct appropriate models for a range of interesting materials, we take the approach of separating out directly and indirectly scattered light using high spatial frequency patterns introduced by Nayar et al. in 2006. To acquire the data at low cost, we use a set of Raspberry Pi computers and cameras clamped to miniature projectors. We explore techniques to separate out surface and subsurface indirect lighting. This separation would allow the fitting of simple, and so tractable, analytical models to features of the appearance model. The goal of the system is to provide models for physically accurate renderings that are visually equivalent to viewing the original physical materials.
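The direct/global separation of Nayar et al. (2006) reduces, per pixel, to a max/min computation over images captured under shifted high-frequency binary patterns: with roughly half the projector pixels lit, the direct component is approximately max − min and the global (interreflected plus subsurface) component is approximately 2·min. The sketch below applies that relation to a synthetic image stack; the capture stack and its handling are assumptions, not the system's acquisition code.

```python
# Hedged sketch of the direct/global separation of Nayar et al. (2006) used
# by the system described above: project a set of shifted high-frequency
# binary patterns, then per pixel take the max and min observed radiance.
# With ~50% of projector pixels lit, direct ~= max - min and global ~= 2*min.
import numpy as np

def separate_direct_global(images):
    """images: (N, H, W) stack captured under N shifted binary patterns."""
    stack = np.asarray(images, dtype=np.float64)
    l_max = stack.max(axis=0)
    l_min = stack.min(axis=0)
    direct = l_max - l_min          # directly illuminated component
    global_ = 2.0 * l_min           # interreflection + subsurface component
    return direct, global_

# Example with synthetic data standing in for Raspberry Pi camera captures.
captures = np.random.rand(16, 480, 640)
direct, global_ = separate_direct_global(captures)
```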
Memory colors are the colors recalled in association with familiar objects. While some previous work has introduced this concept to assist digital image enhancement, its basis, i.e., on-screen memory colors, has not been appropriately investigated. In addition, the resulting adjustment methods have not been evaluated from a perceptual point of view. In this paper, we first perform a context-free perceptual experiment to establish the overall distributions of on-screen memory colors for three pervasive objects. Then, we use a context-based experiment to locate the most representative memory colors; at the same time, we investigate the interactions of memory colors between different objects. Finally, we show a simple yet effective application that uses representative memory colors to enhance digital images. A user study is performed to evaluate the performance of our technique.
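One plausible form such an enhancement could take is sketched below: blend the pixels of a detected object region toward a representative memory color in CIELAB. The mask, blend strength, and the placeholder "sky blue" memory-color value are all assumptions for illustration, not the paper's method or measured values.

```python
# Illustrative only: one way an enhancement could use a representative
# memory color -- blend the pixels of a detected object region toward that
# color in CIELAB. The mask, blend weight and the memory-color value
# (a placeholder "sky blue") are all assumptions.
import numpy as np
from skimage.color import rgb2lab, lab2rgb
from skimage.io import imread

def shift_toward_memory_color(rgb, mask, memory_lab, strength=0.3):
    """Blend masked pixels toward a target CIELAB memory color."""
    lab = rgb2lab(rgb)
    lab[mask] = (1.0 - strength) * lab[mask] + strength * np.asarray(memory_lab)
    return np.clip(lab2rgb(lab), 0.0, 1.0)

image = imread("photo.png")[:, :, :3] / 255.0   # hypothetical input image
sky_mask = image[:, :, 2] > 0.6                 # crude stand-in for object detection
enhanced = shift_toward_memory_color(image, sky_mask, memory_lab=[70.0, -5.0, -25.0])
```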
Numerically modeling the interaction of light with materials is an essential step in generating realistic synthetic images. While there have been many studies of how people perceive physical materials, very little work has been done that facilitates efficient numerical modeling. Perceptual experiments and guidelines are needed for material measurement, specification and rendering. For measurement, many devices and methods have been developed for capturing spectral, directional and spatial variations of light/material interactions, but no guidelines exist for the accuracy required. For specification, only very preliminary work has been done to find meaningful parameters for users to search for and to select materials in software systems. For rendering, insight is needed on the perceptual impact of material models when combined with global illumination methods.
For computer graphics applications, capturing the appearance parameters of objects (reflectance, transmittance and small-scale surface structures) is as important as capturing the overall shape. We briefly review recent approaches developed by the computer graphics community to solve this problem. Excellent results have been obtained by various researchers measuring spatially varying reflectance functions for some classes of objects. We will consider some challenges from two of the remaining problematic classes of objects. First we will describe our experience scanning and modeling the throne of Tutankhamen. The major difficulties in this case were that the base shape was a highly detailed, non-convex geometry with complex topology, and that the shape was covered by optically uncooperative gold and silver. Then we will discuss some observations from our ongoing project to scan and model historic buildings on the Yale campus. The major difficulties in this second case are the quantity of data and the lack of control over acquisition conditions.
Geometric objects are often represented by many millions of triangles or polygons, which limits the ease with which they can be transmitted and displayed electronically. This has led to the development of many algorithms for simplifying geometric models, and to the recognition that metrics are required to evaluate their success. The goal is to create computer graphic renderings of the object that do not appear degraded to a human observer. The perceptual evaluation of simplified objects is a new topic. One approach has been to use image-based metrics to predict the perceived degradation of simplified 3D models. Since 2D images of 3D objects can have significantly different perceived quality depending on the direction of the illumination, 2D measures of image quality may not adequately capture the perceived quality of 3D objects. To address this question, we conducted experiments in which we explicitly compared the perceived quality of animated 3D objects and their corresponding 2D still image projections. Our results suggest that 2D judgements do not provide a good predictor of 3D image quality, and identify a need to develop 'object quality metrics.'
An important goal in interactive computer graphics is to allow the user to interact dynamically with three-dimensional objects. The computing resources required to represent, transmit and display a three-dimensional object depend on the number of polygons used to represent it. Many geometric simplification algorithms have been developed to represent the geometry with as few polygons as possible without substantially changing the appearance of the rendered object. A popular method for achieving geometric simplification is to replace fine-scale geometric detail with texture images mapped onto the simplified geometry. However, the effectiveness of replacing geometry with texture has not been explored experimentally. In this paper we describe a visual experiment in which we examine the perceived quality of various representations of textured geometric objects viewed under direct and oblique illumination. We used a pair of simple large-scale objects with different fine-scale geometric detail. For each object we generated many representations, varying the resources allocated to geometry and texture. The experimental results show that while replacing geometry with texture can be very effective, in some cases the addition of texture does not improve perceived quality, and can sometimes reduce it.
KEYWORDS: 3D modeling, Scanners, Image registration, Cameras, 3D scanning, Laser scanners, Reverse modeling, Data modeling, 3D image processing, Algorithm development
We describe a project to construct a 3D numerical model of Michelangelo's Florentine Pietà to be used in a study of the sculpture. Here we focus on the registration of the range images used to construct the model. The major challenge was the range of length scales involved: a resolution of 1 mm or less was required for the 2.25 m tall piece. To achieve this resolution, we could only acquire an area of 20 by 20 cm per scan, and a total of approximately 700 images were required. Ideally, a tracker would be attached to the scanner to record position and pose, but the use of a tracker was not possible in the field. Instead, we used a crude-to-fine approach to registering the meshes to one another. The crudest level consisted of pairwise manual registration, aided by texture maps containing laser dots that were projected onto the sculpture. This crude alignment was refined by an automatic registration of the laser dot centers; in this phase, we found that consistency constraints on dot matches were essential to obtaining accurate results. The laser dot alignment was further refined using a variation of the ICP algorithm developed by Besl and McKay. In the application of ICP to global registration, we developed a method to avoid one class of local minima by finding a set of points, rather than the single point, that matches each candidate point.
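For context, a baseline point-to-point ICP iteration in the spirit of Besl and McKay is sketched below. It omits the laser-dot initialization, outlier handling, and the multi-point matching extension described above; it is an illustration of the underlying algorithm, not the project's registration code.

```python
# Minimal point-to-point ICP sketch (closest-point correspondences plus an
# SVD rigid-motion solve), provided as a baseline illustration only.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=30):
    """Rigidly align source (N,3) points to target (M,3) points."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iterations):
        _, idx = tree.query(src)               # closest-point correspondences
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)  # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T                         # optimal rotation from SVD
        if np.linalg.det(R) < 0:               # correct a possible reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t                    # apply the rigid transform
    return src
```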
A method for rapidly generating target signatures with limited user interaction and limited computing resources is presented. Simple parameterized generic models are used to represent classes of targets. Within the limits of the allowable parametric variation, as many steps as possible in determining the signature are precomputed. Examples are given of generic models of bridges and dams.
The directional characteristics of surface emittance and reflectance have a significant impact on the radiance of a target. The directional characteristics of a surface are completely specified by the bidirectional reflectance distribution function (BRDF). This paper describes a practical implementation of a method for calculating directional reflection and emission using the BRDF for the purpose of simulating infrared scenes. The implementation is based on the existing Georgia Tech Infrared Signature Code (GTSIG) and the semi-empirical Sandford-Robertson BRDF model.
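For reference, the generic radiometric relation that such a calculation evaluates per surface element is written below: outgoing spectral radiance is the sum of directional thermal emission and BRDF-weighted reflection of incident radiance, with the directional emissivity related to the directional-hemispherical reflectance by Kirchhoff's law. This is the standard form, not the specific GTSIG/Sandford-Robertson parameterization.

```latex
% Outgoing spectral radiance = directional emission + BRDF-weighted reflection
L_o(\omega_o,\lambda) = \varepsilon(\omega_o,\lambda)\,L_{bb}(T,\lambda)
  + \int_{\Omega} f_r(\omega_i,\omega_o,\lambda)\,L_i(\omega_i,\lambda)\,
    \cos\theta_i \,\mathrm{d}\omega_i ,
\qquad
\varepsilon(\omega_o,\lambda) = 1
  - \int_{\Omega} f_r(\omega_i,\omega_o,\lambda)\,\cos\theta_i \,\mathrm{d}\omega_i .
```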