Aerosols and clouds have traditionally been considered distinct from a theoretical point of view. The two components share the same physical nature, namely particles suspended in the air, even if their chemical or physical features may differ. From the radiometric point of view, separating the two components exactly has been a challenging task, and recently scientists have also been discussing a continuum between aerosols and clouds. In this study we use calibrated images from an all-sky camera with the aim of exploring the features that differentiate the two regions in terms of radiometric magnitudes. An intense smoothing is applied, and spatial derivatives are computed on the red-channel radiances. These derivatives are almost zero in the cloud-free area and appreciably different from zero in the rest of the image. Applying dynamic thresholds to the Blue-to-Red Ratio (BRR), we further determine the cloudy region of the sky. We then define the aerosol-cloud transition zone as the non-cloudy part of the sky with intense directional derivatives of the red-channel radiances. This transition zone shows radiometric characteristics different from those of both the cloud and cloud-free regions, for example in terms of the BRR distribution of its pixels.
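The pixel classification described above can be sketched as follows. The smoothing kernel size and the two thresholds are illustrative assumptions (the study applies intense smoothing and dynamic BRR thresholds whose exact values are not given here), and the fixed BRR cut stands in for the dynamic one:

```python
import numpy as np

def box_smooth(img, k=9):
    """Simple k x k mean filter (illustrative stand-in for the intense smoothing)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def classify_sky(red, blue, grad_thresh=0.02, brr_thresh=1.2):
    """Split a calibrated sky image into cloud, transition and cloud-free masks.

    red, blue: 2-D arrays of calibrated radiances.
    grad_thresh, brr_thresh: hypothetical thresholds, not the study's values.
    """
    red_smooth = box_smooth(red)
    gy, gx = np.gradient(red_smooth)
    grad = np.hypot(gy, gx)              # near zero over cloud-free sky
    brr = blue / np.maximum(red, 1e-9)   # Blue-to-Red Ratio: low over clouds
    cloud = brr < brr_thresh                      # fixed stand-in for the dynamic threshold
    transition = (~cloud) & (grad > grad_thresh)  # non-cloudy, strong derivatives
    clear = (~cloud) & (~transition)
    return cloud, transition, clear
```

The three masks partition the image, so the transition zone can then be characterized separately, e.g. by the BRR histogram of its pixels.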
Clouds are essential in climate, in particular for evaluating the radiative balance of the Earth's atmosphere, and their contribution depends on the type of cloud. In addition, cloud classification plays an important role in the development of different research and technological fields, such as solar photovoltaic energy. We use ground-based zenith observations of Cloud Optical Depth (COD) and Cloud Base Height (CBH), at one-minute intervals, to develop a clustering algorithm based on unsupervised machine learning with the k-means function. Due to the intrinsic characteristics of the measuring instruments, high-altitude clouds with large COD are not accurately represented; for this reason, a classification into six categories is performed. Regarding COD, our machine learning method detects three COD clusters separated at 3.2 and 24.5. In turn, the three CBH clusters clearly identify low, mid and high clouds, with centroids around 1500 m, 5399-6240 m, and 9589 m, respectively. A slight increase in these CBH boundaries with COD is also observed. Our clustering method is consistent and robust, since it shows no sensitivity to the temporal window used to perform the clustering. The resulting clusters are consistent and in line with the cloud classification established by the WMO.
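As a sketch of the clustering step, the following minimal 1-D k-means (quantile-initialised; a stand-in for whichever k-means implementation the study uses) recovers three CBH centroids from synthetic low-, mid- and high-cloud samples. The synthetic layer statistics are illustrative, chosen only to mimic the reported centroid ranges:

```python
import numpy as np

def kmeans_1d(x, k=3, iters=100):
    """Minimal 1-D k-means with deterministic quantile initialisation."""
    centroids = np.quantile(x, (np.arange(k) + 0.5) / k)
    for _ in range(iters):
        # Assign each sample to its nearest centroid
        labels = np.argmin(np.abs(x[:, None] - centroids[None, :]), axis=1)
        # Move each centroid to the mean of its assigned samples
        new = np.array([x[labels == j].mean() if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

# Synthetic CBH sample (metres): three illustrative cloud layers
rng = np.random.default_rng(0)
cbh = np.concatenate([rng.normal(1500, 300, 200),    # low clouds
                      rng.normal(5800, 500, 200),    # mid clouds
                      rng.normal(9600, 600, 200)])   # high clouds
centroids, labels = kmeans_1d(cbh, k=3)
```

The same one-dimensional clustering, applied to COD, would yield the cluster boundaries (3.2 and 24.5 in the study).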
The aim of this study is to predict the main aerosol properties of the atmosphere, Aerosol Optical Depth (AOD) and Angstrom Exponent (AE), with the aid of machine learning techniques and images from an all-sky camera. Two machine learning techniques have been used in this work: a random forest (RF) and an artificial neural network (ANN), with target values furnished by the AERONET database. HDR images from the all-sky camera located in Burjassot (Spain) have been used. All of them were taken under clear-sky conditions (without clouds) and with different aerosol loads; the selected images cover a range from 0 to 0.5 of AOD at 500 nm as reference. Data at the ground-based station are available from the 10th of February 2020 to the 31st of March 2021, almost one year of samples. We have developed two ways of building signals, combined with the two machine learning methods. First, a signal is generated from the scattering angles of a single image, obtained as the average relative irradiance (RGB) over 100 random points on each scattering-angle isoline, yielding 29 values per signal. Second, the signal is generated in the same way but from the zenith-angle isolines of a single image. The main result is a significant improvement on the state-of-the-art results for uncalibrated images. For example, for the red channel the percentage of predicted AOD values within the AERONET uncertainties improves from 62% to 90%-93% using an ANN and the zenith method.
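The isoline-averaging step that builds the 29-value input signal can be sketched as follows. The angle grid, tolerance and image geometry are hypothetical placeholders; only the structure (100 random points per isoline, 29 isolines, per-channel averaging) follows the description above:

```python
import numpy as np

def isoline_signal(channel, angle_map, angles=np.linspace(10, 150, 29),
                   tol=1.0, n_points=100, seed=0):
    """Average a channel over random points on each angle isoline.

    channel:   2-D array of relative irradiance (one RGB channel).
    angle_map: 2-D array, scattering (or zenith) angle of each pixel, degrees.
    angles:    29 isoline values (illustrative range, not the study's grid).
    """
    rng = np.random.default_rng(seed)
    signal = np.full(len(angles), np.nan)
    for i, a in enumerate(angles):
        # Pixels lying within `tol` degrees of the isoline
        ys, xs = np.nonzero(np.abs(angle_map - a) < tol)
        if ys.size == 0:
            continue
        # Up to n_points random pixels on the isoline, averaged
        idx = rng.choice(ys.size, size=min(n_points, ys.size), replace=False)
        signal[i] = channel[ys[idx], xs[idx]].mean()
    return signal
```

The resulting 29-element vector is what would be fed to the RF or ANN; for the zenith method, a zenith-angle map replaces the scattering-angle map.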
Radiative closure methodologies to obtain the Cloud Optical Depth (COD) from remote sensing techniques have traditionally relied on one-dimensional (1D) assumptions. These assumptions can be far from the radiation transport over a realistic three-dimensional (3D) atmosphere, especially in cloudy conditions, as the natural inhomogeneities of clouds are not conveniently represented and treated in 1D models. The differences between the 1D and 3D approaches manifest as the 3D effects: (a) the plane-parallel albedo bias and (b) the horizontal transport effect. The plane-parallel albedo bias is usually addressed by means of the Independent Pixel Approximation (IPA), which considers each pixel radiatively independent of the others. Nevertheless, the IPA neglects horizontal transport, introducing biases in the retrievals. In this work, we use the advantages of 3D radiative transfer (RT) to analyze the COD and parameterize the 3D biases in terms of the plane-parallel approach. Detailed 3D RT simulations using MYSTIC are performed over two highly resolved Large Eddy Simulation cloud fields of known optical thickness. The output radiance is analyzed by a 1D IPA inversion retrieval based on a radiative closure to obtain the COD. The comparison between the retrieved COD fields for diverse illumination conditions and the true COD allows us to study the 3D effects separately and to evaluate the retrieval. Our results show a radiation enhancement at cloud edges, depending on the solar, viewing and cloud geometries, that induces a COD underestimation. The 1D approach works well for overcast conditions and underestimates the COD in broken-cloud scenarios.
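The plane-parallel albedo bias mentioned above can be illustrated with a few lines of arithmetic: because cloud albedo is a concave function of optical depth, the albedo of the mean COD overestimates the mean of the pixel-wise (IPA) albedos. The simplified two-stream-style albedo expression below is only illustrative, not the RT model used in the study:

```python
import numpy as np

def albedo(tau, g=0.85):
    """Simplified two-stream-style albedo of a conservative cloud layer.

    R = tau' / (tau' + 2) with scaled optical depth tau' = (1 - g) * tau;
    a common textbook-level approximation, used here only to show concavity.
    """
    tau_scaled = (1.0 - g) * tau
    return tau_scaled / (tau_scaled + 2.0)

# Heterogeneous broken-cloud field vs its plane-parallel counterpart
tau_field = np.array([2.0, 5.0, 10.0, 40.0])   # illustrative pixel CODs
r_ipa = albedo(tau_field).mean()               # IPA: per-pixel albedo, then average
r_pp = albedo(tau_field.mean())                # plane-parallel: albedo of the mean COD
# Concavity of R(tau) implies r_pp > r_ipa: the plane-parallel albedo bias
```

This is exactly the bias the IPA corrects, while the horizontal transport effect (radiation leaking between columns) is what the IPA still neglects.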
A commercial all-sky camera is employed to derive a whole-sky Cloud Optical Depth (COD) product. The methodology consists of a radiative closure combining measurements of the blue and red channels with libRadtran 1D monochromatic radiance simulations. In addition, a matrix of data-quality flags is obtained for every COD image. The flags indicate the reliability of the retrieval at each pixel and give information about the method used to resolve the monochromatic radiance ambivalence. The flags product also indicates the presence of out-of-range radiances with respect to the RT simulations; such out-of-range radiances are related to the neglect of horizontal radiation transport in the 1D plane-parallel approach. A set of around 2000 images from 2020 has been analyzed and the COD obtained for each pixel. The COD shows values ranging from 0 to 130, with around 83% of the cases between 5 and 30. Our COD results have been validated using the zenith COD retrieval from AERONET; they present very good agreement with the AERONET cloud-mode retrieval, with a correlation factor of 0.94 and a slope of 0.99.
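A minimal sketch of the per-pixel closure and flagging logic, assuming a precomputed lookup table of simulated radiances (in the study, from libRadtran). The tolerance, COD grid and flag encoding are hypothetical; the point is how ambivalent and out-of-range radiances are detected:

```python
import numpy as np

def retrieve_cod(measured, cod_grid, lut_radiance, tol=0.02):
    """Nearest-match inversion of one pixel radiance against a 1D RT lookup table.

    measured:     normalized radiance of one pixel (single channel).
    cod_grid:     COD values used to build the table.
    lut_radiance: simulated radiance for each COD entry.
    Returns (cod, flag): flag 0 = unique match, 1 = ambivalent (several CODs
    reproduce the radiance), 2 = radiance outside the simulated range.
    """
    lo, hi = lut_radiance.min(), lut_radiance.max()
    if measured < lo - tol or measured > hi + tol:
        return np.nan, 2   # out-of-range: symptom of neglected 3D transport
    diff = np.abs(lut_radiance - measured)
    close = np.nonzero(diff < tol)[0]
    # Ambivalence: the radiance-COD relation need not be monotonic, so
    # non-adjacent table entries can match the same measured radiance.
    ambivalent = close.size > 0 and (close.max() - close.min()) > close.size - 1
    return cod_grid[diff.argmin()], (1 if ambivalent else 0)
```

In the study the ambivalence is resolved by combining the two channels; here the flag simply records that it occurred.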
All-sky cameras are devices with very high potential for studying atmospheric phenomena and were originally designed to obtain the cloud cover. However, methods based on different approaches produce significant differences in the results. State-of-the-art methods, thanks to computer vision and machine learning (ML) techniques, usually offer better performance than traditional algorithms based on channel ratios, which use fixed or adaptive thresholds to classify the pixels of an image as cloud or cloud-free. We have developed a cloud-cover adaptive-threshold algorithm based on the Probability Density Function (PDF) of the Blue-to-Red Ratio (BRR), which stands out for its simplicity, its ease of implementation, and its compatibility with any sky camera in terms of technical requirements and type of image acquisition. The goal of this study is to compare our algorithm with a more fashionable method based on machine learning, discovering the pros and cons of each one and whether, ultimately, less can be more. The comparison has been carried out using a 1-year HDR imagery database, representing a wide range of atmospheric scenarios, such as clear-sky, cloudy and partly cloudy conditions, different aerosol conditions, and cloud types such as cirrus, cumulus, stratus and nimbus. To establish a quantitative comparison of both methods, a limited set of images has been chosen. The PDF method shows better agreement than our ML implementation, with a better performance for all weather conditions, in the comparison against our cloud-cover database.
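The adaptive-threshold idea can be sketched as finding a cut between the cloud and clear modes of the BRR distribution. Here Otsu's between-class-variance criterion on a histogram is used as an illustrative substitute for the PDF-minimum rule of the study; the bin count is an assumption:

```python
import numpy as np

def adaptive_brr_threshold(brr, bins=256):
    """Adaptive BRR cut maximising the between-class variance (Otsu's criterion).

    Illustrative substitute for the PDF-based threshold described in the text;
    it likewise places the cut between the cloudy (low-BRR) and clear
    (high-BRR) modes of the distribution.
    """
    hist, edges = np.histogram(np.ravel(brr), bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = hist / hist.sum()              # normalized PDF estimate
    mean_all = (w * centers).sum()
    best_t, best_var = centers[0], -1.0
    w0 = m0 = 0.0
    for i in range(bins - 1):
        w0 += w[i]                     # weight of the low-BRR class
        m0 += w[i] * centers[i]
        w1 = 1.0 - w0                  # weight of the high-BRR class
        if w0 == 0.0 or w1 <= 0.0:
            continue
        var_between = w0 * w1 * (m0 / w0 - (mean_all - m0) / w1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[i]
    return best_t

# Pixels below the threshold are classified as cloud (clouds have a low BRR):
# cloud_mask = brr < adaptive_brr_threshold(brr)
```

The method needs only the image itself, which is what makes it compatible with any sky camera regardless of calibration or acquisition mode.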