Water body extraction plays an important role in flood control and the utilization of water resources. With the launch of China's first high-resolution (50 m) geostationary optical satellite, GF-4, at the end of December 2015, wide-swath (400 km) and high-frequency (minute-level) imaging capabilities have been greatly improved, providing new possibilities for rapid and accurate water body monitoring. To explore the potential of the GF-4 satellite for water body monitoring, this paper proposes a water body extraction method based on the temporal variability of near-infrared (NIR) spectral features. From a series of preprocessed and coregistered GF-4 images, one is chosen as the base image, and thresholding of its NIR band (B5) is first applied to eliminate most non-water regions. Then, for each pixel, the variance of the B5 radiance values across all images is calculated to obtain a variogram, and pixels whose variogram values are larger than a threshold given by the OTSU algorithm are further eliminated. Finally, the water body extraction result is obtained after post-classification processing. To evaluate the efficacy of the proposed method, two groups of GF-4 datasets with complex water distribution are selected from the middle and lower reaches of the Yangtze River in China. Experimental results demonstrate that, thanks to the high-frequency and high-resolution characteristics of GF-4, the proposed method can extract more small water bodies, effectively remove built-up areas, and exceed the extraction accuracy of the water index method by about 4%.
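A minimal sketch of the temporal-variance idea described above, assuming (illustratively, not from the paper) that the coregistered GF-4 NIR (B5) images are stacked in a NumPy array of shape (T, H, W) and that the base-image NIR threshold is chosen empirically:

```python
import numpy as np
from skimage.filters import threshold_otsu  # OTSU thresholding, as named in the abstract

def extract_water(nir_stack, base_index=0, nir_threshold=0.12):
    # `nir_stack`: (T, H, W) coregistered NIR band images; `nir_threshold` is assumed.
    base = nir_stack[base_index]
    # Step 1: NIR thresholding on the base image removes most non-water pixels
    # (water absorbs strongly in the NIR, so low values are water candidates).
    candidate = base < nir_threshold
    # Step 2: per-pixel temporal variance of the NIR radiance across all images.
    variance = np.var(nir_stack, axis=0)
    # Step 3: OTSU threshold on the variance of the remaining candidates; stable
    # (low-variance) pixels are kept as water, highly variable pixels are discarded.
    var_thresh = threshold_otsu(variance[candidate])
    water = candidate & (variance <= var_thresh)
    return water  # post-classification cleanup (e.g., small-object removal) would follow
```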
To obtain a complete representation of scene information in high-spatial-resolution remote sensing scene images, an increasing number of studies have turned to the multiple low-level feature types-based bag-of-visual-words (multi-BOVW) model, for which the two-phase classification-based multi-BOVW method is one of the most popular approaches. However, this method ignores the relative significance of the different feature types in the score-level fusion stage, which limits the classification performance of multi-BOVW methods. To address this limitation, a feature significance-based multi-BOVW scene classification method is proposed, which integrates information about how well each feature type separates the different scene categories into the traditional two-phase classification-based score-level fusion framework, so that different feature channels are treated differently when classifying different scene categories. Experimental results show that the proposed method outperforms traditional score-level fusion-based multi-BOVW methods and effectively exploits feature significance information in multiclass remote sensing image scene classification tasks.
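A minimal sketch of class-specific, significance-weighted score-level fusion in the spirit of the description above. The score arrays, channel list, and weight values are illustrative assumptions; the paper's exact significance measure is not given in the abstract.

```python
import numpy as np

def fuse_scores(channel_scores, channel_weights):
    """channel_scores: list of (n_samples, n_classes) classifier score arrays, one per feature channel.
       channel_weights: list of (n_classes,) per-class significance weights, one per channel."""
    fused = np.zeros_like(channel_scores[0], dtype=float)
    for scores, weights in zip(channel_scores, channel_weights):
        fused += scores * weights          # each channel contributes more to the classes it separates well
    return fused.argmax(axis=1)            # predicted class labels after fusion
```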
Land-cover composition and change are important factors that affect the global ecosystem. As an effective means of Earth observation, remote sensing has been widely applied to extracting land-cover information and monitoring land-use and land-cover change, for which image classification is a key issue. Most existing studies on object-oriented classification use traditional low-level feature extraction methods, or statistics of low-level features, to represent objects in an image, which loses much of the information contained in remote sensing images. Therefore, to better describe these objects in object-oriented classification, this paper introduces a state-of-the-art feature representation method called bag-of-visual-words (BOVW) to construct middle-level representations instead of low-level features. Based on the idea of BOVW, this paper proposes a BOVW-based framework for object-oriented land-cover classification. For a given remote sensing image, it first applies a pixel-level local feature extraction strategy and constructs a visual vocabulary by K-means clustering, with each cluster treated as a visual word. The image is then segmented into objects, and each object is represented as a histogram of visual word occurrences by mapping the local pixel-level features within the object to the learned visual words. Finally, the calculated histogram serves as the final representation of the object for further classification tasks. Experimental results on a SPOT5 satellite image of Changping County, Beijing, China, acquired in 2002, show that the proposed method exceeds the traditional low-level feature based method in classification accuracy by about 2%.
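A minimal sketch of the BOVW object representation described above. Assumptions (illustrative, not from the paper): `pixel_features` is an (H, W, D) array of pixel-level local features, and `segments` is an (H, W) array of object labels produced by an external segmentation step.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(pixel_features, n_words=100, sample_size=50000, seed=0):
    # Cluster a random sample of pixel-level features; each cluster center is a visual word.
    feats = pixel_features.reshape(-1, pixel_features.shape[-1])
    rng = np.random.default_rng(seed)
    sample = feats[rng.choice(len(feats), size=min(sample_size, len(feats)), replace=False)]
    return KMeans(n_clusters=n_words, n_init=10, random_state=seed).fit(sample)

def object_histograms(pixel_features, segments, vocabulary):
    feats = pixel_features.reshape(-1, pixel_features.shape[-1])
    words = vocabulary.predict(feats)               # map each pixel feature to its nearest visual word
    n_words = vocabulary.n_clusters
    hists = {}
    for obj_id in np.unique(segments):
        obj_words = words[segments.ravel() == obj_id]
        hist = np.bincount(obj_words, minlength=n_words).astype(float)
        hists[obj_id] = hist / max(hist.sum(), 1)   # normalized word-occurrence histogram
    return hists                                     # per-object middle-level features for a classifier
```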
In this paper, image fusion algorithms are used to improve the quality of HJ-1B IRS land surface temperature (LST) products. Multi-temporal HJ-1B IRS LST data are transformed to a common acquisition time within a fusion framework, with MODIS LST products used as reference data. Two core research issues are addressed: 1) how to simplify the fusion model to obtain more robust data production results, and 2) how to handle cloud and cloud-shadow regions. An algorithmic workflow for HJ-1B LST products is proposed, and a specific experiment demonstrates its application prospects.
Land-cover disturbance is an abrupt ecosystem change that occurs over a short time period, such as flood, fire, drought, or deforestation. Monitoring disturbances is crucial for rapid response. In this paper, we propose a time series analysis method for monitoring land-cover disturbance with a high confidence level. The method integrates two procedures: (1) modeling a segment of historical time series data with a season-trend model, and (2) forecasting with the fitted model and flagging disturbances based on the significance of the prediction errors. The method is tested with 16-day MODIS NDVI time series to monitor abnormally inundated areas along the Tongjiang section of the Heilongjiang River in China, which experienced extreme floods and bank breaches in summer 2013. The test results show that the method can detect the time and area of disturbance for each image with no detection delay and at the specified confidence level. The method has few parameters to specify and low computational complexity, so it can be extended to monitoring land-cover disturbance at large scales.
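A minimal sketch of the season-trend monitoring idea described above. Assumptions (illustrative, not the authors' exact formulation): a single-harmonic season-trend model is fitted to a history window of 16-day NDVI observations, and new observations are flagged when the prediction error exceeds a multiple of the residual standard deviation from the history period.

```python
import numpy as np

def design_matrix(t, period=23.0):             # ~23 MODIS 16-day composites per year
    # Columns: intercept, linear trend, annual harmonic (cos, sin).
    w = 2 * np.pi * np.asarray(t, dtype=float) / period
    return np.column_stack([np.ones_like(w), t, np.cos(w), np.sin(w)])

def fit_season_trend(t_hist, ndvi_hist):
    # Ordinary least-squares fit of the season-trend model to the history window.
    X = design_matrix(t_hist)
    coef, *_ = np.linalg.lstsq(X, ndvi_hist, rcond=None)
    resid = ndvi_hist - X @ coef
    return coef, resid.std(ddof=X.shape[1])

def monitor(t_new, ndvi_new, coef, sigma, k=3.0):
    # Flag large negative departures from the forecast as candidate disturbances.
    pred = design_matrix(t_new) @ coef
    return (ndvi_new - pred) < -k * sigma
```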
Compared with a single image, a satellite image time series (SITS) can capture the dynamic changes in land-cover types, enabling a more comprehensive and accurate land-cover classification map. Thanks to decades of data acquisition and new high-temporal-resolution sensors, SITS are becoming more widely available, and corresponding SITS analysis techniques need to be further developed. Most satellite images are multispectral, i.e., multivariate, yet multivariate time series analysis techniques are less mature than their univariate counterparts, and a robust and accurate similarity measure between multivariate time series for SITS clustering appears to be lacking. In this paper, we propose a novel method to transform multivariate SITS into univariate SITS while preserving as much of the useful information as possible. Advanced univariate time series similarity measures can then be adopted to achieve SITS clustering. The proposed method is tested on a Landsat TM SITS dataset and shows a better clustering result than an ordinary multivariate time series similarity measure. In addition, the overall computing time may be reduced due to the dimension reduction.
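A minimal sketch of the reduce-then-cluster idea described above. The abstract does not specify the multivariate-to-univariate transformation; the first principal component of the band values is used here purely as an illustrative stand-in, followed by a simple DTW similarity and hierarchical clustering.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def to_univariate(sits):
    """sits: (n_pixels, n_dates, n_bands) -> (n_pixels, n_dates) via the first principal component."""
    flat = sits.reshape(-1, sits.shape[-1])
    flat = flat - flat.mean(axis=0)
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    return (flat @ vt[0]).reshape(sits.shape[:2])

def dtw(a, b):
    # Classic dynamic time warping distance between two univariate sequences.
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def cluster(series, n_clusters=5):
    # Pairwise DTW distances, then average-linkage hierarchical clustering.
    n = len(series)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = dtw(series[i], series[j])
    return fcluster(linkage(squareform(dist), method="average"), n_clusters, criterion="maxclust")
```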
At-satellite reflectance-based tasseled cap parameters were extracted from HJ-1A/B satellite imagery, the charge coupled device (CCD) data of the Chinese environmental satellites launched on September 6, 2008. Sixteen scenes selected from the four sensors (HJ-1A CCD1, HJ-1A CCD2, HJ-1B CCD1, HJ-1B CCD2) were used. The objectives were to evaluate the consistency of the tasseled cap parameters across the four sensors and to propose combined tasseled cap parameters for all four HJ-1A/B CCDs. The results indicate that the directions of the corresponding tasseled cap vectors of the four sensors were almost the same. A combined at-satellite reflectance-based tasseled cap transformation was then developed from eight HJ-1A/B CCD scenes representing a variety of Chinese landscapes in both leaf-on and leaf-off seasons. The extraction combines the principal component transform with a Gram-Schmidt orthogonalization (GSO) process: brightness was obtained from the first principal component eigenvector, followed by greenness and wetness, and the fourth component was obtained by GSO. The first two dimensions (brightness and greenness) typically capture over 98% of the total spectral variance, brightness, greenness, and wetness together account for over 99.9%, and the fourth component occupies a very small proportion.
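A minimal sketch of deriving an additional axis by Gram-Schmidt orthogonalization, the step named above. The vectors here are illustrative placeholders, not the published HJ-1A/B tasseled cap coefficients.

```python
import numpy as np

def gram_schmidt_step(existing_axes, candidate):
    """Project `candidate` off the span of the orthonormal rows in `existing_axes` and renormalize."""
    v = candidate.astype(float).copy()
    for axis in existing_axes:
        v -= np.dot(v, axis) * axis          # remove the component along each fixed axis
    return v / np.linalg.norm(v)

# Example: given three orthonormal axes (placeholder brightness/greenness/wetness rows for a
# hypothetical 4-band sensor), derive a fourth axis orthogonal to all of them.
axes = np.linalg.qr(np.random.default_rng(0).normal(size=(4, 3)))[0].T   # (3, 4) orthonormal rows
fourth = gram_schmidt_step(axes, np.array([1.0, 0.0, 0.0, 0.0]))
```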
The HJ-1A and HJ-1B satellites were launched from China on September 6, 2008. The radiometric normalization of their charge coupled device (CCD) images remains a challenging task. In this paper, an automatic algorithm for relative radiometric normalization between HJ-1A/B CCD images and Landsat TM is presented. The method directly normalizes the digital numbers (DN) of HJ-1A/B CCD images, band by band, to surface reflectance. A unified linear relationship between the DN of the target images and the surface reflectance of the reference images was derived, and its applicable conditions are described. The iteratively reweighted modification of the multivariate alteration detection (IR-MAD) transformation was used to automatically select pseudo-invariant features (PIFs). The procedure is simple, fast, and completely automatic. The algorithm was applied to normalize three subregions of different HJ-1A/B CCD images, and the results show that the retrieval quality of the surface reflectance meets the requirements of quantitative remote sensing.
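A minimal sketch of the final per-band step described above: once pseudo-invariant features have been selected (by IR-MAD in the paper; assumed here to be given as a boolean mask), a linear relationship from target DN to reference surface reflectance is fitted and applied band by band.

```python
import numpy as np

def normalize_band(target_dn, reference_reflectance, pif_mask):
    # Fit gain/offset on PIF pixels only, then apply to the whole band.
    x = target_dn[pif_mask].astype(float)
    y = reference_reflectance[pif_mask].astype(float)
    gain, offset = np.polyfit(x, y, deg=1)           # least-squares linear fit on the PIFs
    return gain * target_dn.astype(float) + offset   # normalized band in reflectance units

def normalize_image(target_dn_bands, reference_bands, pif_mask):
    # Apply the per-band normalization to every band of the target image.
    return np.stack([normalize_band(t, r, pif_mask)
                     for t, r in zip(target_dn_bands, reference_bands)])
```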
Artificial object identification and image classification are two basic issues in remote sensing (RS) information extraction. A variety of methods, from pixel-based to window-based, have been tried in both domains for many years, but accuracy remains unsatisfactory. Two obvious limitations explain this: the processing cell does not correspond to a true target in the real world, and the features used in the identification procedure are far from sufficient to describe the intrinsic characteristics of the object of interest. In recent years, object-oriented classification has been put forward to fill these gaps in conventional classification. On the one hand, segmentation extracts pixel clusters based on their similarity to form so-called objects with thematic meaning; on the other hand, these objects are vectorized by integrating the GIS (geographical information system) idea into RS, which makes it possible to describe various features of each object, such as shape information and spatial relationships to neighboring objects. In this study, the authors apply this method to artificial object identification, taking ship extraction as an example, using one spectral feature and eight shape features. Results indicate that object-oriented classification is feasible in practice and opens a new way for artificial object extraction.
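A minimal sketch of object-level feature computation for ship extraction as described above. The eight shape features used by the authors are not listed in the abstract; the area, elongation, and compactness measures below are illustrative examples only.

```python
import numpy as np
from skimage.measure import label, regionprops

def object_features(object_mask, intensity_image):
    # `object_mask`: binary segmentation of candidate objects; `intensity_image`: one spectral band.
    feats = []
    for region in regionprops(label(object_mask), intensity_image=intensity_image):
        elongation = region.major_axis_length / max(region.minor_axis_length, 1e-6)
        compactness = 4 * np.pi * region.area / max(region.perimeter ** 2, 1e-6)
        feats.append({
            "area": region.area,                      # object size in pixels
            "elongation": elongation,                 # ships tend to be elongated
            "compactness": compactness,               # low for thin, elongated shapes
            "mean_intensity": region.mean_intensity,  # the single spectral feature
        })
    return feats                                      # per-object features for rule-based or supervised classification
```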
Hyperspectral imagery possesses clear advantages over spaceborne multispectral imagery when employed to quantitatively retrieve parameters such as vegetation type, coverage, biomass, and bare soil moisture. This paper focuses on crucial issues in the pre-processing of hyperspectral imagery, namely band selection, edge radiance correction, tangent correction, and spectral reflectivity conversion, exemplified by a case study in which modular airborne OMIS-I imaging spectrometer data are employed to evaluate desertification. The authors give comprehensive consideration to the statistical characteristics of each spectral band, the diagnostic spectral reflectance of different targets, and the purpose of the practical application, and settled on 41 applicable bands after trying different band combinations. For edge radiance correction, a method based on histogram matching was used, with satisfactory results. In addition, tangent correction aimed at tangent distortion was carried out, complementing the standard geometric rectification. Lastly, during surface feature spectral reflectivity conversion, the symbolic model was converted into a statistical model through the necessary theoretical inference and parameter setting. The results suggest that the quality of the OMIS-I data is substantially improved by this processing and can basically meet the requirements of quantitative retrieval for desertification evaluation.
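A minimal sketch of histogram matching, the technique named above for edge radiance correction. Applying it to edge columns against an interior reference region is an illustrative assumption; the paper's exact correction procedure is not detailed in the abstract.

```python
import numpy as np

def histogram_match(source, reference):
    """Remap `source` values so their empirical distribution matches that of `reference`."""
    src_vals, src_idx, src_counts = np.unique(source.ravel(),
                                              return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size     # empirical CDF of the source region
    ref_cdf = np.cumsum(ref_counts) / reference.size  # empirical CDF of the reference region
    matched = np.interp(src_cdf, ref_cdf, ref_vals)   # map source quantiles onto reference values
    return matched[src_idx].reshape(source.shape)
```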
For remote sensing imagery, every sensor system has a unique system response, namely the point spread function (PSF), or modulation transfer function (MTF), which can be considered a sampling kernel given a priori. A sampling process of this kind does not satisfy the requirements of the Shannon-Whittaker representation theorem, in which case exact reconstruction is impossible. We therefore look for the optimal reconstruction in the mean-square sense, i.e., under the L2 norm. In this paper, we are mainly concerned with applications of optimal reconstruction theory in remote sensing image processing; our aim is to develop a new resampling method for image magnification.
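A minimal, generic sketch related to the discussion above: a least-squares (L2) deconvolution step implemented as a Wiener filter in the Fourier domain, followed by ordinary interpolation for magnification. This is an illustrative stand-in, not the resampling method the paper develops; the PSF, noise-to-signal ratio, and zoom factor are assumed inputs.

```python
import numpy as np
from scipy import ndimage

def wiener_deconvolve(image, psf, noise_to_signal=0.01):
    """L2-optimal (Wiener) deconvolution of `image` by a centered sensor PSF."""
    otf = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)   # PSF -> optical transfer function
    filt = np.conj(otf) / (np.abs(otf) ** 2 + noise_to_signal)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * filt))

def magnify(image, psf, factor=2):
    # Undo the sensor blur in the mean-square sense, then interpolate to the larger grid.
    restored = wiener_deconvolve(image, psf)
    return ndimage.zoom(restored, factor, order=3)             # cubic-spline interpolation
```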