Post-wildfire damage to and loss of vegetation cover can escalate the risks of secondary disasters such as floods, landslides, and water contamination, particularly in major wildfire-affected regions where human settlements are situated. In assessments of these secondary disaster risks, post-wildfire vegetation cover change is a key factor influencing the distribution and intensity of the risks. In this work, a processing framework for mapping post-wildfire vegetation cover changes through information fusion was developed and tested using Landsat 8 and WorldView imagery. The test site was the boreal forest region surrounding Fort McMurray, Alberta, Canada, affected by a massive wildfire in May 2016. The use of WorldView data revealed more detail in the distribution of vegetation burn damage than the use of Landsat data. Moreover, uncertainty in vegetation burn severity derived from the Landsat-based Differenced Normalized Burn Ratio (dNBR) index exists in areas with low dNBR values due to the sub-pixel effect.
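The dNBR index referred to above is computed from pre- and post-fire Normalized Burn Ratios. A minimal sketch follows; the reflectance values are hypothetical and serve only to illustrate how low dNBR readings can be ambiguous:

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from NIR and SWIR reflectance bands."""
    return (nir - swir) / (nir + swir)

# Hypothetical per-pixel reflectance values (e.g., Landsat 8 NIR and SWIR2).
pre_nir, pre_swir = np.array([0.45, 0.40]), np.array([0.15, 0.12])
post_nir, post_swir = np.array([0.20, 0.38]), np.array([0.25, 0.13])

# Differenced Normalized Burn Ratio: pre-fire NBR minus post-fire NBR.
# High dNBR suggests severe burn; values near zero are ambiguous,
# especially where sub-pixel mixing dilutes the burned signal.
dnbr = nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)
print(dnbr)
```

Here the first pixel is clearly burned (dNBR well above 0.5), while the second sits near zero, in the uncertain range the abstract attributes to the sub-pixel effect.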
Floods are the most common disaster in Canada. As a result of rapid urbanization and climate change, both the frequency and the risks of floods have increased in Canadian urbanized areas, where disasters usually have costlier impacts than in rural areas. Optical remote sensing imagery and technologies are helpful and can be applied to urban flood response and pre-disaster preparation. In particular, high- and very-high-resolution optical remote sensing can be used for precise mapping of floodwater distribution in dense urban areas, providing key information for disaster response management. In addition, geospatial information about the urban land surface and urban growth derived from optical remote sensing imagery can be a key input for urban flood risk analyses.
In recent years, several case studies of different urban flood types, including fluvial (Calgary 2013, Ottawa-Gatineau 2017) and pluvial (the Greater Toronto Area) floods in Canada, have been carried out at the Canada Centre for Mapping and Earth Observation, Natural Resources Canada. Methodologies and a framework for urban floodwater mapping have been developed based on high-resolution optical data, and the impacts of urban growth on urban flash flood risk have been investigated using model simulations with remote-sensing-derived maps as inputs. This presentation demonstrates results from three Canadian urban flood case studies and introduces remote-sensing-based methodologies for different types of urban floods.
Urban floods, especially those in dense built-up areas, have severe impacts on resident populations and infrastructure. The real-time geographic extent of flooded areas, delineated using remote sensing data and technologies, is one of the key information inputs for effective disaster management and rapid rescue response. Images from visible-band remote sensors are the most common and cost-effective sources for such real-time applications. Based on an understanding of the differing characteristics of floodwater and of urban land surface classes, a robust method has been developed and automated to extract floodwater using RGB band digital numbers (DNs). The methodology has been applied to delineate visible flood extent from very high-resolution aerial imagery. The methodology development involved rule development, segment- and pixel-based feature analysis, automated feature extraction, and validation of results. The accuracies for the visible floodwater class are above 83.94% and the overall accuracies are above 96.68% at both pixel and segment levels for three test sites with diverse urban landscapes.
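A rule-based extraction of the kind described can be sketched as below. The thresholds and band relationships are illustrative assumptions, not the published rule set; they only convey the idea of separating turbid floodwater from shadow and bright impervious surfaces using RGB DNs:

```python
import numpy as np

def visible_floodwater_mask(rgb):
    """Rule-based mask for turbid floodwater in an RGB DN array (H, W, 3).

    Illustrative rules (assumed, not the published ones): turbid urban
    floodwater tends to be brownish (R >= G >= B) with moderate
    brightness, which separates it from shadow (dark) and from bright
    impervious surfaces such as roofs and pavement.
    """
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    brightness = (r + g + b) / 3.0
    return (r >= g) & (g >= b) & (brightness > 60) & (brightness < 200)

# Tiny synthetic scene: one floodwater-like pixel, one shadow, one bright roof.
scene = np.array([[[150, 120, 90], [30, 30, 35], [240, 240, 240]]],
                 dtype=np.uint8)
mask = visible_floodwater_mask(scene)
print(mask)
```

In practice such pixel rules are combined with segment-based feature analysis, as the abstract notes, to suppress isolated false positives.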
High-resolution red-green-blue (RGB) images from remote sensors, such as those carried on aircraft, UAVs, satellites, and the International Space Station (ISS), are cost-effective data sources for real-time emergency response applications. We describe an assessment of spectral behaviors undertaken to evaluate the effectiveness of two high-resolution RGB image datasets for mapping and monitoring floodwater extent in dense urban areas. The assessment was part of a case study of the Calgary 2013 flood event. The input imagery included very high-resolution aerial photos and imagery acquired with the SERVIR Environmental Research and Visualization System (ISERV) carried on the ISS. The results demonstrate the complementary nature of these two RGB image sets in providing effective urban floodwater mapping for real-time response: the aerial photos, with higher spatial resolution and fewer atmospheric effects, provide detail about the floodwater distribution, while the ISERV-ISS images capture the temporal variation of that distribution.
A practical processing framework for EO-based detection of building damage in dense urban areas is proposed based on pre- and post-event shadow differencing. The basic dataset used for the detection of damaged buildings includes LiDAR and multispectral images with high spatial resolution. The typical building damage types after a major earthquake, such as height reduction, overturn collapse, and inclination, have been considered in this study. Through a scenario case study based on simulations of both building damage and shadow, understanding of the relationship between shadow and building damage is improved for real-time response practices.
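The geometric signal behind shadow differencing can be illustrated with the standard shadow-length relation; the building heights and sun elevation below are assumed values, not from the study:

```python
import math

def shadow_length(building_height_m, sun_elevation_deg):
    """Shadow length cast on flat ground for a given sun elevation angle.

    Height-reduced collapse shortens the shadow proportionally, which is
    the change that pre-/post-event shadow differencing exploits.
    """
    return building_height_m / math.tan(math.radians(sun_elevation_deg))

pre = shadow_length(30.0, 45.0)    # intact 30 m building
post = shadow_length(12.0, 45.0)   # height reduced to 12 m after collapse
delta = pre - post
print(delta)                        # shadow shortened by 18 m at 45° sun elevation
```

Overturn collapse and inclination change the shadow's shape and orientation rather than just its length, which is why the study simulates the damage types separately.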
Timeliness is a critical requirement for the provision of information during disasters such as floods in urban areas. Images from RGB remote sensors (such as those carried on satellites, aircraft, UAVs, and the International Space Station) are potentially cost-effective data sources for real-time applications. This paper describes work undertaken to evaluate two high-resolution RGB image datasets for rapid response mapping and monitoring of visible urban floodwater extent. Our overall goal is to develop a robust, universally applicable methodology for extraction of flooded-area information. The methods and results are demonstrated through a case study of the characterization and delineation of visible flood extent, and its changes, during the June 2013 City of Calgary flood event. The input imagery included very high-resolution aerial photography and imagery acquired with the International Space Station's (ISS) SERVIR Environmental Research and Visualization System (ISERV). The methodology development involved analysis and comparison of the spectral responses of urban land surface types, shadows, and turbid floodwater. Based on an understanding of these spectral properties, a universally applicable method was developed and assessed to extract visible floodwater from RGB imagery.
Accurate and frequent monitoring of land surface changes arising from oil and gas exploration and extraction is a key requirement for the responsible and sustainable development of these resources. Petroleum deposits typically extend over large geographic regions, but much of the infrastructure required for oil and gas recovery takes the form of numerous small-scale features (e.g., well sites, access roads, etc.) scattered over the landscape. Increasing exploitation of oil and gas deposits will increase the presence of these disturbances in heavily populated regions. An object-based approach is proposed to utilize RapidEye satellite imagery to delineate well sites and related access roads in diverse, complex landscapes where land surface changes also arise from other human activities, such as forest logging and agriculture. A simplified object-based change vector approach, adaptable to operational use, is introduced to identify land disturbances based on red–green spectral response and the spatial attributes of candidate object size and proximity to roads. Testing of the techniques has been undertaken with RapidEye multitemporal imagery at two test sites in Alberta, Canada: one a predominantly natural forest landscape and the other dominated by intensive agricultural activity. Accuracies of 84% and 73%, respectively, have been achieved for the identification of well-site and access-road infrastructure at the two sites based on fully automated processing. Limited manual relabeling of selected image segments can improve these accuracies to 95%.
The combination of rapid global urban growth and climate change has resulted in increased occurrence of major urban flood events across the globe. The distribution of flooded area is one of the key information layers for emergency planning and response management applications. While SAR systems and technologies have been widely used for flood area delineation, radar images suffer from range ambiguities arising from corner reflection effects and shadowing in dense urban settings. A new mapping framework is proposed for the extraction and quantification of flood extent based on aerial optical multispectral imagery and ancillary data. This involves first mapping flood areas directly visible to the sensor. Subsequently, the complete area of submergence is estimated from this initial mapping using inference techniques based on baseline data such as land cover and GIS information such as available digital elevation models. The methodology has been tested and proven effective using aerial photography for the case of the 2013 flood in Calgary, Canada.
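The inference step, going from visibly flooded pixels to the complete submerged area, can be sketched in its simplest DEM-based form. This is an assumed simplification (flat water surface, DEM only), not the full framework, which also uses land cover and other GIS layers:

```python
import numpy as np

def infer_submerged(dem, visible_water):
    """Infer the full submerged area from directly visible floodwater.

    Simplified assumption: the highest DEM elevation among visibly
    flooded pixels estimates the local water surface; every pixel at or
    below that level is flagged as submerged, including pixels hidden
    from the sensor by buildings or trees.
    """
    water_level = dem[visible_water].max()
    return dem <= water_level

# Toy DEM (metres) and a partial visible-water mask: only two flooded
# pixels are directly visible to the sensor.
dem = np.array([[1.0, 1.2, 2.5],
                [1.1, 1.3, 3.0]])
visible = np.zeros_like(dem, dtype=bool)
visible[0, 0] = True
visible[0, 1] = True
submerged = infer_submerged(dem, visible)
print(submerged)
```

Note that the occluded pixel at (1, 0) is recovered because it lies below the inferred water level, while higher ground stays dry.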
Information on land conversion to modern urban use is needed for many studies, such as those on the impact of urbanization on environmental quality. Although extensive remote sensing research has been undertaken to detect conversion of nonurban to urban lands, little effort has been directed at assessing modernization of existing built-up land. Detection and quantification of this class of urban growth present significant challenges, since the difference between radiometric signatures before and after "land modernization" is much more subtle and complicated than in the case of conversion from typical rural to impervious urban land surfaces. A target-driven approach is presented for efficient extraction of the built-up land change distribution that provides superior results to those based on traditional data-driven land cover approaches. The extraction strategy, integrating pixel- and object-based methodologies, comprises three components: delineation of the baseline built-up areas, detection of the areas that have undergone change, and integration of targeted change features to generate a final built-up land change map. A case study was carried out using RapidEye and SPOT5 images over suburban Beijing, China. The overall accuracy of built-up change mapping is about 91% and exceeds the accuracies achievable by pixel or segment processing used in isolation.
This study explores a spatiotemporal comparative analysis of urban agglomeration, comparing the Greater Toronto and Hamilton Area (GTHA) of Canada and the city of Tianjin in China. The vegetation–impervious surface–soil (V–I–S) model is used to quantify the ecological composition of urban/peri-urban environments with multitemporal Landsat images (3 stages, 18 scenes) and LULC data from 1985 to 2005. The support vector machine algorithm and several knowledge-based methods are applied to derive the V–I–S component fractions with high accuracy. The statistical results show that urban expansion in the GTHA occurred mainly between 1985 and 1999, and only two districts revealed increasing trends for impervious surfaces from 1999 to 2005. In contrast, Tianjin has been experiencing rapid urban sprawl at all stages, accelerating since 1999. The urban growth pattern in the GTHA evolved from monocentric and dispersed to polycentric and aggregated, while in Tianjin it changed from monocentric to polycentric. Central Tianjin has become more centralized, while most other municipal areas have developed dispersed patterns. The GTHA also has a higher level of greenery and a more balanced ecological environment than Tianjin. Understanding these differences between the two areas may play an important role in urban planning and decision-making in developing countries.
Natural resources development, spanning exploration, production, and transportation activities, alters the local land surface at various spatial scales. Quantification of these anthropogenic changes, both permanent and reversible, is needed for compliance assessment and for the development of effective sustainable management strategies. Multispectral high-resolution imagery from SPOT5 and RapidEye was used for extraction and quantification of anthropogenic and natural changes in a case study of Alberta bitumen (oil sands) mining located near Fort McMurray, Canada. Two test sites representative of the major Alberta bitumen extraction processes, open-pit and in-situ extraction, were selected. A hybrid change detection approach, combining pixel- and object-based target detection and extraction, is proposed based on Change Vector Analysis (CVA). The extraction results indicate that the changed infrastructure landscapes of these two sites have different footprints linked to their differing oil sands production processes. Pixel- and object-based accuracy assessments have been applied for validation of the change detection results. For manmade disturbances other than fine linear features such as seismic lines, accuracies of about 80% have been achieved at the pixel level, rising to 90-95% at the object level. Since many disturbance features are transient, land surface change through vegetation regrowth and the capacity for natural restoration at the mining sites have also been assessed.
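The CVA step underlying both this study and the well-site work above can be sketched as follows; the two-band values and the change-magnitude threshold are illustrative assumptions:

```python
import numpy as np

def change_vector(t1, t2):
    """Change Vector Analysis: per-pixel spectral change between two dates.

    t1, t2: (n_pixels, n_bands) reflectance arrays. The magnitude of the
    change vector flags how much a pixel changed; its direction (not used
    here) hints at the type of change, e.g. vegetation loss vs. regrowth.
    """
    delta = t2.astype(float) - t1.astype(float)
    magnitude = np.linalg.norm(delta, axis=1)
    return delta, magnitude

# Illustrative two-band (red, NIR) values: pixel 0 is cleared of
# vegetation between dates, pixel 1 is unchanged forest.
t1 = np.array([[0.05, 0.45], [0.04, 0.50]])
t2 = np.array([[0.25, 0.20], [0.05, 0.49]])
delta, mag = change_vector(t1, t2)
changed = mag > 0.1  # hypothetical change-magnitude threshold
print(changed)
```

In the hybrid approach described, such pixel-level change magnitudes would then be aggregated into image objects before target extraction.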
Automated image endmember extraction from hyperspectral imagery is a challenging and critical step in spectral mixture analysis (SMA). In recent years, great efforts have been made and a large number of algorithms have been proposed to address this issue. Iterative error analysis (IEA) is one of the well-known existing endmember extraction methods. IEA identifies pixel spectra as image endmembers through an iterative process. In each iteration, a fully constrained (abundance nonnegativity and abundance sum-to-one constraints) spectral unmixing based on the previously identified endmembers is performed to model all image pixels. The pixel spectrum with the largest residual error is then selected as a new image endmember. This paper proposes an updated version of IEA with improvements in three aspects of the method. First, fully constrained spectral unmixing is replaced by a weakly constrained (abundance nonnegativity and abundance sum-less-than-or-equal-to-one constraints) alternative. This is necessary because only a subset of the endmembers present in a hyperspectral image has been extracted up to an intermediate iteration, so the abundance sum-to-one constraint is not yet valid. Second, the search strategy for achieving an optimal set of image endmembers is changed from sequential forward selection (SFS) to sequential forward floating selection (SFFS) to reduce the so-called "nesting effect" in the resultant set of endmembers. Third, a pixel spectrum is identified as a new image endmember based on both its spectral extremity in the feature hyperspace of a dataset and its capacity to characterize other mixed pixels. This is achieved by evaluating a set of extracted endmembers using a criterion function consisting of the mean and standard deviation of the residual error image. A preliminary comparison between the image endmembers extracted using the improved and original IEA is conducted based on an Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) dataset acquired over the Cuprite mining district, Nevada, USA.
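The core IEA selection rule, unmixing against the current endmember set and picking the worst-modelled pixel, can be sketched as below. This is a dependency-free illustration, not the published algorithm: it uses plain least squares where the method applies the nonnegativity and sum-less-than-or-equal-to-one constraints, and the toy spectra are assumptions:

```python
import numpy as np

def iea_step(pixels, endmembers):
    """One iteration of a simplified IEA selection rule.

    Each pixel is modelled as a linear mixture of the current endmember
    set; the pixel with the largest residual error becomes the next
    candidate endmember. The full method constrains abundances to be
    nonnegative with sum <= 1; unconstrained least squares is used here
    to keep the sketch minimal.
    """
    E = np.column_stack(endmembers)                        # (bands, k)
    abundances, *_ = np.linalg.lstsq(E, pixels.T, rcond=None)
    residuals = np.linalg.norm(pixels.T - E @ abundances, axis=0)
    return int(np.argmax(residuals))

# Toy 3-band scene: a mixture of the two known endmembers (well modelled,
# small residual) and a spectrum outside their span (large residual).
em1 = np.array([1.0, 0.0, 0.0])
em2 = np.array([0.0, 1.0, 0.0])
pixels = np.array([0.5 * em1 + 0.5 * em2,
                   [0.0, 0.0, 1.0]])
idx = iea_step(pixels, [em1, em2])
print(idx)
```

The unmodelled spectrum is selected, illustrating why IEA grows the endmember set from whichever pixel the current set explains worst.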