Survey of computer vision-based natural disaster warning systems
Abstract
With the rapid development of information technology, natural disaster prevention is growing as a new research field dealing with surveillance systems. To forecast and prevent the damage caused by natural disasters, the development of systems to analyze natural disasters using remote sensing, geographic information systems (GIS), and vision sensors has been receiving widespread interest over the last decade. This paper provides an up-to-date review of five different types of natural disaster warning systems using computer vision and pattern recognition techniques: wildfire smoke detection, wildfire flame detection, water level detection for flood prevention, coastal zone monitoring, and landslide detection. Finally, we conclude with some thoughts about future research directions.

1.

Introduction

Over the past few decades, global warming has become the most pressing environmental problem the world is facing. Although there are various causes of global warming, it is an undeniable fact that the explosive growth in the use of fossil fuels such as petroleum, coal, and natural gas is the major cause. The world is currently experiencing various natural disasters such as storms, wildfires, droughts, floods, landslides, and tsunamis due to the effects of global warming. According to a report from the Asian Disaster Reduction Center,1 on average, more than 200 million people have been affected by natural disasters every year for the past two decades. For example, a mudslide triggered by heavy rains buried an entire village in the Philippine province of Southern Leyte in February 2006. More than 200 people were killed in the slide, with about 1500 people missing. In addition, a strong undersea earthquake with a magnitude of 7.7 struck Java Island off the southern coast of Indonesia in July 2006, triggering a tsunami that swept away wooden buildings and killed at least 500 people. As this report shows, natural disasters have caused huge losses of both life and property. Therefore, when disaster threatens, an early warning is crucial to reduce the potentially extensive casualties and property damage.

To forecast and prevent the damage caused by natural disasters, the development of systems to analyze natural disasters using remote sensing and geographic information systems (GISs) has been receiving widespread interest over the last decade.

Most current natural disaster warning systems are based on remote sensors that depend on certain characteristics of natural disasters. These systems can be categorized into the following types:

  • Smoke and flame detection: most sensor-based flame or smoke detection systems are based on infrared sensors, optical sensors, or ion sensors that depend on certain characteristics of flame and smoke. The most commonly used is the infrared sensor which is capable of detecting flame or heat sources at temperatures of 200°C to 300°C above ambient background temperature, typical of those heat sources that originate from a fire, while rejecting solar radiation reflections and fluctuations in ambient background temperature.2 An optical detector may be used as a smoke detector. When smoke enters the optical chamber and crosses the path of a light beam, some light is scattered by the smoke particles toward the sensor, thus triggering the alarm. In the absence of smoke, the light passes in front of the detector in a straight line.3 Ionization smoke detectors use an ionization chamber and a source of ionizing radiation to detect smoke. This type of smoke detector is more common because it is inexpensive and better at detecting the smaller amounts of smoke produced by flaming fires.4

  • Flood detection: the most popular method to detect floods is to use water level sensors. Pressure sensors detect force per unit area, represented by the product of the mass of water and the acceleration due to gravity, and can show the quantity of water per unit area. The resulting pressure measurement can be directly transformed to water level. Supersonic wave sensors measure the time of travel of a supersonic wave pulse from emitter to receiver reflected by the water surface. Heat-based sensors measure the temperature difference between the sensor and the water. Since the temperature drop of a sensor is proportional to its contact area with the water, the depth of the sensor in the water can be calculated from its temperature change.5

  • Tidal wave detection: the general method for detecting tidal waves or tsunamis is to use a buoy system. The tidal detection buoy system measures small changes in the depth of the deep ocean caused by tidal waves as they propagate past the sensor. This is accomplished by using an extremely sensitive bottom pressure sensor/recorder to measure very small changes in pressure as the waves move past the buoy system.6 Another type of tidal wave early warning system (TEWS) is equipped with sensors capable of detecting sea-level oscillations within the tidal frequency band by collecting measurements from bottom pressure recorders (BPRs), or tsunamometers, tidal gauges (TGs), and wind-wave gauges (WWGs), equipped with either pressure, acoustic, or optical sensors.7

  • Landslide detection: electrical sensors are commonly used for monitoring devices of landslides, such as extensometers, although they are easily damaged by lightning and electromagnetic noise. In contrast, optical fibers are free from such flaws and devices using optical fibers have been developed in recent years. The sensors basically work by detecting changes in the fiber’s transmission of light due to bending or warping of the optical fiber when the ground moves.8

However, these conventional, remote sensor–based warning systems have the following common disadvantages:

  • They are not triggered until particles or pressure waves actually reach the sensors.

  • Some sensors require expensive equipment.

  • They need a high level of maintenance to check for malfunctions.

  • Some systems, such as tidal wave detection systems, require multiple communication links between sensors.

  • They are usually unable to provide any additional information such as location, size, and level of disaster.9 Therefore, when the alarm is triggered, the system manager still needs to confirm the existence of a disaster by visiting the location.

To solve these problems, many current research projects are trying to develop warning systems using vision sensors such as CCD cameras, infrared cameras, and thermal cameras. Unlike conventional remote sensors, vision sensor-based disaster warning systems involve a lower equipment cost because they use surveillance cameras that are already installed in many public places. In addition, vision sensors can monitor a larger area because the camera functions as a volume sensor rather than as a traditional point sensor. Moreover, these cameras can easily be used to gather additional information such as the location, size, and degree of a disaster, because the system manager can check the surveillance monitor without visiting the location.10

As shown in Fig. 1, vision sensors are installed at remote sites and transfer image sequences to a monitoring server over a wired or wireless network. If an emergency situation is detected, the warning system sounds an alarm and sends image sequences of the remote site to the control center for a visual check of whether it is an actual emergency.

Fig. 1

A general framework for vision-based natural disaster warning systems.


Because natural disaster warnings are receiving more and more attention in recent research, a comprehensive review of the various disaster warning techniques developed in recent years is needed. This paper surveys recent studies on warning techniques for four types of natural disasters: wildfire smoke and flame detection, water level detection for flood warning, coastal zone monitoring for tidal wave alerts, and landslide detection, using computer vision and pattern recognition techniques. However, vision-based disaster detection is a new research field with relatively few published reports; hence, we introduce a few representative works on the subject. Table 1 summarizes the representative categories of natural disaster warning systems based on visual sensors and their main algorithms.

Table 1

Representative categories of natural disaster warning systems and their main algorithms.

Disaster category | Visual features | Decision methods
Wildfire smoke detection | color,11–14 wavelet coefficients,11,12 motion orientation,11,12,16 histogram of oriented gradients,11 intensity entropy,12 feature correlation13 | random forest,11 support vector machine,13,15 fuzzy finite automata,12 hidden Markov models,14 least mean square-based active learning14
Wildfire flame detection | color,10,17,18 intensity,10,17 wavelet energy,10 motion orientation,10 infrared signal,19 spectrum of visible light image20 | fuzzy finite automata,10 linear combination of decision functions,17 fuzzy inference system18
Water level detection | horizontal edge,5,21 pixel difference,22 optical flow,23 vertical edge24 | rule-based classifier,5 gap difference function,5 ratio maximization function,21 k-means clustering,23 linear regression24
Coastal zone management & forecasting tidal waves | intensity difference,25–27 frequency,28,29 scale,28,29 background subtraction,30 active contour model30 | intensity difference function,26,27 boundary function30
Landslide detection | pixel difference,31,32 pixel histogram,33 color,34 shape34 | logical intersection,31 point similarity function,33 multilevel objective functions,34 genetic programming32

In the following sections, we discuss each of the disaster warning techniques listed in Table 1.

In our previous short survey,35 we introduced some representative studies on detecting natural disasters using computer vision. For a more in-depth survey, this paper adds sensor-based disaster detection methods and analyzes the advantages and disadvantages of sensor-based versus vision-based approaches. Furthermore, we cover not only more recent research on disaster detection but also vision-based landslide detection, which is another issue of interest.

The remainder of this paper is organized as follows. Section 2 introduces vision based wildfire smoke and flame detection techniques. The water level detection method for preventing floods is then introduced in Sec. 3. Section 4 introduces representative methods for coastal zone management as well as forecasting tidal waves. The landslide detection method is also introduced in Sec. 5. Finally, some promising future research directions are outlined in Sec. 6.

2.

Wildfire Smoke and Flame Detection Techniques

Although several optical sensors exist to detect wildfire smoke, these optical-based systems often generate many false alarms due to atmospheric conditions such as clouds, shadows, dust particle formations, and light reflections. Additionally, they are expensive. In contrast, a CCD camera delivers similar performance and is much cheaper than IR cameras and other types of advanced sensors.

In general, a CCD camera for wildfire detection can be installed atop a mountain to monitor a wide area, as shown in Fig. 2. Wildfire detection can be divided into two research categories, wildfire smoke detection and flame detection. Smoke detection is particularly important for early warning systems because smoke usually occurs before the flames arise.11

Fig. 2

CCD camera mounted on a watchtower for wildfire detection, taken from Ref. 11.


Most current wildfire detection systems consist of four steps: detection of candidate fire regions, feature extraction, classifier learning, and classification of fire regions, as shown in Fig. 3. In the first step, candidate flame or smoke regions are initially detected using a background subtraction method, while nonflame or nonsmoke colored blocks are filtered out using color probability models. These processes are essential for reducing the computational time required for flame or smoke verification. Next, various parameters can be used to characterize the images for classification, such as color, texture, motion, and shape. After feature extraction, pattern classifier learning is performed based on the feature vectors of the training data. Finally, the candidate flame or smoke regions are classified into predefined classes using the trained pattern classifiers.
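To make the four steps concrete, the following is a minimal sketch in Python, assuming OpenCV, NumPy, and scikit-learn are available; the HSV color bounds, the placeholder feature set, and the choice of classifier are illustrative assumptions, not the features used by any particular system surveyed here.

```python
import cv2
import numpy as np

bg_subtractor = cv2.createBackgroundSubtractorMOG2(history=500)

def candidate_mask(frame_bgr, hsv_lo=(0, 0, 150), hsv_hi=(180, 60, 255)):
    """Step 1: keep pixels that are both moving and roughly smoke/flame colored."""
    motion = bg_subtractor.apply(frame_bgr)                # background subtraction
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    color = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    return cv2.bitwise_and(motion, color)

def extract_features(frame_bgr, mask):
    """Step 2: a placeholder feature vector (mean color plus mask coverage)."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    patch = frame_bgr[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return np.concatenate([patch.reshape(-1, 3).mean(axis=0), [mask.mean()]])

# Steps 3 and 4 plug the feature vectors into any standard learner, e.g.:
#   clf = sklearn.ensemble.RandomForestClassifier().fit(train_X, train_y)
#   label = clf.predict([extract_features(frame, mask)])
```

A real system would replace `extract_features` with the color, texture, motion, and shape descriptors discussed in the following subsections.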

Fig. 3

Block diagram of the wildfire detection procedure.


2.1.

Wildfire Smoke Detection

Despite the significance of smoke detection, there have been only a few studies in this area. Wildfire smoke detection using video cameras is challenging because the main characteristics of smoke, its shape and color, are uncertain and vary constantly.

The main smoke detection methods can be summarized as follows.

Gubbi et al.15 proposed an outdoor smoke detection approach based on wavelets and support vector machines (SVMs). Smoke characterization is carried out by extracting wavelet features from the approximation coefficients and three levels of detail coefficients. The SVMlight implementation was used in all the experiments. The radial basis function (RBF) kernel, given by Eq. (1), is used to map the input vector to a high-dimensional feature space:

Eq. (1)

$$k(x,y)=\exp\!\left(-\frac{\lVert x-y\rVert^{2}}{2\sigma^{2}}\right),\qquad \text{for some } \sigma>0,$$
where $x$ is the input vector and $y$ is the input pattern. The symbol $\sigma$ is a user-specified parameter that determines the width of the effective basis function.
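As a quick illustration, Eq. (1) can be transcribed directly, assuming NumPy; `sigma` is the user-specified width parameter.

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    """Eq. (1): k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))."""
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2))

# Example: a nearby pair scores close to 1, a distant pair close to 0.
# rbf_kernel([1.0, 2.0], [1.1, 2.1], sigma=0.5)
```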

Ham et al.12 proposed a new forest smoke detection approach based on analysis of the temporal patterns of smoke and on fuzzy finite automata (FFA), as shown in Fig. 4. To consider the smoke characteristics over time, the temporal patterns of intensity entropy, wavelet energy, and motion orientation are used to generate multivariate probability density functions, which are applied to the FFA for smoke verification. The proposed FFA consists of a set of fuzzy states (VH = very high, H = high, L = low, and VL = very low) and a transition mapping that describes the events that can occur in a specific state and the resulting new state.

Fig. 4

FFA for forest fire smoke verification. The circular nodes represent a specific state, while the transition from the current state to the next state is shown by an arrow with an input event, taken from Ref. 12.


Habiboglu et al.13 proposed a method that uses background subtraction and color thresholds to find the smoke-colored, slow-moving regions in the video. These regions are divided into spatio-temporal blocks and correlation features are extracted from the blocks. Sets of properties that represent both the spatial and the temporal characteristics of smoke regions are used to form correlation descriptors. An SVM classifier is trained and tested with descriptors obtained from video data containing smoke and smoke-colored objects. In the final step of this smoke detection procedure, a confidence value is determined according to the number of positively classified video blocks and their positions. If no neighbor blocks can be classified as smoke, the confidence level is set to 1. If a single neighbor block is classified as smoke, the confidence level is set to 2. If >2 neighbor blocks are classified as smoke, the confidence level of that block is set to 3, which is the highest level of confidence provided by the algorithm.
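The neighbor-based confidence rule can be sketched as follows, assuming candidate blocks are indexed on a grid and classified independently; treating exactly two smoke neighbors as the highest confidence level is our reading of the rule, not a detail confirmed by Ref. 13.

```python
def block_confidence(block, is_smoke):
    """Confidence 1-3 from the number of 8-neighbors classified as smoke.

    `is_smoke` maps (row, col) -> bool; missing neighbors count as nonsmoke.
    """
    r, c = block
    neighbors = [(r + dr, c + dc)
                 for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                 if (dr, dc) != (0, 0)]
    n = sum(1 for nb in neighbors if is_smoke.get(nb, False))
    if n == 0:
        return 1            # isolated positive block: lowest confidence
    if n == 1:
        return 2
    return 3                # two or more smoke neighbors (assumed highest)
```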

Ko et al.11 proposed a wildfire smoke detection algorithm that uses temporal-spatial visual features and an ensemble of decision trees called a random forest. In order to detect wildfire smoke using a video camera, temporal-spatial characteristics such as color, wavelet coefficients, motion orientation, and a histogram of oriented gradients are extracted from candidate smoke blocks and the corresponding sections of the preceding 100 frames. After training two random forests, the test candidate blocks are applied to the trained random forests. The final distribution of the $i$-th class, $\tilde{P}(c_i)$, is generated by averaging the distributions of random forest 1, $P_T(c_i\mid l_t)$, and random forest 2, $P_S(c_i\mid l_t)$, using Eq. (2).

Eq. (2)

$$\tilde{P}(c_i)=\frac{1}{2}\left[\frac{1}{T}\sum_{t=1}^{T}P_T(c_i\mid l_t)+\frac{1}{T}\sum_{t=1}^{T}P_S(c_i\mid l_t)\right].$$
In Eq. (2), $T$ is the number of trees and $l_t$ is the distribution of the $t$-th tree. The system chooses $c_i$ as the final class of an input block $B_b$ if $\tilde{P}(c_i)$ has the maximum value. According to Eq. (2), a candidate block is declared a real wildfire smoke block if the class with the maximum average probability is VH or H; otherwise, it is declared a nonsmoke block.
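A minimal sketch of the fusion step of Eq. (2), assuming NumPy and that each forest exposes its per-tree class distributions; the variable names are ours.

```python
import numpy as np

def fuse_forests(dists_T, dists_S):
    """Eq. (2): average the mean class distributions of the two forests.

    dists_T, dists_S: arrays of shape (T, n_classes), one row per tree,
    holding P_T(c_i | l_t) and P_S(c_i | l_t), respectively.
    """
    p_T = np.mean(dists_T, axis=0)    # (1/T) * sum_t P_T(c_i | l_t)
    p_S = np.mean(dists_S, axis=0)    # (1/T) * sum_t P_S(c_i | l_t)
    return (p_T + p_S) / 2.0

# The block label is the argmax of the fused distribution; it is reported as
# smoke only when that class is VH or H (per the paper's four fuzzy classes).
# final_class = np.argmax(fuse_forests(dists_T, dists_S))
```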

Figure 5 shows nine examples of wildfire smoke detection results on real wildfire smoke and smoke-colored clouds and fog.

Fig. 5

Wildfire smoke detection results on wildfire smoke (Movies 1–5), smoke-colored cloud (Movies 6, 7, and 9) and fog (Movie 8) test videos, taken from Ref. 11.


Töreyin et al.14 also proposed an automatic, video-based, wildfire smoke detection algorithm containing the following four sub-algorithms: (1) slow-moving video object detection, (2) gray region detection, (3) rising object detection, and (4) shadow elimination. Each sub-algorithm individually detects the presence of smoke in the viewing range of the camera before the decisions of the sub-algorithms are combined by an adaptive weighted majority algorithm. The weights of compound algorithms are re-estimated using an error function in the least mean square algorithm, which is based on the decision of an oracle “security guard” whenever a false alarm occurs.

Guillemant and Vicente16 proposed a smoke identification method based on cumulative analysis of instantaneous motion data. They found that the most efficient data for smoke identification are the velocity distribution in the envelope, whose energy, or average number of instantaneous motion diagnoses per embedded point, is higher than that of most other landscape phenomena, with the exception of clouds. For clouds, however, the standard deviation of the velocity distribution is generally lower than for smoke. Thus, their main criterion for smoke detection is based on analysis of the velocity distribution using a minimum energy threshold and a minimum standard-deviation threshold.
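This two-threshold criterion might be sketched as follows, assuming NumPy; both threshold values are hypothetical tuning constants, not values from Ref. 16.

```python
import numpy as np

def smoke_by_velocity_stats(velocities, min_energy=0.5, min_std=0.2):
    """Flag an envelope whose velocity distribution is energetic and spread out."""
    v = np.asarray(velocities, dtype=float)
    energy = v.mean()   # average number of motion diagnoses per embedded point
    spread = v.std()    # clouds tend to show a lower spread than smoke
    return energy > min_energy and spread > min_std
```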

2.2.

Wildfire Flame Detection

Regions of flame indoors generally produce higher luminance and frequency than background regions, as well as an upward motion with irregular fluctuation patterns in the time domain.9,10 In contrast, wildfires have different characteristics from indoor and close-range fires.14

  • Flame regions, at distances far from the camera, seem to move slower in comparison to nearby objects.

  • Wildfire flames, at a large distance detected at night, appear as bright regions and do not carry much color information.

  • Most wildfire flames exhibit a nonperiodic frequency pattern, in contrast to artificial light sources such as flashing lights on vehicles and building lights, which exhibit a perfect periodic pattern.

  • During the initial stages of a fire, the candidate wildfire flame regions should not move outside some predefined bounds within a limited time span. Due to the second characteristic, much research focuses on developing different algorithms for flame detection at night and smoke detection during the day.

Similar to the method in Ref. 14, the algorithm proposed by Gunay et al.17 consisted of four sub-algorithms: (1) slow-moving video object detection, (2) bright region detection, (3) detection of objects exhibiting periodic motion, and (4) a sub-algorithm interpreting the motion of moving regions in the video. Each of these sub-algorithms characterizes an aspect of fire captured at night by a visible-range PTZ camera. The decision functions of the sub-algorithms yield their own decisions as confidence values in the range $[-1,1]\subset\mathbb{R}$. Sub-algorithm weights are updated according to the least mean square (LMS) algorithm, the most widely used adaptive filtering method. The final classification result, $y(x,n)$, is a linear combination of the decision values of the four sub-algorithms and their weights, given by Eq. (3):

Eq. (3)

$$y(x,n)=\mathbf{D}^{T}(x,n)\,\mathbf{w}(n)=\sum_{i}w_{i}(n)\,D_{i}(x,n),$$
where $\mathbf{D}(x,n)=[D_{1}(x,n)\;\cdots\;D_{M}(x,n)]^{T}$ is the vector of confidence values of the sub-algorithms for the pixel at location $x$ of the input image frame at time step $n$, and $\mathbf{w}(n)=[w_{1}(n)\;\cdots\;w_{M}(n)]^{T}$ is the current weight vector.
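A sketch of this decision fusion with an LMS weight update, assuming NumPy; the step size `mu` and the form of the oracle feedback are assumptions, not values from Ref. 17.

```python
import numpy as np

def fuse_and_update(D, w, y_true=None, mu=0.05):
    """Eq. (3) plus an LMS step.

    D: confidence values of the M sub-algorithms, each in [-1, 1].
    w: current weight vector; y_true: oracle feedback (e.g. the security
    guard's decision), used only when available.
    """
    y = float(np.dot(D, w))              # y(x, n) = sum_i w_i(n) D_i(x, n)
    if y_true is not None:
        e = y_true - y                   # error signal driving the update
        w = w + mu * e * np.asarray(D)   # standard LMS: w <- w + mu * e * D
    return y, w
```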

Figure 6 shows three examples of wildfire flame detection results at night.

Fig. 6

Three examples of wildfire flame detection at night: (a) correct alarm for a fire at night, (b) elimination of fire-truck head lights (upper left) and correct alarm for a fire (lower right), and (c) detection results on an actual forest fire at night, taken from Ref. 17.


Ko et al.10 proposed a flame detection method using FFA, similar to the method in Ref. 11, with probability density functions based on visual features, thereby providing a systematic approach to handling irregularity in computational systems and the ability to handle continuous spaces by combining the capabilities of automata with fuzzy logic. In general, flame regions have a continuously irregular pattern; therefore, probability density functions are generated for the variation in intensity, wavelet energy, and motion orientation, as shown in Fig. 7, and applied to the FFA. The proposed algorithm has been successfully applied to various fire/nonfire videos, including indoor fires and wildfires taking place during both day and night.

Fig. 7

Two-dimensional probability density graph of the intensity, wavelet energy, and motion orientation. VH is very high, H is high, L is low and VL is very low, taken from Ref. 10.


Celik et al.18 used color features and fuzzy logic to detect wildfire flame. Fuzzy logic was used to replace the heuristic rules and make the classification more robust in discriminating between fire and fire-colored objects. To detect the fire region, the proposed method measures $P_f(x,y)$, the likelihood that the pixel at spatial location $(x,y)$ in the YCbCr color image belongs to a fire pixel. To evaluate $P_f(x,y)$, a combination of triangular and trapezoidal fuzzy membership functions is used to represent the difference between $C_r(x,y)$ and $C_b(x,y)$ and the difference between $Y(x,y)$ and $C_b(x,y)$.
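The membership functions themselves are standard; a sketch assuming NumPy follows, where the breakpoints are purely illustrative and not the values tuned in Ref. 18.

```python
import numpy as np

def triangular(x, a, b, c):
    """Membership rising on [a, b], falling on [b, c], zero outside."""
    x = np.asarray(x, dtype=float)
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def trapezoidal(x, a, b, c, d):
    """Membership with a flat top on [b, c]."""
    x = np.asarray(x, dtype=float)
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

# Illustrative fire likelihood from the chrominance difference of a pixel:
# p_fire = trapezoidal(cr - cb, a=20, b=40, c=120, d=160)
```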

In contrast to wildfire flame detection systems based on CCD cameras, some researchers19,20 have tried to analyze the heat of wildfires using infrared images or visible light images to compensate for the limitations of CCD cameras.

3.

Water-Level Detection for Preventing Floods

As mentioned in the introduction, a lot of work using conventional sensors has been conducted in an effort to forecast the degree of flooding of rivers. Although a pressure sensor is easy to use, it must be calibrated and replaced frequently due to possible breakdowns stemming from continuous water pressure. The sonar sensor, although it does not make contact with the water directly, has a short lifespan, and the returning sonic values are sensitive to temperature, rain, and snow. Therefore, using an image sensor to measure the water level is the most recent approach.5

Initial research on water level detection21,22,24 used spatial information derived from image sensors, such as edge direction. In comparison, newer water level detection techniques5,23 exploit the temporal motion information embedded in an image sequence to overcome misdetection caused by erroneous edge detection.

A simplified spatiotemporal water level detection system is shown in Fig. 8. In the first step, the image noise produced by dust on the camera lens, rain, or snow is removed by a smoothing filter. After the spatiotemporal features have been estimated, each spatiotemporal feature is projected onto the y-axis to create a cumulative histogram. During the final step, the water level decision is made using a decision function and the projected features.

Fig. 8

Block diagram of the water level detection procedure.


Udomsiri and Iwahashi21 proposed a new design of a spatial finite impulse response (FIR) filter to automatically detect the water level from a video signal of various river surroundings. This approach employed frame averaging and a horizontal edge detector to distinguish water regions from land regions. An example of a filtered signal is shown in Fig. 9(c), which uses a three-tap bandpass filter as an edge operator. The variance of each horizontal line in the filtered video frame is used as a feature value, as shown in Fig. 9(d). The water level is recognized as the horizontal boundary line between the land and water regions. The feature values in the land region and those in the water region are calculated by setting a temporary boundary. The mean values $m_L$ and $m_W$ are calculated for the land region and water region, as well as their variances $s_L^2$ and $s_W^2$. Based on these values, the ratio defined by Eq. (4),

Eq. (4)

$$\frac{P_{L}P_{W}(m_{L}-m_{W})^{2}}{P_{L}s_{L}^{2}+P_{W}s_{W}^{2}},$$
is calculated to determine whether the boundary is correct. $P_L$ and $P_W$ denote the probabilities that pixels belong to the land class and the water class, respectively. An appropriate water boundary is the point that maximizes the numerator, the weighted between-class distance, while minimizing the denominator, the weighted within-class variance. Figure 9 shows the water level detection procedure using the spatial FIR filter.
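The exhaustive boundary search implied by Eq. (4) can be sketched as follows, assuming NumPy and that the per-row feature is the line variance described above; treating the upper part of the frame as land is an assumption about the camera setup.

```python
import numpy as np

def find_water_boundary(row_feature):
    """Return the row index maximizing the ratio of Eq. (4).

    row_feature: one feature value (e.g. line variance) per image row; rows
    above the candidate boundary are treated as land, rows below as water.
    """
    f = np.asarray(row_feature, dtype=float)
    n, best_row, best_ratio = len(f), None, -np.inf
    for y in range(1, n - 1):
        land, water = f[:y], f[y:]
        pL, pW = land.size / n, water.size / n
        num = pL * pW * (land.mean() - water.mean()) ** 2   # between-class
        den = pL * land.var() + pW * water.var()            # within-class
        ratio = num / den if den > 0 else -np.inf
        if ratio > best_ratio:
            best_ratio, best_row = ratio, y
    return best_row
```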

Fig. 9

Water level detection procedure: (a) input frame, (b) average of frames, (c) filtered image using a three-tap bandpass filter, and (d) the detected water level, taken from Ref. 21.


Park et al.22 introduced a cumulative histogram that emphasized the change of the water surface in sequential images. The histogram showed the cumulative differences between previous images and the current image. However, these differences also appeared in the land region; hence, a bandpass filter was used to remove noise from the cumulative histogram. Finally, this algorithm separated the water and land regions using predefined rules. Figure 10 shows the cumulative histograms and the corresponding pulse histograms for three different situations: calm, night, and day.

Using a method similar to that of Park et al.,22 Yu and Hahn5 measured the water level from images received by the master system, which may contain noise from various sources. An averaging filter and a Gaussian filter were used to reduce the noise. The horizontal edge image was extracted using a Sobel operator, and the position of the longest horizontal line on the bridge support column was found by projecting the edge image onto the y-axis, as shown in Fig. 11. The water level is calculated by measuring the gap, $D$, between the reference indicator, $Y_R$, and the water limit, $Y_W$.
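A rough sketch of this procedure, assuming OpenCV and NumPy; the reference row `y_reference` is a hypothetical calibration input, and the sign convention for the gap $D$ is assumed.

```python
import cv2
import numpy as np

def water_limit(gray, y_reference):
    """Find the water limit row Y_W and the gap D to the reference row Y_R."""
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = np.abs(cv2.Sobel(blur, cv2.CV_64F, 0, 1, ksize=3))  # horizontal edges
    profile = edges.sum(axis=1)        # project edge strength onto the y-axis
    y_water = int(np.argmax(profile))  # strongest horizontal line on the column
    return y_water, abs(y_water - y_reference)
```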

Fig. 10

Cumulative histograms and the corresponding pulse histograms for three different situations: (a) calm, (b) night, and (c) day, taken from Ref. 22.


Fig. 11

y-axis profile for detecting the water limit: (a) the input frame, in which the 'T' signboard is the reference indicator, (b) Gaussian-filtered image, (c) horizontal edge image, and (d) y-axis profile, taken from Ref. 5.


Kwak et al.23 proposed a new automatic water-level detection algorithm for flood prevention using a CCD camera and K-means clustering. The captured input video of the river shows the column of a bridge and flowing water. In contrast to the static column region, the water region exhibits slow and fast movement according to the speed of the water. Using this fact, optical flows are detected from the image sequence, and the motion vectors are projected onto the y-axis to create an accumulated motion histogram. From the accumulated motion histogram, a K-means clustering algorithm is executed to distinguish the real water region from the concrete column region, as shown in Fig. 12.
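A minimal sketch of this idea, assuming OpenCV for Farneback optical flow and scikit-learn for K-means; labeling the higher-motion cluster as water is our simplification of the method in Ref. 23.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def water_rows(prev_gray, gray, motion_hist):
    """Accumulate per-row motion and split rows into two clusters."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    motion_hist += magnitude.sum(axis=1)          # accumulated motion histogram
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(
        motion_hist.reshape(-1, 1))
    water_label = labels[int(np.argmax(motion_hist))]  # water = high-motion side
    return np.nonzero(labels == water_label)[0], motion_hist
```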

Fig. 12

Examples of water-level detection: (a) detected optical flows of flowing water, (b) K-means clustered result of accumulated motion histogram, taken from Ref. 23.


Chapman24 proposed a line-based edge detection algorithm to find the water edge in a tube, as shown in Fig. 13. A series of vertical lines was evaluated within the tube to determine the position of the water's edge. The edge point positions were defined by their pixel row and column positions within the image. The equation for the water line was calculated by a linear regression fit to the edge points. The final step in the process was to convert the position of the line from image measurement units (pixels) to real-world units (feet or meters).
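A compact sketch of the fit and unit conversion, assuming NumPy; `ref_row` and `meters_per_pixel` are hypothetical calibration constants, not values from Ref. 24.

```python
import numpy as np

def water_level_meters(edge_cols, edge_rows, ref_row, meters_per_pixel):
    """Fit a line to the edge points and convert its height to meters."""
    slope, intercept = np.polyfit(edge_cols, edge_rows, deg=1)  # linear regression
    row_at_center = slope * np.mean(edge_cols) + intercept
    return (ref_row - row_at_center) * meters_per_pixel
```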

Fig. 13

Example of water level (blue line) detection at night under infrared light, taken from Ref. 24.


4.

Coastal Zone Management and Detecting Tidal Waves

Monitoring shorelines and their changes over time, as well as tidal waves caused by earthquakes or typhoons, is a core task carried out by coastal engineers for a wide range of research, monitoring, and design applications, such as identifying and quantifying shoreline erosion and assessing the performance of coastal protection structures.36 First, to monitor the coastal zone, a conventional surveying method is the use of satellite and aircraft-mounted remote sensing systems such as light detection and ranging (LiDAR). However, these systems are generally limited to use on clear water in order to obtain improved spatial coverage, and the data they obtain are expensive to manage from the viewpoint of long-term monitoring.25 Fixed video remote sensing systems provide continuous daylight data for periods extending over decades, and they are relatively low-cost compared to other airborne or satellite remote-sensing options.

Temporal pixel differencing along the shoreline is one of the most widely used shoreline detection techniques. As shown in Fig. 14, a panoramic view of the entire coastal embayment is acquired automatically from several cameras, with repeated mapping of shorelines from time exposure images at different tidal water levels. Image analysis techniques, such as region grouping and morphological operations, and decision functions are then applied to the difference image to determine the shoreline position along the shore.
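A schematic version of the differencing and morphology steps, assuming OpenCV and grayscale time-exposure images; the threshold and kernel size are illustrative assumptions.

```python
import cv2
import numpy as np

def shoreline_change_mask(exposure_a, exposure_b, thresh=25):
    """Difference two time-exposure images and clean the result morphologically."""
    diff = cv2.absdiff(exposure_a, exposure_b)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # bridge small gaps
    return mask   # the band of change along the shore marks the shoreline
```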

Fig. 14

General procedure for shoreline detection.27,36 Images acquired from multiple cameras are used to obtain (a) a panoramic view of the entire coastal embayment; (b) rectification of this merged image to produce real-world coordinates allows the resulting aerial-view image to be processed using sophisticated image analysis techniques to determine the exact shoreline position on the shore, marked with a black dashed line in (c).


The CoastView project,25 www.TheCoastViewProject.net, aimed to develop coastal video monitoring systems in support of coastal zone management. The project utilizes Argus video systems that are capable of remotely sensing waves, currents, and beach elevation at high frequency over a scale of several kilometers. The CoastView project is a €2M European project involving 12 partner institutions from six countries, including four national-scale coastal managers from Spain, Italy, the Netherlands, and the UK.

Turner and Anderson26 introduced experience from Australia, using a network of Argus coastal imaging sites, to illustrate and discuss the suite of image-derived coastal zone management products now available to the coastal manager. Both qualitative and quantitative information is routinely delivered via the Internet, ranging from hourly images of the monitored site to weekly summaries of shoreline variability and longer-term beach-width trends. All monitoring program results and data summaries are accessed via a World Wide Web interface, providing real-time delivery directly to the managers' desktop computers.

Uunk et al.27 presented a fully automated procedure to derive the intertidal beach bathymetry on a daily basis, from video images of low-sloping beaches that are characterized by the intermittent emergence of intertidal bars. Bathymetry data is obtained by automated and repeated mapping of shorelines from time exposure images for different tidal water levels. The developed procedure handles intelligent selection of a shoreline search area and unsupervised quality control of the obtained bathymetry data.

Second, beyond coastal zone management, detection of overtopping or high waves caused by tidal waves or tsunamis is also an important issue to address in order to prevent natural disaster damage.

Ishimoto et al.28 proposed an automated wave overtopping detection system using image processing. This system automatically detects overtopping waves of a selectable size in a video camera scene. One to four continuous still pictures, at a selectable time interval, are transmitted immediately after wave overtopping is captured, including the frame just before capture. Choosing the time interval appropriately allows the system to capture the highest wave in the still pictures. The system allows the frequency and scale of overtopping waves to be monitored in real time, which can be used to make decisions on traffic closure.

Seki et al.30 proposed an algorithm that automatically detects high waves from images captured by coastal surveillance cameras. Detection is performed by background subtraction followed by measuring the wave contour in each image using active contour models applied to the subtracted regions. The wave contour is tracked and declared an overtopping wave if it exceeds a predefined boundary. Because this method measures and tracks the wave contour, it is robust to other moving objects and can be applied to predict the occurrence of overtopping waves. In experiments using real video sequences, including the approach of a typhoon and tranquil scenes, the method correctly detected 96 percent of 114 high waves, with no misdetections occurring even in tranquil conditions.

The ITS Handbook of Japan29 contains detailed descriptions of an overtopping wave monitoring system installed on National Highway No. 42 in Japan. The accurate prediction and swift detection of overtopping waves is essential to ensure safe driving during the early stage of overtopping waves and to allow traffic restrictions to be issued without delay when overtopping becomes serious. Thus, a system is used, in this district, to automatically detect overtopping waves, similar to the method described in Ref. 28, where images captured with CCTV cameras are processed while step-type wave gauges along the highway help improve the accuracy of overtopping detection. In addition, overtopping waves are predicted using wave data obtained offshore from Nishimukai so that traffic restrictions can be issued at the proper time. An outline of the overtopping wave monitoring system installed on National Highway No. 42 in Japan is shown in Fig. 15.

Fig. 15

Outline of the overtopping wave monitoring system installed on National Highway No. 42 in Japan, taken from Ref. 29.


5.

Landslide Detection

Traditionally, landslide areas were measured manually from aerial stereo-pair photography, which is a very time-consuming and tedious task. For this reason, a variety of approaches have been proposed to detect landslides automatically using LiDAR images and computer vision techniques. The most common approach is difference region detection,31,32 which compares LiDAR images taken before and after the landslide at the pixel or region level, as shown in Fig. 16. The reference image is saved in the first step, and subsequent images from the same location are obtained by automated and repeated mapping of topographical features. The modified areas are identified by thresholding based on local point similarity, while post-processing, such as connected component labeling or morphological operations, is used to detect the large connected regions. A landslide decision function is applied during the final step, along with feature descriptors of the candidate regions.

Fig. 16

Block diagram of the landslide detection procedure.


Rau et al.31 proposed a pixel-based image difference method, using a logical intersection operation, to detect the main changed area based on spectral and surface characteristics such as terrain slope, surface texture, and greenness. Among these, greenness is an important index for discriminating bare soil from vegetation, since the greenness value changes from high to low in a landslide region. They compare the greenness index maps before and after the landslide using a logical intersection operation to identify the change as a landslide region. Figure 17(a) shows the color ortho-images with landslide ground truth, and Fig. 17(b) shows the greenness index map.

Fig. 17

Examples of landslides detection: (a) color ortho-images with landslide ground truth, (b) greenness index map, taken from Ref. 31.


Bejo et al.33 used a point similarity measure based on mutual information to detect the most significantly changed regions. The point similarity, $SM_{AB}$, is defined for each pixel at coordinate $c$:

$$SM_{AB}(c)=\log\!\left[\frac{P_{AB}(p_c^A,p_c^B)}{P_A(p_c^A)\,P_B(p_c^B)}\right],$$
where $P_A(p^A)$ is the normalized histogram of gray values in the reference image, $P_B(p^B)$ is the normalized histogram of the sensed image, and $P_{AB}(p_i^A,p_i^B)$ is the normalized joint histogram of the gray values $p_i^A$ and $p_i^B$ that correspond to the same pixel $i$. The changed areas are represented by thresholding the local point similarities. Then, connected component labeling is used to detect large connected regions, which are likely to be landslide regions. This is a very simple and fast method for detecting landslides, but it cannot distinguish a landslide from any other change on the earth's surface that alters the appearance of the image.
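The point similarity map can be computed densely with histogram lookups; a sketch assuming NumPy and 8-bit single-channel images of identical size follows, with a small epsilon added for numerical safety (our addition, not part of Ref. 33).

```python
import numpy as np

def point_similarity(img_a, img_b, bins=256, eps=1e-12):
    """Per-pixel SM_AB from marginal and joint gray-level histograms."""
    a, b = img_a.ravel().astype(int), img_b.ravel().astype(int)
    p_a = np.bincount(a, minlength=bins) / a.size              # P_A
    p_b = np.bincount(b, minlength=bins) / b.size              # P_B
    p_ab = np.histogram2d(a, b, bins=bins,
                          range=[[0, bins], [0, bins]])[0] / a.size  # P_AB
    sm = np.log((p_ab[a, b] + eps) / (p_a[a] * p_b[b] + eps))
    return sm.reshape(img_a.shape)   # low similarity marks likely changed pixels
```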

Martha and Kerle34 proposed an object segmentation method to detect the difference region using multiple segmentation strategies. A segmentation method with a single set of parameters is not able to detect the landslide regions due to the complex shapes and sizes of landslides. Instead of one object level pertaining to a single parameter, multiple object levels with a set of optimum parameters are used to detect landslides of variable sizes in high-resolution satellite images. The proposed method does not require a priori knowledge to optimize segmentation, which makes it particularly suitable for a rapid response to a disastrous landslide.

However, difference region detection is very sensitive to small illumination changes. To address this problem, Hervas and Rosin32 combined an image differencing method with a genetic programming algorithm for landslide detection. To extract features from the changed pixels, the difference image between acquisitions before and after the landslide reactivation is computed, and mathematical morphology operations are applied. Then, genetic programming is used to distinguish the "change" and "no-change" classes among changed pixels from optical remote sensing data.

6.

Conclusions and Future Directions

The world is currently experiencing various natural disasters such as storms, wildfires, droughts, floods, landslides, and tsunamis due to the effects of global warming. Because natural disasters cause huge losses of both life and property, early warnings are crucial to reduce the potentially extensive casualties and property damage when disaster threatens. In this paper, we introduced recent developments in research on natural disaster detection with a focus on various computer vision techniques. The existing approaches were summarized for wildfire detection, water level detection, coastal zone management, and landslide detection. Although research on vision-based natural disaster detection is still in its preliminary stages, there has been remarkable progress. The advantages of vision-based disaster warning systems are that they have a lower equipment cost, can monitor a larger area, and can provide additional information, such as the location, size, and degree of a disaster, without the need to visit the location.

However, there are still a few problems in the detection and accurate forecasting of natural disasters, due to the limitation of gathering information from a CCD camera. The details of these problems are as follows:

  • CCD cameras cannot capture heat or humidity information which can provide reliable clues that distinguish real wildfires from false positives such as light, clouds, and fog.

  • CCD cameras cannot acquire a clear image at night without special equipment.

  • Computer vision has limitations in terms of the removal of severe noise caused by heavy rain or snow.

  • Computer vision is limited in terms of camera shake compensation.

To overcome these shortcomings, it is clear that further research is required in the areas of pre-processing for noise removal and ego-motion compensation, feature extraction, ensemble methods combining vision with other conventional sensors, classification methods, and warning representation methods to achieve the final goal of natural disaster detection and forecasting.

This review clearly shows that great advances have been made in vision-based disaster warning techniques, but many challenges still remain in achieving more accurate disaster warning techniques in diverse environments.

Acknowledgments

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2011-0007406).

References

1. K. Suzuki, Good Practices 2007 Supplement, Asian Disaster Reduction Center (ADRC), Japan (2007).

2. B. C. Ko, K. H. Jung and J. Y. Nam, "Early fire detection algorithm based on irregular patterns of flames and hierarchical Bayesian Networks," Fire Safety J. 45(2), 262–270 (2010). http://dx.doi.org/10.1016/j.firesaf.2010.04.001

4. B. Marshall, How Smoke Detectors Work, HowStuffWorks.com (2010).

5. J. Yu and H. Hahn, "Remote detection and monitoring of a water level using narrow band channel," J. Inform. Sci. Eng. 26(1), 71–82 (2010).

6. R. A. Lawson, "Tsunami detection systems for international requirements," in Oceans, 1–7 (2007).

7. G. M. Beltrami, M. D. Risio and P. D. Girolamo, The Tsunami Threat—Research and Technology, InTech (2011).

8. K. Higuchi et al., "Application of new landslide monitoring technique using optical fiber sensor at Takisaka landslide," in 1st North American Landslide Conf., 1–9 (2007).

9. B. C. Ko, H. J. Hwang and J. Y. Nam, "Nonparametric membership functions and fuzzy logic for vision sensor-based flame detection," Opt. Eng. 49(12), 127202 (2010). http://dx.doi.org/10.1117/1.3520069

10. B. C. Ko, S. J. Ham and J. Y. Nam, "Modeling and formalization of fuzzy finite automata for detection of irregular fire flames," IEEE Trans. Circuits Syst. Video Technol. 21(12), 1903–1912 (2011). http://dx.doi.org/10.1109/TCSVT.2011.2157190

11. B. C. Ko, J. Y. Kwak and J. Y. Nam, "Wildfire smoke detection using temporal-spatial features and random forest classifiers," Opt. Eng. 51(1), 017208 (2012). http://dx.doi.org/10.1117/1.OE.51.1.017208

12. S. J. Ham, B. C. Ko and J. Y. Nam, "Vision-based forest smoke detection using analyzing of temporal patterns of smoke and their probability models," Proc. SPIE 7877, 78770A (2011). http://dx.doi.org/10.1117/12.871995

13. Y. H. Habiboglu, O. Gunay and E. Cetin, "Real-time wildfire detection using correlation descriptors," in 19th European Signal Processing Conf., 894–898 (2011).

14. B. U. Toreyin and A. E. Cetin, "Wildfire detection using LMS based active learning," in Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, 1461–1464 (2009).

15. J. Gubbi, S. Marusic and M. Palaniswami, "Smoke detection in video using wavelets and support vector machines," Fire Safety J. 44(8), 1110–1115 (2009). http://dx.doi.org/10.1016/j.firesaf.2009.08.003

16. P. Guillemant and J. Vicente, "Real-time identification of smoke images by clustering motions on a fractal curve with a temporal embedding method," Opt. Eng. 40(4), 554–563 (2001). http://dx.doi.org/10.1117/1.1355254

17. O. Gunay et al., "Video based wild fire detection at night," Fire Safety J. 44(6), 860–868 (2009). http://dx.doi.org/10.1016/j.firesaf.2009.04.003

18. T. Celik, H. Ozkaramanli and H. Demirel, "Fire and smoke detection without sensors: image processing based approach," in European Signal Processing Conf., 1794–1798 (2007).

19. J. Li et al., "Technique for automatic forest fire surveillance using visible light image," in Int. Conf. Geoscience and Remote Sensing Symposium, 31–35 (2005).

20. I. Bosch et al., "Infrared image processing and its application to forest fire surveillance," in IEEE Int. Conf. on Advanced Video and Signal Based Surveillance, 283–288 (2007).

21. S. Udomsiri and M. Iwahashi, "Design of FIR filter for water level detection," World Acad. Sci. Eng. Technol. 48, 47–52 (2008).

22. S. Park et al., "The water level detection algorithm using the accumulated histogram with band pass filter," Acad. Sci. Eng. Technol. 56, 193–197 (2009).

23. J. Y. Kwak, B. C. Ko and J. Y. Nam, "Automatic water-level detection using CCD camera and k-mean clustering," J. Kor. Soc. Image Sci. Technol. 17(9), 1–8 (2011).

24. K. W. Chapman, Camera Based Water Level Detection (2011).

25. H. Burcharth, Developing Coastal Video Systems in Support of Coastal Zone Management (2005).

26. I. Turner and D. Anderson, "Web-based and 'real-time' beach management system," Coast. Eng. 54(6–7), 555–565 (2007). http://dx.doi.org/10.1016/j.coastaleng.2007.01.002

27. L. Uunk, K. M. Wijnberg and R. Morelissen, "Automated mapping of the intertidal beach bathymetry from video images," Coast. Eng. 57(4), 461–469 (2010). http://dx.doi.org/10.1016/j.coastaleng.2009.12.002

28. K. Ishimoto, T. Chiba and Y. Kajiya, "Wave overtopping detection by image processing," in Second World Congress on Intelligent Transport Systems, 515–518 (1995).

29. Overtopping Wave Monitoring System (2001).

30. M. Seki, H. Taniguchi and M. Hashimoto, "Overtopping wave detection based on wave contour measurement," IEEJ Trans. Electron. Inform. Syst. 127(4), 599–604 (2007). http://dx.doi.org/10.1541/ieejeiss.127.599

31. J. Y. Rau, Y. C. Shao and L.-C. Chen, "Landslides detection using aerial ortho-images and LiDAR data," in Proc. of Asian Assoc. Remote Sens., 1–6 (2006).

32. J. Hervas and P. L. Rosin, "Image thresholding for landslide detection by genetic programming," in Proc. of the First Int. Workshop on Multitemporal Remote Sensing Images, 65–72 (2002).

33. S. K. Bejo, M. Petrou and A. Ganas, "Local similarity measure for landslide detection and identification in comparison with the image differencing method," Int. J. Remote Sensing 31(23), 6033–6045 (2010). http://dx.doi.org/10.1080/01431160903376365

34. T. R. Martha and N. Kerle, "Segment optimisation for object-based landslide detection," Int. Arch. Photogramm. Remote Sens. Spatial Inform. Sci. XXXVIII-4/C7(1), 1–6 (2010).

35. B. C. Ko et al., "Current trend in natural disaster warning systems based on computer vision techniques," in First Asian Conf. on Pattern Recognition, 1–5 (2011).

36. M. Davidson et al., "The CoastView project: developing video-derived coastal state indicators in support of coastal zone management," Coast. Eng. 54(6–7), 463–475 (2007). http://dx.doi.org/10.1016/j.coastaleng.2007.01.007

Biography


ByoungChul Ko received his BS degree from Kyonggi University, Korea, in 1998, and his MS and PhD degrees in computer science from Yonsei University, Korea, in 2000 and 2004. He was a senior researcher at Samsung Electronics from 2004 through 2005. He is currently an associate professor in the Department of Computer Engineering, Keimyung University, Daegu, Korea. His research interests include content-based image retrieval, fire detection, and robot vision.


Sooyeong Kwak received her PhD degree from Yonsei University, Seoul, Korea in 2010. She was a senior researcher at Samsung Electronics from 2010 through 2011. She is currently an assistant professor in the Department of Electronic and Control Engineering at Hanbat National University, Daejeon, Korea. Her research interests include robot vision, video analysis, and intelligent surveillance systems.

© 2012 Society of Photo-Optical Instrumentation Engineers (SPIE) 0091-3286/2012/$25.00
ByoungChul Ko and Sooyeong Kwak, "Survey of computer vision-based natural disaster warning systems," Optical Engineering 51(7), 070901 (28 June 2012). https://doi.org/10.1117/1.OE.51.7.070901