A new endmember extraction method has been developed that is based on a convex cone model for representing vector data. The endmembers are selected directly from the data set. The algorithm for finding the endmembers is sequential: the convex cone model starts with a single endmember and increases incrementally in dimension. Abundance maps are simultaneously generated and updated at each step. A new endmember is identified based on the angle it makes with the existing cone. The data vector making the maximum angle with the existing cone is chosen as the next endmember to add to enlarge the endmember set. The algorithm updates the abundances of previous endmembers and ensures that the abundances of previous and current endmembers remain positive or zero. The algorithm terminates when all of the data vectors are within the convex cone, to some tolerance. The method offers advantages for hyperspectral data sets where high correlation among channels and pixels can impair un-mixing by standard techniques. The method can also be applied as a band-selection tool, finding end-images that are unique and forming a convex cone for modeling the remaining hyperspectral channels. The method is described and applied to hyperspectral data sets.
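For illustration, the sequential angle test described above might be prototyped as follows; the function name, tolerance, and stopping rule are illustrative assumptions rather than the authors' implementation. Each spectrum is projected onto the current cone with non-negative least squares, and the spectrum making the largest angle with its projection is added next.

    import numpy as np
    from scipy.optimize import nnls

    def convex_cone_endmembers(X, tol_deg=1.0, max_endmembers=20):
        """Greedy angle-based endmember selection from X (pixels x bands)."""
        E = [X[np.argmax(np.linalg.norm(X, axis=1))]]   # seed with the largest-norm spectrum
        while len(E) < max_endmembers:
            A = np.array(E).T                           # bands x current endmembers
            angles = np.empty(X.shape[0])
            for i, x in enumerate(X):
                a, _ = nnls(A, x)                       # non-negative abundances on the cone
                proj = A @ a
                c = (x @ proj) / (np.linalg.norm(x) * np.linalg.norm(proj) + 1e-12)
                angles[i] = np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
            j = int(np.argmax(angles))
            if angles[j] < tol_deg:                     # every pixel fits the cone, to tolerance
                break
            E.append(X[j])                              # enlarge the cone with the worst-fit pixel
        return np.array(E)

Because the abundances are re-solved with nnls at every enlargement, those of earlier endmembers remain at or above zero, mirroring the update behavior described above.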
A multiple simplex endmember extraction method has been developed. Unlike convex methods that rely on a single simplex, the number of endmembers is not restricted by the number of linearly independent spectral channels. The endmembers are identified as the extreme points in the data set. The algorithm for finding the endmembers can simultaneously find endmember abundance maps. Multispectral and hyperspectral scenes can be complex and contain many materials under a variety of illumination and environmental conditions, but individual pixels typically contain only a few materials in a small subset of the illumination and environmental conditions which exist in the scene. This forms the physical basis for the approach, which restricts the number of endmembers that combine to model a single pixel. No restriction is placed on the total number of endmembers, however. The algorithm for finding the endmembers and their abundance maps is sequential. Extreme points are identified based on the angle they make with the existing set. The point making the maximum angle with the existing set is chosen as the next endmember to add to enlarge the endmember set. The maximum number of endmembers allowed in a subset model for an individual pixel is controlled by an input parameter. The subset selection algorithm is sequential and takes place simultaneously with the overall endmember extraction. The algorithm updates the abundances of previous endmembers and ensures that the abundances of previous and current endmembers remain positive or zero. The method offers advantages in multispectral data sets where the limited number of channels impairs material un-mixing by standard techniques. A description of the method is presented herein and applied to real and synthetic hyperspectral and multispectral data sets.
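The per-pixel cap on the number of active endmembers could be conveyed by a hypothetical greedy unmixer such as the one below; the growth rule and parameter names are assumptions intended only to illustrate the idea of a subset model, not the published algorithm (it assumes max_active does not exceed the number of endmembers).

    import numpy as np
    from scipy.optimize import nnls

    def unmix_pixel_subset(x, E, max_active=3):
        """Unmix one pixel x using at most max_active endmembers from E (endmembers x bands)."""
        active, abund = [], np.zeros(E.shape[0])
        for _ in range(max_active):
            best_j, best_res, best_a = None, np.inf, None
            for j in range(E.shape[0]):
                if j in active:
                    continue
                a, res = nnls(E[active + [j]].T, x)     # trial subset, bands x (k+1)
                if res < best_res:
                    best_j, best_res, best_a = j, res, a
            if best_j is None:                          # no endmembers left to try
                break
            active.append(best_j)
            abund[:] = 0.0
            abund[active] = best_a                      # non-negative abundances of the subset
        return abund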
N-FINDR, an automated end-member detection and unmixing algorithm, was first proposed four years ago. Since then, the algorithm has been used successfully in a number of situations. The apparent success of the N-FINDR algorithm is a strong motivator for a complete review of its approach, from its assumptions to its implementation details. This paper reviews the approach used in N-FINDR, and makes a theoretical argument that the algorithm works. The algorithm can be proven to work perfectly on theoretically perfect data. Moreover, N-FINDR can be shown to have good (although imperfect) convergence properties with non-ideal data.
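For readers unfamiliar with the algorithm, N-FINDR inflates a simplex inside the data and keeps the vertices that maximize its volume. A minimal sketch of that idea, assuming the data have already been reduced to p-1 dimensions (for example, with PCA) and ignoring the efficiency tricks of the real implementation:

    import numpy as np
    from math import factorial

    def simplex_volume(E):
        """Volume (up to a constant) of the simplex spanned by the rows of E,
        where E holds p candidate endmembers in p-1 reduced dimensions."""
        p = E.shape[0]
        M = np.vstack([np.ones(p), E.T])          # (p x p) augmented matrix
        return abs(np.linalg.det(M)) / factorial(p - 1)

    def nfindr_like(X, p, iters=3, seed=0):
        """Illustrative simplex inflation over reduced data X (pixels x (p-1))."""
        rng = np.random.default_rng(seed)
        idx = rng.choice(X.shape[0], p, replace=False)
        for _ in range(iters):
            for k in range(p):                    # try swapping each vertex
                for i in range(X.shape[0]):       # against every pixel
                    trial = idx.copy()
                    trial[k] = i
                    if simplex_volume(X[trial]) > simplex_volume(X[idx]):
                        idx = trial
        return idx                                # indices of the selected endmembers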
The estimation of abundance coefficients, or unmixing, of hyperspectral data is important in a wide variety of applications. Assuming the major constituents, or endmembers, of a scene are known, the unmixing problem is relatively straightforward and easily solved using least-squares techniques. What is less well understood, however, is how error in the original data affects the final solution. This error generally takes two forms: measurement error introduced by the sensor, and modeling error that arises from the assumption of linear mixing. In this paper, we investigate how the unmixing process propagates error that arises from sensor noise. In particular, we derive statistical bounds on how much error can be expected in the estimation of abundance coefficients due to measurement error. We also discuss how this error may affect post-processing algorithms such as subpixel target detection, and consider ways to validate the noise model through the use of residuals.
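For the unconstrained least-squares case, the propagation of sensor noise into abundance error takes a standard closed form; the snippet below is a textbook sketch under an i.i.d. Gaussian noise assumption, not the specific bounds derived in the paper, and the matrix values are made up for illustration.

    import numpy as np

    # Linear mixing: x = E a + n, with sensor noise n ~ N(0, sigma^2 I).
    # The least-squares estimate a_hat = (E^T E)^{-1} E^T x has covariance
    # sigma^2 (E^T E)^{-1}; its diagonal gives per-endmember error variances.
    def abundance_error_covariance(E, sigma):
        """E: bands x endmembers mixing matrix; sigma: noise std per band."""
        return sigma ** 2 * np.linalg.inv(E.T @ E)

    # Hypothetical example: two endmembers over five bands.
    E = np.array([[0.2, 0.8],
                  [0.3, 0.7],
                  [0.5, 0.5],
                  [0.7, 0.3],
                  [0.8, 0.2]])
    cov = abundance_error_covariance(E, sigma=0.01)
    print(np.sqrt(np.diag(cov)))   # 1-sigma bounds on each abundance estimate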
Well-chosen background models are critical to accurately predict the performance of hyperspectral detection and classification algorithms and to evaluate the effects on system performance of variation in environmental or sensor parameters. Such models also have implications for the derivation of optimal algorithms. First-principles physical models and statistical models have been developed for these purposes. However, in many circumstances these models may not accurately represent hyperspectral data that are complicated by intra-class variability and subpixel mixing of materials as well as atmospheric, illumination, temperature (in the emissive regime) and sensor effects. In this paper we propose a statistical representation of hyperspectral data defined by class parameters and an abundance probability distribution. Various representations of the probability distribution function of the abundance values are developed and compared with data to determine if the estimated abundance distributions and intra-class variation explain the observed heavy tails in the data. The consequences of the Gaussian endmembers of the normal compositional model violating the non-negativity constraint are also investigated.
Hyperspectral imaging provides the potential to derive sub-pixel material abundances. This has significant utility in the detection of sub-pixel targets or targets concealed under canopy. The linear mixture model describes spectral data in terms of a basis set of pure material spectra or endmembers. The success of such a model is dependent on the choice and number of endmembers used and the unmixing process. Endmember spectra may come from field or laboratory measurements; however, differences between sensors and changes in environmental conditions may mean that the measurement is not representative of the material as found in the scene. Alternatively, a number of algorithms exist to select spectra from the data directly, but these assume pure examples of the complete set of materials exist within the imagery. In either case, the chosen set of endmembers may not optimally describe the data in a linear mixing sense. In this paper, some new methods for endmember selection are presented. These are evaluated on hyperspectral imagery and the results compared with those of a well-known automatic selection technique. Finally, an improved unmixing architecture is proposed which is self-consistent in terms of endmember selection and the unmixing process.
For hyperspectral remote sensing, the physics-based transformation connecting two multivariate sets of spectral radiance data of the same scene collected at two disparate times is approximately linear (plus an offset). Generally, the covariance structures of two such data sets provide partial information about any linear transformation connecting them. The remaining unknown degrees of freedom of the transformation must be deduced from other statistics, or from a knowledge of the underlying phenomenology. Among all the possible transformations consistent with measured pairs of hyperspectral covariance structures, a particularly simple and accurate one has been found. This "rotation free" flavor of "Covariance Equalization" (CE) has led to a simplified signal processing architecture that has been implemented in a real time VNIR hyperspectral target detection system. This paper describes that architecture, presents detection performance results, and introduces a new algorithm for long-interval change detection, Matched Change Detection.
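A generic, offline sketch of covariance equalization for change detection might look as follows. The symmetric square-root form shown here is one common way to fix the undetermined rotation and should not be read as the exact "rotation free" transform, the real-time architecture, or the Matched Change Detection algorithm of the paper.

    import numpy as np
    from scipy.linalg import sqrtm, inv

    def ce_change_scores(X1, X2):
        """X1, X2: co-registered (pixels x bands) data from two collection times."""
        m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
        C1 = np.cov(X1, rowvar=False)
        C2 = np.cov(X2, rowvar=False)
        L = np.real(sqrtm(C2) @ inv(sqrtm(C1)))     # map time-1 covariance onto time-2
        pred = (X1 - m1) @ L.T + m2                 # predicted time-2 scene from time-1 data
        resid = X2 - pred                           # what the linear model cannot explain
        W = inv(np.cov(resid, rowvar=False))
        return np.einsum('ij,jk,ik->i', resid, W, resid)   # per-pixel change statistic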
The accuracy of subpixel detection in hyperspectral imagery degrades with approximation error arising from cluttered backgrounds and complex target objects. In this paper, we develop a non-parametric generalized likelihood ratio (NGLR) statistic for the subpixel detection of 3-D objects that is invariant to the illumination and atmospheric conditions. We construct the target and background subspaces from target models and the image data. The NGLR is established by estimating the conditional probability densities for the background and target hypotheses using subspace residuals. We use DIRSIG to evaluate the performance of NGLR for detecting subpixel 3-D objects composed of multiple materials in varying illumination and atmospheric conditions. NGLR provides accurate detection results that are invariant to the environmental conditions.
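The subspace residual idea underlying the NGLR statistic can be written compactly; the sketch below is the classical structured-background GLR form (a ratio of residual energies) and omits the non-parametric density estimation that distinguishes NGLR.

    import numpy as np

    def subspace_glr(x, B, T):
        """GLR-style statistic for one pixel x (bands,): residual energy outside the
        background subspace B (bands x b) over residual energy outside the combined
        target-plus-background subspace [T B] (T is bands x t)."""
        def resid_energy(S, v):
            Q, _ = np.linalg.qr(S)          # orthonormal basis of the subspace
            r = v - Q @ (Q.T @ v)           # component of v outside the subspace
            return r @ r
        return resid_energy(B, x) / resid_energy(np.hstack([T, B]), x)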
This paper focuses on comparing three basis-vector selection techniques as applied to target detection in hyperspectral imagery. The basis-vector selection methods tested were the singular value decomposition (SVD), pixel purity index (PPI), and a newly developed approach called the maximum distance (MaxD) method. Target spaces were created using an illumination invariant technique, while the background space was generated from AVIRIS hyperspectral imagery. All three selection techniques were applied (in various combinations) to target as well as background spaces so as to generate dimensionally-reduced subspaces. Both target and background subspaces were described by linear subspace models (i.e., structured models). Generated basis vectors were then implemented in a generalized likelihood ratio (GLR) detector. False alarm rates (FAR) were tabulated along with a new summary metric called the average false alarm rate (AFAR). Some additional summary metrics are also introduced. Impact of the number of basis vectors in the target and background subspaces on detector performance was also investigated. For the given AVIRIS data set, the MaxD method as applied to the background subspace outperformed the other two methods tested (SVD and PPI).
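One plausible reading of the maximum distance (MaxD) idea is a greedy selection of the spectrum farthest from the span of the vectors chosen so far; this toy version is an assumption about the method's flavor, not the authors' exact procedure.

    import numpy as np

    def maxd_like_basis(X, n_vectors):
        """Greedy 'maximum distance' style basis selection (illustrative).
        X is pixels x bands; returns the selected basis vectors."""
        idx = [int(np.argmax(np.linalg.norm(X, axis=1)))]   # start with the largest-norm spectrum
        for _ in range(n_vectors - 1):
            Q, _ = np.linalg.qr(X[idx].T)                   # orthonormal basis of current span
            resid = X - (X @ Q) @ Q.T                       # residual of every pixel
            idx.append(int(np.argmax(np.linalg.norm(resid, axis=1))))
        return X[idx]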
Detection of military and civilian targets from airborne platforms using hyperspectral imaging (HSI) sensors is of great interest. Relative to multispectral sensing, hyperspectral sensing can increase the detectability of targets by exploiting finer detail in spectral signatures. A multitude of adaptive detection algorithms have appeared in the literature or have found their way into software packages and end-user systems. The most widely known among them is the linear matched filter. However, despite its popularity, the consequences of using the matched filter under conditions that deviate from its implicit optimality assumptions have received little attention. The optimum linear matched filter assumes that the target and background spectra follow normal distributions with identical covariance matrices. However, in HSI data, the fundamental assumption of equal covariance matrices is likely violated. An analytic matched filter performance expression assuming unequal covariance matrices is derived, which also allows performance estimates under incomplete class descriptions (only a few principal components are known). Examples based on HSI data show that theoretical performance can be overly optimistic compared to actual performance. Example performance estimates based on incomplete class descriptions are shown to be more realistic because of the elongation of typical HSI data distribution hyperellipses.
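For reference, the linear matched filter being examined is only a few lines under the equal-covariance assumption; the normalization below is one common convention, and the function name is illustrative.

    import numpy as np

    def matched_filter_scores(X, target, mean_bg, cov_bg):
        """Linear matched filter applied to pixels X (pixels x bands).
        Optimality assumes Gaussian target and background with a shared
        covariance; the paper examines what happens when that is violated."""
        w = np.linalg.solve(cov_bg, target - mean_bg)   # filter vector Sigma^{-1}(t - mu)
        w = w / (w @ (target - mean_bg))                # scale so a pure target scores 1
        return (X - mean_bg) @ w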
The timely analysis and exploitation of data from multispectral/hyperspectral sensors on remote sensing platforms can be a daunting task. One such sensor platform is the Multispectral Thermal Imager (MTI), which provides a highly informative source of remote sensing data. In a typical exploitation scenario, an image analyst may need to consistently locate regions/objects of interest from a stream of imagery in a timely manner. Many available image analysis/segmentation techniques are often slow, not robust to spectral variability from view to view or within a spectrally similar region, or require a significant amount of user intervention, including a priori knowledge, to achieve a segmentation corresponding to self-similar regions within the data. This paper discusses an unsupervised segmentation approach that exploits the gross spectral shape of MTI data. We describe a nonparametric unsupervised approach based on a graph theoretic representation of the data. The goal of this approach is to perform coarse level segmentation that can stand alone or as a potential precursor to other image analysis tools. In comparison to previous techniques, the key characteristics of this approach are its simplicity, speed, and consistency. Most importantly it requires few user inputs and determines the number of spectral clusters, their overall size, and subsequent pixel assignment directly from the data.
Scene classification using multiple views for an urban/rural scene was studied. Visible/Near Infrared (VNIR) imagery was acquired over Fresno, CA, on July 13, 2003, using the 4-color Quickbird multi-spectral imager. Four scenes were acquired at view angles of +60, nadir, -45, and -60 degrees on a descending pass. Bidirectional reflectance distribution function (BRDF) effects were present. Analysis was conducted at 10-meter spatial resolution. Variations in reflectance with view angle aided in the extraction of surface characteristics, and allowed for improved classification accuracy.
A classification scheme incorporating spectral, textural, and contextual information is detailed in this paper. The gray level co-occurrence matrix (GLCM) is calculated to generate texture features. Those features are then subjected to a selection process for joining with spectral data in order to evaluate their discrimination capability in classification performance. The classification result is further enhanced with contextual information in terms of a refined Markov random field (MRF) model. Multiscale edge features are derived to overcome the bias generally contributed by the presence of edge pixels during the MRF classification process. The smoothness weighting parameter for the refined MRF model is chosen based on the probability histogram analysis of those edge pixels. The maximum a posteriori marginal (MPM) algorithm is used to search for the solution. The joining of texture with spectral data produces a significant enhancement in classification accuracy. The refined MRF model with a soft-version line process, in comparison with the traditional MRF model, successfully restricted the over-smoothing commonly found, and simultaneously improved the classification accuracy and visual interpretation.
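A bare-bones GLCM computation for a single offset, together with two of the texture features commonly joined with spectral data in such a classifier, might look like this (the quantization level and offset are illustrative choices, not those used in the paper):

    import numpy as np

    def glcm_features(img, levels=16, dx=1, dy=0):
        """Gray level co-occurrence matrix for one offset, plus contrast and
        homogeneity. img is a 2-D array of intensities."""
        q = np.digitize(img, np.linspace(img.min(), img.max(), levels + 1)[1:-1])
        glcm = np.zeros((levels, levels))
        a = q[:q.shape[0] - dy, :q.shape[1] - dx]      # reference pixels
        b = q[dy:, dx:]                                # neighbors at the chosen offset
        np.add.at(glcm, (a.ravel(), b.ravel()), 1)
        glcm /= glcm.sum()                             # normalize to co-occurrence probabilities
        i, j = np.indices(glcm.shape)
        contrast = np.sum(glcm * (i - j) ** 2)
        homogeneity = np.sum(glcm / (1.0 + (i - j) ** 2))
        return glcm, contrast, homogeneity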
This paper evaluates the performance of five cluster validity indices, previously presented in the literature, for the Fuzzy C-Means (FCM) clustering algorithm. The first two indices, the Fuzzy Partition Coefficient (PC) and the Fuzzy Partition Entropy Coefficient (PEC), select the number of clusters for which the fuzzy partition is more “crisp-like” or less fuzzy. The other three, the Fuzzy Davies-Bouldin Index (FDB), the Xie-Beni Index (XB), and Index I, choose the number of clusters which maximizes the inter-cluster separation and minimizes the within-cluster scatter. A modification to these three indices is proposed based on the Bhattacharyya distance between clusters. The results show that this modification improves upon the performance of Index I. On the data sets presented in this paper, the modifications of indices FDB and XB also performed adequately.
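The Bhattacharyya distance used in the proposed modification has a standard closed form when each cluster is modeled as a Gaussian; the sketch below shows that quantity, which presumably replaces the Euclidean centroid separation in the three indices.

    import numpy as np

    def bhattacharyya_distance(mu1, cov1, mu2, cov2):
        """Bhattacharyya distance between two Gaussian-modeled clusters."""
        cov = 0.5 * (cov1 + cov2)
        diff = mu1 - mu2
        term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
        term2 = 0.5 * np.log(np.linalg.det(cov) /
                             np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
        return term1 + term2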
Multispectral or hyperspectral image processing has been studied as a possible approach to automatic target recognition (ATR). Hundreds of spectral bands may provide high data redundancy, compensating for the low contrast in medium wavelength infrared (MWIR) and long wavelength infrared (LWIR) images. Thus, the combination of spectral (image intensity) and spatial (geometric feature) information analysis could produce a substantial improvement. Active contours provide segments with continuous boundaries, while edge detectors based on local filtering often provide discontinuous boundaries. Segmentation by active contours depends on the geometric features of the object as well as the image intensity. However, the application of active contours to multispectral images has been limited to simply textured images with a small number of frames. This paper presents a supervised active contour model, which is applicable to vector-valued images with non-homogeneous regions and a large number of frames. In the training stage, histogram models of target classes are estimated from sample vector-pixels. In the test stage, contours are evolved based on two different metrics: the histogram models of the corresponding segments and the histogram models estimated from sample target vector-pixels. The proposed method integrates segmentation and model-based pattern matching using supervised segmentation and a multi-phase active contour model, while traditional methods apply pattern matching only after segmentation. The proposed algorithm is implemented with both synthetic and real multispectral images, and shows desirable segmentation and classification results even in images with non-homogeneous regions.
The classification of pixels in hyperspectral imagery is often made more challenging by the availability of only small numbers of samples within training sets. Indeed, it is often the case that the number of training samples per class is smaller, sometimes considerably smaller, than the dimensionality of the problem. Various techniques may be used to mitigate this problem, with regularized discriminant analysis being one method, and schemes which select subspaces of the original problem being another. This paper is concerned with the latter class of approaches, which effectively make the dimensionality of the problem sufficiently small that conventional statistical pattern recognition techniques may be applied. The paper compares classification results produced using three schemes that can tolerate very small training sets. The first is a conventional feature subset selection method using information from scatter matrices to choose suitable features. The second approach uses the random subspace method (RSM), an ensemble classification technique. This method builds many 'basis' classifiers, each using a different randomly selected subspace of the original problem. The classifications produced by the basis classifiers are merged through voting to generate the final output. The final method also builds an ensemble of classifiers, but uses a smaller number to span the feature space in a deterministic way. Again, voting is used to merge the individual classifier outputs. In this paper the three feature selection methods are used in conjunction with a variant of the piecewise quadratic classifier. This classifier type is known to produce good results for hyperspectral pixel classification when the training sample sizes are large. The data examined in the paper is the well-known AVIRIS Indian Pines image, a largely agricultural scene containing some difficult-to-separate classes. Removal of absorption bands has reduced the dimensionality of the data to 200. A two-class classification problem is examined in detail to determine the characteristic performance of the classifiers. In addition, more realistic 7-, 13- and 17-class problems are also studied. Results are calculated for a range of training set sizes and a range of feature subset sizes for each classifier type. Where the training set sizes are large, results produced using the selected feature set and a single classifier outperform the ensemble approaches, and tend to continue to improve as the number of features is increased. For the critical per-class sample size, of the order of the dimensionality of the problem, results produced using the selected feature set outperform the random subspace method for all but the largest subspace sizes attempted. For the smaller training samples the best performance is returned by the random subspace method, with the alternative ensemble approach producing competitive results for a smaller range of subspace sizes. The limited performance of the standard feature selection approach for very small samples is a consequence of the poor estimation of the scatter matrices. This, in turn, causes the best features to be missed from the selection. The ensemble approaches used here do not rely on these estimates, and the high degree of correlation between neighboring features in hyperspectral data allows a large number of 'reasonable' classifiers to be produced. The combination of these classifiers is capable of producing a robust output even in very small sample cases.
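A compact sketch of the random subspace ensemble described above is given below, using scikit-learn's regularized quadratic classifier as a stand-in for the paper's piecewise quadratic classifier; the ensemble size, subspace dimension, and regularization value are arbitrary choices.

    import numpy as np
    from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

    def rsm_predict(X_train, y_train, X_test, n_classifiers=25, subspace_dim=10, seed=0):
        """Random subspace ensemble: each base classifier sees a random subset of
        bands; the final label is a majority vote. Assumes integer class labels."""
        rng = np.random.default_rng(seed)
        votes = []
        for _ in range(n_classifiers):
            bands = rng.choice(X_train.shape[1], subspace_dim, replace=False)
            clf = QuadraticDiscriminantAnalysis(reg_param=0.1)
            clf.fit(X_train[:, bands], y_train)
            votes.append(clf.predict(X_test[:, bands]))
        votes = np.array(votes)
        # majority vote across base classifiers for each test pixel
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)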
Multi-, Hyper-, and Ultra-Spectral Sensor Characterization and Calibration I
The COMPact Airborne Spectral Sensor (COMPASS) hyperspectral imager (HSI) developed at the Army Night Vision and Electronic Sensors Directorate (NVESD) operates in the solar reflective region. The fundamental advance of the COMPASS instrument is the ability to capture 400 nm to 2350 nm on a single focal plane, eliminating boresighting and co-registration issues characteristic of dual FPA instruments for visible and SWIR regions. This paper presents a calibration procedure for COMPASS including spectral band profiles and radiometric calibration. These procedures expand on successful calibration procedures used for the Night Vision Infrared Spectrometer (NVIS) system. A high-resolution monochromator was used to map the band center and bandwidth profiles across the FPA with an accuracy goal of ±0.5 nm using several different illumination configurations. Although optical distortions are below previous measurement capabilities, accurate band profiles provide additional data to map potential distortions within the system. Radiometric calibration was performed with a NIST-traceable flood source. Test results are presented showing a well-behaved system with an average spectral bandwidth of 8.0 nm ±0.5 nm over the instrument spectral range.
The Compact Airborne Spectral Sensor (COMPASS) has been flying for over a year and has gathered data in support of a variety of missions. While COMPASS is an array imaging spectrometer, the quality of the spectrometer optics and the alignment of the instrument during assembly have removed many of the sources of error often present in array imaging spectrometers, such as spectral band mis-registration, smile and keystone. Since COMPASS began flying, we have been studying new procedures for improving the calibration of the COMPASS sensor and array imaging spectrometers in general. The use of the on-board calibration sources was compared to using a combination of on-board sources and a scene average, and also compared to using laboratory calibration sources. In addition, different methods for finding and removing bad detectors were investigated. The coupling of the bad detector replacement procedure with the flatfielding was also studied. We have found that bracketing the light levels in the scene is the key to reducing the effect of bad detectors. An effective method of bracketing the scene is to use the scene average for each detector as the white point and the on-board dark measurement as the dark point. Alternative methods using multiple white sources are also attractive. Several examples from collected scene data will be presented and evaluated in terms of image quality in particular bands and Principal Components.
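The scene-average bracketing idea could be prototyped per band roughly as below; the bad-detector test and interpolation rule are placeholders, not the calibration procedure actually used for COMPASS.

    import numpy as np

    def flatfield_with_scene_average(frame, dark_frame, bad_thresh=4.0):
        """Illustrative two-point (dark / scene-average) flat-fielding for one band
        of a pushbroom frame shaped (lines, detectors); bad detectors are flagged
        by an outlying gain and filled by interpolation across the detector axis."""
        white = frame.mean(axis=0)                       # scene average per detector
        gain = white - dark_frame                        # per-detector response span
        med = np.median(gain)
        mad = np.median(np.abs(gain - med)) + 1e-12
        bad = np.abs(gain - med) > bad_thresh * 1.4826 * mad
        corrected = (frame - dark_frame) / np.where(bad, np.nan, gain)
        cols = np.arange(frame.shape[1])
        for row in corrected:                            # fill flagged detectors line by line
            nans = np.isnan(row)
            row[nans] = np.interp(cols[nans], cols[~nans], row[~nans])
        return corrected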
The Multispectral Thermal Imager Satellite (MTI), launched on March 12, 2000, is a multispectral pushbroom system that acquires 15 unique spectral bands of data from 0.45-10.7 microns, with resolutions of 5 m for the visible bands and 20 m for the infrared. Scene data are collected on three separate sensor chip assemblies (SCAs) mounted on the focal plane. The process of image registration for MTI satellite imagery therefore requires two separate steps: (1) the multispectral data collected by each SCA must be coregistered and (2) the SCAs must be registered with respect to each other. An automated algorithm was developed to register the MTI imagery. This algorithm performs a phase correlation on edge-maps generated from paired bands of data and then spatially filters the result to calculate the relative shifts between bands. The process is repeated on every combination of band pairs to generate a vector of coregistration results for each SCA. The three SCAs are then registered to each other using a similar process operating on just one spectral band. The resulting registration values are used to produce a linearly shifted un-resampled coregistered image cube. This study shows the results of 791 image registration attempts using the EdgeReg registration code and compares them to a perfect reference data set of the same images registered manually.
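The core phase-correlation step can be sketched as follows (here applied to two images directly, whereas the registration code described above applies it to edge maps and then spatially filters the result):

    import numpy as np

    def phase_correlation_shift(a, b):
        """Estimate the integer (row, col) shift between two equal-size images."""
        Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
        cross = Fa * np.conj(Fb)
        cross /= np.abs(cross) + 1e-12               # keep only the phase
        corr = np.abs(np.fft.ifft2(cross))
        peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
        size = np.array(a.shape, dtype=float)
        wrap = peak > size / 2                       # convert wrap-around peaks to negative shifts
        peak[wrap] -= size[wrap]
        return peak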
A procedure has been developed to measure the spatial mis-registration of the bands of imaging spectrometers using data acquired by the sensor in flight. This is done for each across-track pixel and for all bands, thus allowing the measurement of the instrument's 'keystone' and related inter-band spatial shifts. The procedure uses spatial features present in the scene. The inter-band spatial relationship determinations are made by correlating these features as detected by the various bands. Measurements have been made for a number of instruments including the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), Hyperion, Compact Airborne Spectrographic Imager (casi), SWIR (Short Wave Infra-Red) Full Spectrum Imager (SFSI), and Aurora. The measurements on AVIRIS data were performed as a test of the procedure; since AVIRIS is a whisk-broom scanner it is expected to be free of keystone. The airborne Aurora, casi, and SFSI and the satellite sensor Hyperion are all pushbroom instruments, exhibiting varying degrees of keystone. The potential impact of keystone upon spectral similarity measures is examined.
This paper is a collaborative effort between the US EPA's Technology Applications and Research & Development groups to generate commercial interest in the development of cost-effective sensors appropriate for the requirements of organizations in the CIVIL sector chartered with providing emergency first response support. The US EPA Region-7 Technology Applications Group maintains the Airborne Spectral Photometric Environmental Collection Technology (ASPECT) System. This system provides the US EPA with operational 24-hour, seven-day-a-week emergency response remote chemical detection capability. Data collected by the ASPECT system along with the first responder requirements will be encapsulated in a manner suitable for guiding the efforts of commercial sensor system manufacturers (e.g., effluents of interest, bounding concentrations/abundances, bounding environmental background parameters, sensor radiometric performance requirements for high-confidence response/action, operational readiness timelines, etc.). This paper is intended to provide the requirements and to initiate and guide the synthesis process for sensor(s) and instrument packages providing sufficient area coverage, spectral resolution, and sensitivity to detect, selectivity to identify, image, and map hazardous chemical plumes. It is believed this effort will facilitate cost-effective and timely state-of-the-art sensor/system technology development suitable for CIVIL emergency response needs in compact automated packages.
The movements, structure, and dynamics of a heated stack plume are revealed by thermocouple and spectrometer measurements. A three-dimensional thermocouple array provides temperature data at two-second intervals. An animated display allows the temperature and position variations of a plume to be visualized, and demonstrates that the plume parameters have significant temporal variation at positions more than a few feet from the stack. Plots of temperature vs. downstream position show the transition from the "near field" to the "far field" regime. For a sideways-directed momentum plume, the temperature varies as the reciprocal of the downstream position. These results are consistent with published data and with theoretical expectations. Spectrometer data suggest that the shape of the 4.26-micron CO2 transition depends on radial position in the plume. Spectra taken near the plume edge are relatively flat-topped, whereas measurements taken near the centerline show a line reversal due to absorption. These results are consistent with a plume structure involving a hot, optically thick core surrounded by an envelope of cooler gas.
Results are reported from a continuing program of research into the physics and spectroscopy of heated stack plumes. Simultaneous thermocouple and spectrometer measurements are used to study sideways-directed plumes from an internal combustion engine and a propane-burning plume generator. A previously-reported result, that the ratio of optically thin signals from two CO2 transitions can be used to determine plume temperature, is confirmed by comparison of thermocouple and spectrometer measurements over a wide range of temperatures. The basic physics of molecular emission and absorption of radiation is discussed and is used to calibrate the relationship between the spectroscopic ratio and plume temperature. The result is a spectroscopic plume temperature diagnostic that contains no adjustable parameters, and can be calibrated by use of published absorption spectra. Data relating to the accuracy of the technique are discussed.
Remote sensing using midwave and longwave spectroscopy has been shown to be capable of detecting gaseous effluents from a stack plume release. In general, measurements have been made through the plume cross-section. This paper discusses experiments and measurements conducted to examine the relative merits of viewing the plume's cross-section or viewing the plume along the axis of the plume flow. While viewing along the plume's flow axis increases the path length, additional factors such as wind variance and the effects of optically thick cells may begin to appear.
This work examines the process of detecting and quantifying volcanic SO2 plumes using the Airborne Hyperspectral Infrared Imager (AHI) developed by the University of Hawaii. AHI was flown over Pu'u'O'o Vent of Kilauea Volcano in Hawaii to collect data on SO2 plumes. AHI is a LWIR pushbroom imager sensitive to the 7.5-11.5 μm region. Spectral analysis and mapping tools were used to identify and classify the SO2 plume in both radiance and emissivity space. MODTRAN was used to model the radiance observed by the sensor as it looked to the ground through an SO2 plume. A spectral library of radiance profiles with varying ground surface temperatures and SO2 concentrations was developed, and the AHI data were fitted to the model profiles. Reasonable values of SO2 emission were obtained.
Spectral Phenomenology, Measurements, and Experimentation
The scan mechanism of the HyperSpecTIR hyperspectral instrument has been modified to allow BRDF measurements from an airborne platform. The HyperSpecTIR is a flexible, airborne hyperspectral imager capable of on-the-fly programmability. Such measurements afford the opportunity to study geometric and spectral properties of natural scenes such as fields and canopies, as well as man-made substances such as composite materials and paints. As a proof-of-concept study for the BRDF measurement method, data were collected during flight operations near Phoenix, Arizona. The measurements were performed in the principal plane and included the solar hot-spot. Detailed descriptions of the instrument and data collection methods are presented. The collected data are analyzed and compared to BRDF models of the solar hot-spot.
A simple model for the spectral radiance of various objects in the field was developed at IARD. The model was verified in a field experiment in which the spectral radiances from three types of natural objects, in the 8-12 micron wavelength band, were measured. The results were compared with spectra that were calculated according to the model. At first, a rather poor agreement between the results was observed. Calculating the in-situ spectral emissivity values of the objects by application of a temperature-emissivity method alleviated this problem considerably. Further analysis has shown that most of the residual disagreement between the results was due to incorrect predictions of the path radiance that were made by the MODTRAN 4 code. Good agreement between the calculated and the measured radiance values was achieved after recalculating the path radiance by a physically based method. Another interesting result was that the influence of the sky radiation was not negligible and had to be dealt with properly.
Spectral emissivity measurements gathered in the longwave infrared region of the spectrum during a recent airborne hyperspectral data collection experiment indicated that the spectral emissivity of certain organic polymers changed by as much as 10% throughout the day. Inorganic and many other organic materials that were measured at the same time during this experiment showed no change. As this was an unexpected event, a subsequent experiment was designed to make emissivity measurements of several organic and inorganic materials over a 24-hour period/diurnal cycle. The results from this experiment confirmed that certain materials showed a significant spectral emissivity variation over this period. This paper will discuss some possible explanations for this variation and emphasize the significance and implications of this fact on the integrity of spectral emissivity measurements and spectral libraries being constructed in this wavelength region.
Atmospheric Measurement Instrumentation and Remote Sensing
The Atmospheric Infrared Sounder (AIRS) is an ultraspectral infrared instrument on the EOS Aqua Spacecraft, launched on May 4, 2002. AIRS has 2378 infrared channels ranging from 3.7 μm to 15.4 μm and a 13.5 km footprint. AIRS, in conjunction with the Advanced Microwave Sounding Unit (AMSU), produces temperature profiles with 1 K/km accuracy on a global scale, as well as water vapor profiles and trace gas amounts for CO2, CO, SO2, ozone, and methane. AIRS will be used for weather forecasting and studies of global climate change. The technology developed on the AIRS project includes advanced grating spectrometer optics, long-wavelength cutoff HgCdTe infrared detectors, and active cooling. A status of the AIRS data products is presented. Advancements in the last decade allow the AIRS measurement to be made with better than 1 km spatial resolution. The higher spatial resolution will improve regional forecasting and reduce computational noise in trace gas measurements.
AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next-generation polar-orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU/HSB are twice daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud-related parameters including OLR. The sounding goals of AIRS are to produce 1 km tropospheric layer mean temperatures with an rms error of 1 K, and 1 km tropospheric layer precipitable water with an rms error of 20%, in cases with up to 80% effective cloud cover. Pre-launch simulation studies indicated that these results should be achievable. Minor modifications have been made to the pre-launch retrieval algorithm, as described in this paper. Sample fields of parameters retrieved from AIRS/AMSU/HSB data are presented and temperature profiles are validated as a function of retrieved effective fractional cloud cover. As in simulation, the degradation of retrieval accuracy with increasing cloud cover is small. Select fields are also compared to those contained in the ECMWF analysis, done without the benefit of AIRS data, to demonstrate information that AIRS can add to that already contained in the ECMWF analysis. Assimilation of AIRS temperature soundings in up to 80% cloud cover for the month of January 2003 into the GSFC FVSSI data assimilation system resulted in improved 5 day forecasts globally, both with regard to anomaly correlation coefficients and the prediction of location and intensity of cyclones.
This paper outlines the requirements and methodologies for development of the NOAA/NASA GOES-R Hyperspectral Environmental Suite (HES) instrument. The HES instrument is currently being developed within the framework of the GOES Program to fulfill the future needs and requirements of the National Environmental Satellite, Data, and Information Service (NESDIS). As an integral component of the GOES-R series satellites, HES will provide measurements of the traditional temperature and water vapor vertical profiles with higher accuracy and vertical resolution and may provide a back-up imaging mode. An overview of the HES requirements is presented at the GOES-R mission level, spacecraft level, and instrument level, with an overview of the GOES-R project methodologies to achieve the advanced functional objectives of the GOES Program partnership.
The GOES satellites will fly a Hyperspectral Environmental Suite (HES) on GOES-R in the 2012 timeframe. The approximately 1500 spectral channels (technically ultraspectral), leading to improved vertical resolution, and approximately five times faster coverage rate planned for the sounder in this suite will greatly exceed the capabilities of the current GOES series instrument with its 18 spectral channels. In the GOES-R timeframe, frequent measurements afforded by geostationary orbits will be critical for numerical weather prediction models. Since the current GOES soundings are assimilated into numerical weather prediction models to improve the validity of model outputs, particularly in areas with little radiosonde coverage, this hyperspectral capability in the thermal infrared will significantly improve sounding performance for weather prediction in the western hemisphere, while providing and enhancing other products. Finer spatial resolution is planned for mesoscale observation of water vapor variations. The improvements over the previous GOES sounders and a primary difference from other planned instruments stem from two-dimensional focal plane array availability. These carry an additional set of challenges in terms of instrument specifications, which will be discussed. As a suite, HES is planned with new capabilities for coastal ocean coverage with the goal of including open ocean coverage. These new planned imaging applications, which will be either multispectral or hyperspectral, will also be discussed.
The MODTRAN5 radiation transport (RT) model is a major advancement over earlier versions of the MODTRAN atmospheric transmittance and radiance model. New model features include (1) finer spectral resolution via the Spectrally Enhanced Resolution MODTRAN (SERTRAN) molecular band model, (2) a fully coupled treatment of auxiliary molecular species, and (3) a rapid, high fidelity multiple scattering (MS) option. The finer spectral resolution improves model accuracy especially in the mid- and long-wave infrared atmospheric windows; the auxiliary species option permits the addition of any or all of the suite of HITRAN molecular line species, along with default and user-defined profile specification; and the MS option makes feasible the calculation of Vis-NIR databases that include high-fidelity scattered radiances.
The compensation for atmospheric effects in the VNIR/SWIR has reached a mature stage of development with many algorithms available for application (ATREM, FLAASH, ACORN, etc.). Compensation of LWIR data is the focus of a number of promising algorithms. A gap in development exists in the MWIR, where little or no atmospheric compensation work has been done, yet increased interest in MWIR applications is emerging. To obtain atmospheric compensation over the full spectrum (visible through LWIR), a better understanding of the radiative effects in the MWIR is needed. The MWIR is characterized by a unique combination of reduced solar irradiance and low thermal emission (for typical emitting surfaces), both providing relatively equal contributions to the daytime MWIR radiance. In the MWIR and LWIR, the compensation problem can be viewed as two interdependent processes: compensation for the effects of the atmosphere and the uncoupling of the surface temperature and emissivity. The former requires calculations of the atmospheric transmittance due to gases, aerosols, and thin clouds and the path radiance directed towards the sensor (both solar scattered and thermal emissions in the MWIR). A framework for a combined MWIR/LWIR compensation approach is presented where both scattering and absorption by atmospheric particles and gases are considered.
FLAASH (Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes) is a first-principles atmospheric correction algorithm for visible to shortwave infrared (SWIR) hyperspectral data. The algorithm consists of two main steps. The first is retrieval of atmospheric parameters, visibility (which is related to the aerosol type and distribution) and column water vapor. The second step is solving the radiation transport equation for the given aerosol and column water and transformation to surface reflectance. The focus of this paper is on the FLAASH water vapor retrieval algorithm. Modeled radiance values in the spectral region of one water vapor absorption feature are calculated from MODTRAN 4 using several different water vapor amounts and are used to generate a Look-Up Table (LUT). The water band typically used is 1130 nm but either the 940 or 820 nm band may also be used. Measured radiance values are compared to the LUT to determine the column water vapor amount for each pixel in the scene. We compare the results of water retrievals for each of these bands and also the results of their corresponding reflectance retrievals.
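The band-ratio/LUT inversion can be illustrated with a toy table; the numbers below are placeholders and the ratio definition is a simplification of FLAASH's actual fitting of the 1130 nm (or 940/820 nm) absorption feature against MODTRAN-generated radiances.

    import numpy as np

    # Hypothetical look-up table: column water vapor (g/cm^2) versus a band-ratio
    # measure of the 1130 nm absorption depth, as would be generated from MODTRAN
    # runs at several water vapor amounts.  Values are placeholders.
    lut_water = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0])
    lut_ratio = np.array([0.92, 0.85, 0.79, 0.74, 0.66, 0.60])

    def retrieve_column_water(radiance_in_band, radiance_shoulders):
        """Invert the LUT for one pixel: form the absorption-band / shoulder ratio
        and interpolate the corresponding column water vapor amount."""
        ratio = radiance_in_band / radiance_shoulders
        # lut_ratio decreases with water, so reverse both arrays for np.interp
        return np.interp(ratio, lut_ratio[::-1], lut_water[::-1])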
Optimal Spectral Sampling (OSS) is a new approach to radiative transfer modeling which addresses the need for algorithm speed, accuracy, and flexibility. The OSS technique allows for the rapid calculation of radiance for any class of multispectral, hyperspectral, or ultraspectral sensors at any spectral resolution operating in any region from microwave through UV wavelengths by selecting and appropriately weighting the monochromatic points that contribute over the sensor bandwidth. This allows for the calculation to be performed at a small number of spectral points while retaining the advantages of a monochromatic calculation such as exact treatment of multiple scattering and/or polarization. The OSS method is well suited for remote sensing applications which require extremely fast and accurate radiative transfer calculations: atmospheric compensation, spectral and spatial feature extraction, multi-sensor data fusion, sub-pixel spectral analysis, qualitative and quantitative spectral analysis, sensor design and data assimilation. The OSS was recently awarded a U.S. Patent (#6,584,405) and is currently used as part of the National Polar-Orbiting Operational Environmental Satellite System (NPOESS) CrIS, CMIS, and OMPS-IR environmental parameter retrieval algorithms. This paper describes the theoretical basis and development of OSS and shows examples of the application and validation of this technique for a variety of different sensor types and applications.
The Cross-track Infrared and Microwave Sounder Suite (CrIMSS) consists of a microwave radiometer and an infrared interferometer and is scheduled to fly on the NPP and NPOESS satellites. The sensors are designed for the accurate measurement of atmospheric pressure, temperature and moisture profiles. This paper presents an overview of the CrIMSS sensors and the retrieval algorithm. Validation of the algorithm with current satellite sounder data will also be presented.
This paper presents a comparison of classification results when hyperspectral imagery is pre-processed by spectral resolution enhancement algorithms and/or atmospheric correction algorithms. Different combinations of pre-processing options were investigated. Overall, resolution enhancement does improve classification accuracy both with and without atmospheric correction. Furthermore, classification accuracies obtained from radiance data enhanced by resolution enhancement techniques were higher than those obtained from atmospherically corrected data, even when the corrected data were also enhanced. AVIRIS data from the Indian Pines test site in NW Indiana were used to illustrate the different concepts.
The spectral exploitation of hyperspectral imaging (HSI) data is based on their representation as vectors in a high-dimensional space defined by a set of orthogonal coordinate axes, where each axis corresponds to one spectral band. The large number of bands, which ranges from 100 to 400 in existing sensors, makes the storage, transmission, and processing of HSI data a challenging task. A practical way to facilitate these tasks is to reduce the dimensionality of HSI data without significant loss of information. The purpose of this paper is twofold: first, to provide a concise review of the various approaches that have been used to reduce the dimensionality of HSI data as a preprocessing step for compression, visualization, classification, and detection applications; second, to show that the nonlinear and non-normal structure of HSI data can often be exploited more effectively by using a nonlinear dimensionality reduction technique known as local principal component analyzers. The performance of the various techniques is illustrated using HYDICE and AVIRIS HSI data.
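As a rough illustration of the local-analyzer idea, the sketch below clusters the pixel spectra with k-means and fits a separate low-dimensional PCA within each cluster; the library choice (scikit-learn) and parameter values are assumptions, not the implementation evaluated in the paper.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    def local_pca_reduce(cube, n_clusters=8, n_components=10):
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands).astype(float)
        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
        reduced = np.zeros((X.shape[0], n_components))
        models = []
        for k in range(n_clusters):
            # One low-dimensional principal component analyzer per cluster of spectra.
            pca = PCA(n_components=n_components).fit(X[labels == k])
            reduced[labels == k] = pca.transform(X[labels == k])
            models.append(pca)
        return reduced.reshape(rows, cols, n_components), labels.reshape(rows, cols), models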
Hyperspectral imaging (HSI) sensors provide imagery with hundreds of spectral bands, typically covering VNIR and/or SWIR wavelengths. This high spectral resolution aids applications such as terrain classification and material identification, but it can also produce imagery that occupies well over 100 MB, which creates problems for storage and transmission. This paper investigates the effects of lossy compression on a representative HSI cube, with background classification serving as an example application. The compression scheme first performs principal components analysis spectrally, then discards many of the lower-importance principal-component (PC) images, and then applies JPEG2000 spatial compression to each of the individual retained PC images. The assessment of compression effects considers both general-purpose distortion measures, such as root mean square difference, and statistical tests for deciding whether compression causes significant degradations in classification. Experimental results demonstrate the effectiveness of proper PC-image rate allocation, which enabled compression at ratios of 100-340 without producing significant classification differences. Results also indicate that distortion might serve as a predictor of compression-induced changes in application performance.
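A sketch of the spectral PCA stage and the root-mean-square distortion measure is given below; the JPEG2000 spatial coding of each retained PC image and the rate-allocation step are omitted, so this only illustrates the PC truncation and the distortion check.

    import numpy as np

    def spectral_pca_truncate(cube, n_keep):
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands).astype(float)
        mu = X.mean(axis=0)
        Xc = X - mu
        cov = np.cov(Xc, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)
        V = eigvecs[:, ::-1][:, :n_keep]             # principal directions, largest variance first
        pcs = Xc @ V                                 # retained PC images (flattened spatially)
        recon = pcs @ V.T + mu                       # reconstruction from retained PCs only
        rms = np.sqrt(np.mean((recon - X) ** 2))     # general-purpose distortion measure
        return pcs.reshape(rows, cols, n_keep), recon.reshape(cube.shape), rms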
Hyperspectral Imagery (HSI) data are inherently more expensive and difficult to acquire than Multispectral Imagery (MSI) data. Most HSI acquisitions are currently limited to relatively low spatial resolution, with the inherent limitation that they are not able to spatially resolve many important targets. Higher spatial resolution MSI data are available; however, they typically do not provide sufficient spectral resolution to allow unambiguous identification of specific targets. We describe a new method using predictive modeling of combined HSI/MSI data to generate simulated high-spatial-resolution, high-spectral-resolution image data. Targets observed by the low-spatial-resolution HSI sensor are spectrally resolved through linear spectral unmixing, but not spatially resolved. Targets observed by the higher-spatial-resolution MSI sensor are partially spatially resolved, but the lower spectral resolution does not allow target identification. The fused data are used to predict the subpixel locations of HSI-identified subpixel targets within the superior MSI spatial resolution. Comparison of individual HSI endmember spectra to MSI signatures at multiple MSI pixels allows mapping of the spatial locations of specific HSI endmembers, creating a simulated HSI dataset that allows location of spectrally resolved targets. The combined data incorporate the best features of HSI and MSI to allow improved target detection and characterization.
This paper describes a research project evaluating the success of spatially sharpening spaceborne hyperspectral imagery with higher spatial resolution spaceborne multispectral imagery. In November 2001, NASA launched its first operational hyperspectral sensor, Hyperion, on the Earth Observing-1 spacecraft. Although designed to study the utility of hyperspectral imagery collected from a space-based platform, Hyperion is limited for certain applications by its spatial resolution. Successful spatial sharpening of Hyperion imagery with higher spatial resolution spaceborne multispectral imagery such as ASTER or QuickBird should further enhance the value of Hyperion imagery. In a previous two-year research initiative, the primary author demonstrated the utility of sharpening airborne hyperspectral imagery with higher spatial resolution airborne multispectral imagery. In its first year, that study simulated the sharpening process by utilizing a single airborne hyperspectral dataset to derive both a lower spatial resolution hyperspectral cube and a higher spatial resolution multispectral dataset. In its second year, the study utilized two airborne spectral collections on different platforms for the hyperspectral and multispectral datasets, and successfully demonstrated spatial sharpening despite geometry differences between the two platforms. The next logical step is to validate spatial sharpening of space-based hyperspectral imagery using higher resolution spaceborne multispectral imagery. The current research project evaluated the spatial and spectral utility of Hyperion imagery after sharpening with higher spatial resolution spaceborne multispectral data from the ASTER or QuickBird satellites.
An algorithm for hyperspectral edge detection is presented. HySPADE (hyperspectral/spatial detection of edges) simultaneously utilizes spectral and spatial information. HySPADE accepts as input a hyperspectral imaging (HSI) data cube; VNIR/SWIR radiance and reflectance spectra are used here. A transformed data cube is constructed by finding the spectral angle (SA) between each pixel in the original data cube and every other pixel in the same cube. Thus, band 1 of the transformed cube contains the SA of the spectrum at pixel location (1,1) of the original cube with every other spectrum. Band 2 of the transformed cube contains the SA value of the pixel at location (1,2) with every other spectrum in the cube, and so on. In practice, HySPADE is applied to an N x N window of the original HSI cube; thus the transformed SA cube has dimensions of N samples x N lines x (NxN) bands. Each spectrum in the transformed SA cube contains information about spatial changes in the composition of materials as they occur within the scene: the band number of each spectrum in the SA cube is easily translated into the (sample, line) address in the original HSI cube. Each spectrum in the transformed SA cube is analyzed with a one-dimensional edge detector; a first-order finite difference is used. Band-number coordinates of detected edges are transformed back into (sample, line) addresses in the original HSI cube and mapped to form an edge-detected image. HySPADE output is shown. Extensions of the HySPADE concept are suggested, as are applications for HySPADE in HSI analysis and exploitation.
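A compact sketch of this construction for a single N x N window is shown below; the difference threshold and the mapping of a detected difference back to a single pixel are simplified choices, not the exact HySPADE bookkeeping.

    import numpy as np

    def spectral_angle(a, b):
        c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(c, -1.0, 1.0))

    def hyspade_edges(window, threshold):
        # window: (N, N, bands) HSI chip; returns an (N, N) edge-vote map.
        N = window.shape[0]
        spectra = window.reshape(N * N, -1)
        # SA cube: row k holds the angle of pixel k with every pixel in the window.
        sa = np.array([[spectral_angle(spectra[k], spectra[j]) for j in range(N * N)]
                       for k in range(N * N)])
        # First-order finite difference along each SA "spectrum"; large jumps vote for
        # an edge at the pixel whose band index precedes the jump.
        edge_votes = np.zeros(N * N)
        diffs = np.abs(np.diff(sa, axis=1))
        for _, band in np.argwhere(diffs > threshold):
            edge_votes[band] += 1
        return edge_votes.reshape(N, N)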
Multi-, Hyper-, and Ultra-Spectral Sensor Characterization and Calibration II
The Cross-track Infrared Sounder (CrIS), like most Fourier transform spectrometers, can be sensitive to mechanical disturbances during the time spectral data are collected. The Michelson interferometer within the spectrometer modulates input radiation at a frequency equal to the product of the wavenumber of the radiation and the constant optical path difference (OPD) velocity associated with the moving mirror. The modulation efficiency depends on the angular alignment of the two wavefronts exiting the spectrometer. Mechanical disturbances can cause errors in the alignment of the wavefronts, which manifest as noise in the spectrum. To mitigate these effects, CrIS will employ a laser to monitor alignment and dynamically correct the errors. Additionally, a vibration isolation system will damp disturbances imparted to the sensor from the spacecraft. Despite these efforts, residual noise may remain under certain conditions. Through simulation of CrIS data, we demonstrated an algorithmic technique to correct residual dynamic alignment errors. The technique requires only the time-dependent wavefront angle, sampled coincidentally with the interferogram, and the second derivative of the erroneous interferogram as inputs to compute the correction. The technique can function with raw interferograms on board the spacecraft, or with decimated interferograms on the ground. We were able to reduce the dynamic alignment noise by approximately a factor of ten in both cases. Performing the correction on the ground would require an increase in data rate of 1-2% over what is currently planned, in the form of 8-bit digitized angle data.
The operation of tomographic spectral imaging devices is analogous to problems in image restoration, with similar restoration techniques. Generally the problem is cast as restoration of a sparse, singular kernel, where accuracy and computational speed are trade-off issues. While there is much conventional wisdom concerning the ability to restore such systems, experience has shown that the situation is often less bleak than imagined. Results of the restoration of several tomographic instruments are presented, along with a series of improvements that are the result of both ad hoc numerical techniques and theoretical constraints. The influence of physical hardware on restoration results is discussed, as well as counterintuitive lessons learned from a multi-year program to develop efficient restoration techniques for tomographic imaging spectrometers.
A test was undertaken in Tucson, AZ to simultaneously measure the four components of the Stokes vector with a Lenticular Prismatic Polarization Integrating (LPPI) Filter. Simultaneous measurements were taken with a tomographic hyperspectral imaging instrument. Data were taken in the visible spectral band of a variety of scenes over a diurnal period to find portions of objects that possessed a relatively high degree of polarization (DOP; total, linear, and circular). The spectral content of both natural and man-made objects was analyzed, as was the spectral content of the areas possessing a relatively high DOP, to ascertain whether relatively high DOP objects have similar spectra. The objective of the study is the development of techniques that enable estimation of the DOP of objects from an analysis of the spectral content alone, thus enabling both multispectral processing and polarization detection without the need for separate polarization instrumentation.
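For reference, the degree-of-polarization quantities examined in the study follow directly from the measured Stokes components (I, Q, U, V); a minimal sketch using the standard definitions:

    import numpy as np

    def degrees_of_polarization(I, Q, U, V):
        dop_total    = np.sqrt(Q**2 + U**2 + V**2) / I   # total DOP
        dop_linear   = np.sqrt(Q**2 + U**2) / I          # degree of linear polarization
        dop_circular = np.abs(V) / I                     # degree of circular polarization
        return dop_total, dop_linear, dop_circular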
The spectrum of most objects in a hyperspectral image is oversampled in the spectral dimension because the images have many closely spaced spectral samples. This oversampling implies that there is redundant information in the image, which can be exploited to reduce the noise and so increase the correct classification percentage. Oversampling techniques have been shown to be useful in the classification of hyperspectral imagery. These previous techniques consist of a lowpass filter in the spectral dimension whose characteristics are chosen based on the average spectral density of many objects to be classified. A better way of selecting the characteristics of the filter is to calculate the spectral density and oversampling of each object and use that to determine the filter. The algorithm proposed here exploits the fact that the system is supervised to determine the oversampling rate, using the training samples for this purpose. The oversampling rate is used to determine the cutoff frequency for each class, and the highest of these is used to filter the whole image. Two-pass approaches, in which each class in the image is filtered with its own filter, were studied, but the increase in performance did not justify the increase in computational load. The results of applying these techniques to data to be classified are presented. The results, using AVIRIS imagery, show a significant improvement in classification performance.
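A hedged sketch of the supervised filter-selection idea is shown below: the spectral (Fourier) content of each class is estimated from its training samples, a cutoff retaining most of that energy is chosen per class, and the highest cutoff is used to lowpass-filter the whole cube along the spectral axis. The energy fraction and filter order are arbitrary assumptions, not the paper's values.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def class_cutoff(training_spectra, energy_frac=0.99):
        # Average power spectral density of the class's training spectra.
        psd = np.mean(np.abs(np.fft.rfft(training_spectra, axis=1)) ** 2, axis=0)
        cum = np.cumsum(psd) / psd.sum()
        k = np.searchsorted(cum, energy_frac)     # first frequency bin reaching the target energy
        return k / (psd.size - 1)                 # normalized cutoff (Nyquist = 1)

    def filter_cube(cube, classes_training, order=4):
        wc = max(class_cutoff(t) for t in classes_training)
        wc = min(max(wc, 1e-3), 0.99)
        b, a = butter(order, wc)                  # lowpass design at the highest class cutoff
        return filtfilt(b, a, cube, axis=-1)      # filter along the spectral dimension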
The NGA has initiated a program to evaluate high resolution commercial sensors developed by Space Imaging, DigitalGlobe, Orbimage and others. Recent evaluations have involved QuickBird panchromatic products including the Basic 1B, Ortho-Ready Standard 2A, orthorectified, and Basic Stereo Pair products. This paper presents the results of an additional geo-positional accuracy evaluation of multispectral QuickBird Ortho-Ready Standard 2A images. These products were compared to a globally distributed set of Ground Control Points (GCPs), and the calculated geo-positional accuracy was compared to the published QuickBird specifications and to the panchromatic Ortho-Ready Standard 2A product. The results of the panchromatic and multispectral bands are compared, and the band-to-band registration of the multispectral arrays is evaluated.
We show that moment invariants of spectral distributions provide region descriptors that are independent of changes in the illumination and atmospheric conditions. These invariants can be computed efficiently for band subsets of hyperspectral data. Moment invariants have several applications in hyperspectral data processing. They can be used for the registration of image regions acquired at different times and for the transformation of images acquired at different times to a canonical invariant representation for comparison. Moment invariants can also be used for the recognition of image regions under unknown conditions. We demonstrate the properties of moment invariants using hyperspectral images synthesized using DIRSIG.
The problems of imagery registration, conflation, fusion, and search require sophisticated and robust methods. An algebraic approach is a promising new option for developing such methods. It is based on algebraic analysis of features represented as polylines. The problem of choosing points when preparing a linear feature for comparison with other linear features is a significant challenge when orientation and scale are unknown. Previously we developed an invariant method known as Binary Structural Division (BSD). It has been shown to be effective in comparing feature structure for specific cases. In cases where a bias of structure variability exists, however, this method performs less well. A new method, Shoulder Analysis (SA), has been developed that enhances point selection and improves the BSD method. This paper describes the use of shoulder values, which compare the actual distance traveled along a feature to the linear distance from the start to the finish of the segment. We show that shoulder values can be utilized within the BSD method and lead to improved point selection in many cases. This improvement allows images of unknown scale and orientation to be correlated more effectively.
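A minimal sketch of the shoulder value as defined above, for one polyline segment (values near 1 indicate a nearly straight segment, larger values a strongly curved one):

    import numpy as np

    def shoulder_value(points):
        # points: (n, 2) array of vertices for one polyline segment.
        steps = np.diff(points, axis=0)
        arc_length = np.sum(np.linalg.norm(steps, axis=1))   # distance traveled along the feature
        chord = np.linalg.norm(points[-1] - points[0])        # straight-line start-to-finish distance
        return arc_length / chord if chord > 0 else np.inf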
We examine the performance of illumination-invariant face recognition in outdoor hyperspectral images using a database of 200 subjects. The hyperspectral camera acquires 31 bands over the 700-1000nm spectral range. Faces are represented by local spectral information for several tissue types. Illumination variation is modeled by low-dimensional spectral radiance subspaces. Invariant subspace projection over multiple tissue types is used for recognition. The experiments consider various face orientations and expressions. The analysis includes experiments for images synthesized using face reflectance images of 200 subjects and a database of over 7,000 outdoor illumination spectra. We also consider experiments that use a set of face images that were acquired under outdoor illumination conditions.
The MCScene code, a high fidelity model for full optical spectrum (UV to LWIR) hyperspectral image (HSI) simulation, will be discussed and its features illustrated with sample calculations. MCScene is based on a Direct Simulation Monte Carlo approach for modeling 3D atmospheric radiative transport, as well as spatially inhomogeneous surfaces including surface BRDF effects. The model includes treatment of land and water surfaces, 3D terrain, 3D surface objects, and effects of finite clouds with surface shadowing. This paper will review the more recent upgrades to the model, including the development of an approach for incorporating direct and scattered thermal emission predictions into the MCScene simulations. Calculations presented in the paper include a full optical spectrum simulation from the visible to the LWIR for a desert scene. This scene was derived from an AVIRIS visible to SWIR HSI data collect over the Virgin Mountains in Nevada, extrapolated to the thermal IR. Other calculations include complex 3D clouds over urban and rural terrain.
We present a nonlinear algorithm for estimating surface spectral reflectance from the spectral radiance measured by an airborne sensor. Estimation of surface reflectance is of importance since it is independent of the atmospheric and illumination conditions. The nonlinear separation algorithm uses a low-dimensional subspace model for the reflectance spectra. The algorithm also considers the inter-dependence of the path radiance and illumination spectra by using a coupled subspace model. We have applied the algorithm to a large set of simulated 0.4-1.74 micron sensor radiance spectra. A database of reflectance vectors and of MODTRAN illumination, path radiance, and upward transmittance vectors for different atmospheric conditions was used to generate the sensor radiance spectra. We have examined the use of the recovered reflectance vectors for material identification over a database of materials.
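For intuition, a heavily simplified, linearized sketch of the subspace idea is given below; it assumes the atmospheric gain and offset terms are already known, whereas the actual algorithm is nonlinear and estimates them jointly through the coupled subspace model.

    import numpy as np

    def fit_reflectance_subspace(L_meas, gain, offset, basis):
        # L_meas: measured radiance spectrum; gain, offset: assumed-known atmospheric terms
        # in a linearized forward model L = gain * reflectance + offset; basis: (bands, k)
        # low-dimensional reflectance basis. Returns the reconstructed reflectance spectrum.
        A = gain[:, None] * basis                       # forward model applied to each basis vector
        c, *_ = np.linalg.lstsq(A, L_meas - offset, rcond=None)
        return basis @ c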
Current image quality approaches are designed to assess the utility of single-band images by trained image analysts. While analysts today are certainly involved in the exploitation of spectral imagery, automated tools are generally used as aids in the analysis and offer the prospect of significantly reducing the timeline and analysis load in the future. Thus, there is a recognized need for spectral image quality metrics that include the effects of automated algorithms. We have begun initial efforts in this area through the use of a parametric modeling tool to gain insight into how system performance depends on sensor parameters in unresolved object detection applications. An initial Spectral Quality Equation (SQE) has been modeled after the National Imagery Interpretability Rating Scale General Image Quality Equation (NIIRS GIQE). The parameter sensitivities revealed through the model-based trade studies were assessed through comparison to analogous studies conducted with available data. The current comparison has focused on detection applications using sensors operating in the VNIR and SWIR spectral regions. The SQE is shown with key image parameters and sample coefficients. Results derived from both model-based trade studies and empirical data analyses are compared. Extensions of the SQE approach to additional application areas such as material identification and terrain classification are also discussed.
Developing proper models for Hyperspectral imaging (HSI) data allows for useful and reliable algorithms for data exploitation. These models provide the foundation for development and evaluation of detection, classification, clustering, and estimation algorithms. To date, most algorithms have modeled real data as multivariate normal, however it is well known that real data often exhibits non-normal behavior. In this paper, Elliptically Contoured Distributions (ECDs) are used to model the statistical variability of HSI data. Non-homogeneous data sets can be modeled as a finite mixture of more than one ECD, with different means and parameters for each component. A larger family of distributions, the family of ECDs includes the multivariate normal distribution and exhibits most of its properties. ECDs are uniquely defined by their multivariate mean, covariance and the distribution of its Mahalanobis distance metric. This metric lets multivariate data be identified using a univariate statistic and can be adjusted to more closely match the longer tailed distributions of real data. One ECD member of focus is the multivariate t-distribution, which provides longer tailed distributions than the normal, and has an F-distributed Mahalanobis distance statistic. This work will focus on modeling these univariate statistics, using the Exceedance metric, a quantitative goodness-of-fit metric developed specifically to improve the accuracy of the model to match the long probabilistic tails of the data. This metric will be shown to be effective in modeling the univariate Mahalanobis distance distributions of hyperspectral data from the HYDICE sensor as either an F-distribution or as a weighted mixture of F-distributions. This implies that hyperspectral data has a multivariate t-distribution. Proper modeling of Hyperspectral data
leads to the ability to generate synthetic data with the same
statistical distribution as real world data.
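A sketch of the modeling check described above follows: the squared Mahalanobis distances are compared against the F-distribution implied by a multivariate t model. The paper's Exceedance metric itself is not reproduced; only a simple empirical-versus-model tail comparison is illustrated, with the degrees of freedom treated as an assumed input.

    import numpy as np
    from scipy.stats import f

    def mahalanobis_sq(X, mu, cov):
        d = X - mu
        return np.einsum('ij,jk,ik->i', d, np.linalg.inv(cov), d)

    def tail_exceedance_comparison(X, dof=10.0):
        # For a p-dimensional multivariate t with dof degrees of freedom, D^2 / p ~ F(p, dof).
        n, p = X.shape
        d2 = mahalanobis_sq(X, X.mean(axis=0), np.cov(X, rowvar=False))
        u = np.sort(d2 / p)
        empirical = 1.0 - np.arange(1, n + 1) / n    # empirical exceedance P(D^2/p > u)
        model = f.sf(u, p, dof)                      # F-distribution exceedance at the same points
        return u, empirical, model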
The ability to detect and identify effluent gases is, and will continue to be, of great importance. This would aid not only in the regulation of pollutants but also in treaty enforcement and in monitoring the production of weapons. Considering these applications, finding a way to remotely investigate a gaseous emission is highly desirable. This research utilizes hyperspectral imagery in the infrared region of the electromagnetic spectrum to evaluate an invariant method of detecting and identifying gases within a scene. The image is evaluated on a pixel-by-pixel basis and is studied at the subpixel level. A library of target gas spectra is generated using a simple slab radiance model. This results in a more robust description of gas spectra that is representative of real-world observations. This library is the subspace utilized by the detection and identification algorithms. The subspace is evaluated for the set of basis vectors that best spans it. The Lee algorithm, which implements the Maximum Distance Method (MaxD), is used to determine the set of basis vectors. A Generalized Likelihood Ratio Test (GLRT) determines whether or not a pixel contains the target, where the target can be either a single species or a combination of gases. Synthetically generated scenes are used for this research. This work evaluates whether the Lee invariant algorithm is effective for the gas detection and identification problem.
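A hedged sketch of a matched-subspace GLRT of this general kind is shown below; the target subspace would be spanned by the MaxD-selected basis vectors, and the background subspace is an assumed input. This is a generic subspace GLRT, not necessarily the exact statistic used in the study.

    import numpy as np

    def projector(A):
        # Orthogonal projector onto the column space of A.
        Q, _ = np.linalg.qr(A)
        return Q @ Q.T

    def subspace_glrt(y, target_basis, background_basis):
        n = y.size
        Pb  = projector(background_basis)
        Pbs = projector(np.hstack([background_basis, target_basis]))
        r_b  = y @ (np.eye(n) - Pb)  @ y     # residual energy, background-only model
        r_bs = y @ (np.eye(n) - Pbs) @ y     # residual energy, background + target model
        return (r_b - r_bs) / r_bs           # large values favor target presence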
Identification of constituent gases in effluent plumes is performed using linear least-squares regression techniques. Overhead thermal hyperspectral imagery is used for this study. Synthetic imagery is employed as the test case for algorithm development. Synthetic images are generated by the Digital Imaging and Remote Sensing Image Generation (DIRSIG) Model. The use of synthetic data provides a direct measure of the success of the algorithm through comparison with truth map outputs. In the image test cases, plumes emanating from factory stacks are first identified using a separate detection algorithm. The gas identification algorithm being developed in this work is then used only on pixels determined to contain the plume. Stepwise linear regression is considered in this study. Stepwise regression is attractive for this application because only those gases truly in the plume will be present in the final model. Preliminary results from the study show that stepwise regression is successful at correctly identifying the gases present in a plume. Analysis of the results indicates that the spectral overlap of absorption features in different gas species leads to false identifications.
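A sketch of forward stepwise selection for this problem is shown below; the stopping rule (a relative residual-improvement threshold) is a placeholder for whatever entry and exit criteria the actual study used.

    import numpy as np

    def stepwise_gas_id(pixel_spectrum, gas_library, min_improvement=0.01):
        # gas_library: dict mapping gas name -> absorption spectrum (same length as pixel_spectrum).
        selected = []
        best_rss = float(np.sum(pixel_spectrum.astype(float) ** 2))
        while True:
            candidates = {}
            for name, spec in gas_library.items():
                if name in selected:
                    continue
                A = np.column_stack([gas_library[g] for g in selected] + [spec])
                coef, *_ = np.linalg.lstsq(A, pixel_spectrum, rcond=None)
                candidates[name] = float(np.sum((pixel_spectrum - A @ coef) ** 2))
            if not candidates:
                break
            name, rss = min(candidates.items(), key=lambda kv: kv[1])
            if (best_rss - rss) / best_rss < min_improvement:
                break                                     # no gas improves the fit enough; stop
            selected.append(name)
            best_rss = rss
        return selected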
Recently, interest in gaseous effluent detection, identification, and quantification has increased for both commercial and government applications. However, the problem of gas detection is significantly different from the problems associated with the detection of hard targets in the reflective spectral regime. In particular, gas signatures can be observed in either emission or absorption, are both temperature and concentration dependent, and are viewed in addition to the mixed background pixel signature from the ground. This work applies standard hard-target detection schemes to thermal hyperspectral synthetic imagery. The methods considered here are Principal Components Analysis, Projection Pursuit, and a Spectral Matched Filter. These methods are compared both quantitatively and qualitatively with respect to their applicability to the gas detection problem. Comparison to truth outputs from the synthetic data provides an accurate quantitative measure of the algorithmic performance. Principal Components and Projection Pursuit are shown to have similar performance, and both are better than the Spectral Matched Filter. Additionally, both Principal Components and Projection Pursuit demonstrate the ability to separate regions of absorption and emission in the plume.
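For reference, a standard spectral matched filter of the kind compared in this work can be sketched as follows; the normalization so that a pixel equal to the target spectrum scores one is a common convention, assumed here.

    import numpy as np

    def spectral_matched_filter(cube, target):
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands).astype(float)
        mu = X.mean(axis=0)
        Sinv = np.linalg.inv(np.cov(X, rowvar=False))
        d = target - mu
        w = Sinv @ d / (d @ Sinv @ d)          # inverse-covariance-whitened target signature
        return ((X - mu) @ w).reshape(rows, cols)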
A novel statistical method for the retrieval of surface temperature and atmospheric temperature, moisture, and ozone profiles has been developed and evaluated with simulated clear-air hyperspectral data at 3.5-5 and 7-13.5 microns. These estimates are used as inputs to MODTRAN to calculate the ground leaving radiance (L), the upwelling radiance (U), and the total path transmittance (T). The spectral surface emissivity is then derived by spectrally filtering the resulting solution to the radiative transfer equation. The retrieval for surface temperature and spectral surface emissivity can then be iterated, if necessary. The method was evaluated using the NOAA88b profile dataset and the UCSB and ASTER spectral emissivity libraries, and the sensor parameters were developed using the FASSP (Forecasting and Analysis of Spectroradiometer System Performance) model. Representative results are shown for simulated data from two spaceborne sensors: a high spectral resolution infrared sensor (AIRS, NASA Aqua satellite, 3.5-15.5 μm, λ/Δλ ≈ 1200) and a hypothetical moderate spectral resolution infrared sensor (3.5-5 and 7-13.5 μm, λ/Δλ ≈ 200). A sample retrieval of surface temperature and emissivity is also shown.
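A hedged sketch of the emissivity step is shown below: once the path terms and a surface temperature are available, the ground-leaving radiance is recovered and the radiative transfer equation is inverted for emissivity. This is a generic temperature/emissivity separation in the thermal bands, not the statistical retrieval itself.

    import numpy as np

    H, C, K = 6.62607e-34, 2.99792e8, 1.380649e-23   # Planck constant, speed of light, Boltzmann constant

    def planck_radiance(wavelength_m, T):
        # Spectral radiance B(lambda, T) in W / (m^2 sr m).
        return (2 * H * C**2 / wavelength_m**5) / (np.exp(H * C / (wavelength_m * K * T)) - 1.0)

    def emissivity_from_rte(L_sensor, tau, L_up, L_down, wavelength_m, Ts):
        L_ground = (L_sensor - L_up) / tau            # ground-leaving radiance (remove path terms)
        B = planck_radiance(wavelength_m, Ts)
        return (L_ground - L_down) / (B - L_down)     # emissivity spectrum from the RTE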
Based on simulated atmospheric and sensor effects, we identify spectral resolution and per-channel signal-to-noise ratio (SNR) requirements for thermal infrared spectrometers that allow effluent quantification to any desired precision. This work is based on the use of MODTRAN-4 to explore the effects of temperature contrast and effluent concentration on the spectral slopes of particular absorption features. These slopes can be estimated from remotely sensed spectral data by use of least-squares techniques. The precision of these estimates is based on two factors related to spectral quality: the number of spectral samples that lie along an absorption feature and the radiometric accuracy of the samples themselves. The least-squares process also calculates the slope estimation error variance, which is related to the effluent quantification uncertainty by the same function that maps the slope itself to effluent quantity. The effluent quantification precision is thus shown to be a function of the spacing between spectral channels and the per-channel SNR. The relationship between SNR, channel spacing and effluent quantification precision is expressed as an equation defining a surface of constant "difficulty." This surface can be used to evaluate parameter sensitivities of sensors in design, to appropriately task sensors, or to evaluate effluent quantification tasks in terms of feasibility.
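The precision argument can be made concrete with the ordinary least-squares slope estimator: for per-channel noise standard deviation sigma, the slope error variance is sigma squared divided by the spread of the channel positions across the feature, which ties quantification precision to both channel spacing and SNR. A minimal sketch:

    import numpy as np

    def slope_and_variance(x, y, sigma):
        # x: channel centers across the absorption feature; y: measured radiances;
        # sigma: per-channel noise standard deviation.
        A = np.column_stack([x, np.ones_like(x)])
        (slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)
        var_slope = sigma**2 / np.sum((x - x.mean())**2)   # error variance of the fitted slope
        return slope, var_slope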
This paper describes a new algorithm for feature extraction on hyperspectral images based on blind source separation (BSS) and distributed processing. I use Independent Component Analysis (ICA), a particular case of BSS in which, given a linear mixture of statistically independent sources, the goal is to recover these components by producing the unmixing matrix. In multispectral/hyperspectral imagery, the separated components can be associated with features present in the image, the source separation algorithm projecting them into different image bands. ICA-based methods have been employed for target detection and classification of hyperspectral images. However, these methods involve an iterative optimization process; when applied to hyperspectral data, this iteration results in significant execution times. The time efficiency of the method is improved by running it in a distributed environment while preserving the accuracy of the results. The design of the distributed algorithm, as well as issues related to the distributed modeling of the hyperspectral data, is presented. The effectiveness of the proposed algorithm has been tested by comparison to the sequential source separation algorithm using data from AVIRIS and HYDICE. Preliminary results indicate that, while the accuracy of the results is preserved, the new algorithm provides a considerable speed-up in processing.
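A sketch of the sequential (non-distributed) baseline using one common ICA implementation, scikit-learn's FastICA, with arbitrary parameter choices:

    import numpy as np
    from sklearn.decomposition import FastICA

    def ica_feature_bands(cube, n_components=10, max_iter=500):
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands).astype(float)
        ica = FastICA(n_components=n_components, max_iter=max_iter, random_state=0)
        sources = ica.fit_transform(X)            # each column is one independent component image
        return sources.reshape(rows, cols, n_components), ica.mixing_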
We present a generalized random field model in a random environment to classify hyperspectral textures. The model generalizes traditional random field models by allowing the spatial interaction parameters of the field to be random variables. Principal component analysis is used to reduce the dimensionality of the data set to a small number of spectral bands that capture almost all of the energy in the original hyperspectral textures. Using the model, we obtain a compact feature vector that efficiently encodes within-band and between-band information. Using a set of hyperspectral samples, we evaluate the performance of this model for classifying textures and compare the results with other approaches.
This report examines a series of multispectral or hyperspectral image data cubes collected of a given scene at different times. The diagonal, whitening/dewhitening, and target CV (covariance) transformations use collected image data of spatially overlapping regions from data sets collected at different times to evolve target spectral signatures. The previously studied registration-free transformations are described using a single general equation and form a subset of a larger family of accurate transforms. The diagonal, whitening/dewhitening, and target CV transformations are characterized by a parameter n, with n = 0, 1, and 2, respectively. The transformed target signatures, used in matched filter searches, are tested on images taken from two data collects that use different sensors, targets, and backgrounds. Transforms with n between 0 and 2 yield the largest Target to Clutter Ratio (TCR), which remains relatively constant for 0 ≤ n ≤ 2.
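A hedged sketch of one member of this family, the whitening/dewhitening case (n = 1), is shown below: a signature from the first collect is whitened with that scene's covariance and re-colored with the second scene's covariance before use in a matched-filter search. The general parameterization over n is not reproduced here.

    import numpy as np
    from scipy.linalg import sqrtm

    def whiten_dewhiten_signature(sig, cov_ref, cov_new, mean_ref=None, mean_new=None):
        mu_r = np.zeros_like(sig) if mean_ref is None else mean_ref
        mu_n = np.zeros_like(sig) if mean_new is None else mean_new
        # Whiten with the reference-scene covariance, re-color with the new scene's covariance.
        T = np.real(sqrtm(cov_new)) @ np.linalg.inv(np.real(sqrtm(cov_ref)))
        return T @ (sig - mu_r) + mu_n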
The effect of using adaptive wavelets is investigated for dimension reduction and noise filtering of hyperspectral imagery that is to be subsequently exploited for classification or subpixel analysis. The method is investigated as a possible alternative to the Minimum Noise Fraction (MNF) transform as a preprocessing tool. Unlike the MNF method, the wavelet-transform method does not require an estimate of the noise covariance matrix, which can often be difficult to obtain for complex scenes (such as urban scenes). Another desirable characteristic of the proposed wavelet-transformed data is that, unlike Principal Component Analysis (PCA) transformed data, they maintain the same spectral shapes as the original data (the spectra are simply smoothed). In the experiment, an adaptive wavelet image cube is generated using four orthogonal conditions and three vanishing moment conditions. The classification performance of a Derivative Distance Squared (DDS) classifier and a Multilayer Feedforward Network (MLFN) neural network classifier applied to the wavelet cubes is then observed. The performance of the Constrained Energy Minimization (CEM) matched-filter algorithm applied to this data is also observed. HYDICE 210-band imagery containing a moderate amount of noise is used for the analysis so that the noise-filtering properties of the transform can be emphasized. Trials are conducted on a challenging scene with significant locally varying statistics that contains a diverse range of terrain features. The proposed wavelet approach can be automated so as to require no input from the user.
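As an illustrative stand-in for the preprocessing role described above (not the adaptive-wavelet construction itself), the sketch below applies a fixed-wavelet soft-threshold smoothing along the spectral axis of each pixel using PyWavelets; the wavelet, level, and threshold are arbitrary assumptions.

    import numpy as np
    import pywt

    def wavelet_smooth_spectra(cube, wavelet='db4', level=3, threshold=0.02):
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands).astype(float)
        out = np.empty_like(X)
        for i, spectrum in enumerate(X):
            coeffs = pywt.wavedec(spectrum, wavelet, level=level)
            # Soft-threshold the detail coefficients; the approximation is kept intact
            # so the overall spectral shape is preserved (spectra are simply smoothed).
            coeffs = [coeffs[0]] + [pywt.threshold(c, threshold * np.abs(c).max(), mode='soft')
                                    for c in coeffs[1:]]
            out[i] = pywt.waverec(coeffs, wavelet)[:bands]
        return out.reshape(cube.shape)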
Realtime online processing is important for providing immediate data analysis to resolve critical situations in real applications of hyperspectral imaging. We have developed a Constrained Linear Discriminant Analysis (CLDA) algorithm, an excellent approach to hyperspectral image classification, and investigated its realtime online implementation. Because the required prior object spectral signatures may be unavailable in practice, we propose an unsupervised version in this paper. The new algorithm includes unsupervised signature estimation in realtime, followed by the realtime CLDA algorithm for classification. The unsupervised signature estimation is based on the linear mixture model and a least squares error criterion. A preliminary result using a HYDICE scene demonstrates the algorithm's feasibility and effectiveness.
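A hedged sketch of one common CLDA formulation is shown below: filter vectors are built from the class signature matrix and the data correlation matrix so that each filter responds with one to its own signature and zero to the others. The realtime update and the unsupervised signature-estimation stage are not shown, and the signatures are assumed as inputs.

    import numpy as np

    def clda_classify(cube, signatures, eps=1e-6):
        # signatures: (bands, n_classes) matrix of class spectral signatures.
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands).astype(float)
        R = (X.T @ X) / X.shape[0] + eps * np.eye(bands)     # regularized sample correlation matrix
        Rinv = np.linalg.inv(R)
        W = Rinv @ signatures @ np.linalg.inv(signatures.T @ Rinv @ signatures)
        scores = X @ W                                       # one abundance-like score per class
        return scores.argmax(axis=1).reshape(rows, cols), scores.reshape(rows, cols, -1)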