A historical perspective on the evolution of performance evaluation technology for automatic target recognition (ATR) systems is presented. It is shown that the ad hoc and artistic evaluation techniques of the past are now evolving into scientific approaches. The most promising areas of this technology include: (1) first-principle, coupled, multi-sensor modeling of objects, environment, atmosphere, and vegetation; (2) integration of models with the task of ATR algorithm development; (3) integration of a new generation of parallel processors that can process every pixel of an image without the need for data reduction; (4) extending the instrumentation control of ATR to include sensor selection and sensor fusion; and (5) development of signal metrics for radar, acoustic, and ladar sensors, and linking these metrics to phenomenological sources.
The near real-time end-to-end evaluation of automatic target
recognizers has been a reality in the laboratory environment for
several years. Depending on the reliability of ground truth
data, i.e., target location and sensor location in three
dimensions and sensor pointing angle, field test evaluations
could also be conducted at near real-time rates. End-to-end
evaluations, however, are inadequate for producing the quantity
and quality of data needed for understanding both successes and
failures of automatic recognition.
The C2NVEO has developed a facility, called AUTOSPEC, for the
express purpose of extracting intermediate data from
processors. Once extracted, the data are stored in a relational
database for subsequent analysis. The facility includes both the
hardware and a number of software tools, developed at C2NVEO,
designed to allow ready access to ground truth data, images,
metrics, processor parameters, processor intermediate decisions,
and processor final decisions. The structure of the system and
demonstrations of the tools developed will be presented.
Performance measurements of several feature extraction
modules used in an automatic target cuer and automatic target
recognizer are described, along with the methods used to obtain
them. These measurements were made by observing the real-time
system during analysis of video FLIR data. Parameter
optimization methods are also outlined and the results obtained
in the optimization process are described.
This paper presents an adaptive, or self-learning, filter design intended for use in real-time closed-loop pointing control systems engaging multiple targets. The design approach uses a performance index based upon the Mahalanobis generalized distance function, together with multiple filters processed in parallel on the same nonlinear measurements. Applying the performance index criteria to the statistics of individual filter residuals allows selection of the optimum filter set without the time delays typically encountered, and thereby allows the composite filter structure to adapt (or self-learn) to uncertainties in modeling target acceleration capabilities. An advantage of this approach is that it also provides an operator (or a robotic controller) with the confidence level of tracking system performance against a maneuvering target. This information is of interest for deployment of countermeasures (e.g., fire control eventing, alarms, engagement priority) or simply for laboratory tests of design adequacy.
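The residual-based selection step can be sketched as follows. The residual streams, covariances, and function names are hypothetical stand-ins, since the paper's actual filter bank is not reproduced here; the sketch only illustrates scoring parallel filters by the Mahalanobis generalized distance of their residuals.

```python
import numpy as np

def mahalanobis_score(residuals, cov):
    """Average Mahalanobis generalized distance of one filter's residuals."""
    inv = np.linalg.inv(cov)
    # d_i^2 = r_i^T S^-1 r_i for each residual vector r_i
    d2 = np.einsum('ij,jk,ik->i', residuals, inv, residuals)
    return float(d2.mean())

def select_filter(residual_bank, cov_bank):
    """Pick the filter whose residual statistics best match its model."""
    scores = [mahalanobis_score(r, s) for r, s in zip(residual_bank, cov_bank)]
    return int(np.argmin(scores)), scores
```

Because selection reuses residuals already produced by the running filters, it avoids the delay of refitting a single model after a target maneuver.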
As part of the development of a real-time IR target processor test bed, a number of image processing algorithms were developed, simulated in software, and evaluated for implementation. Algorithms performing image pre-processing, target localization, segmentation, and target/clutter discrimination were evaluated using an IR image database. The algorithms selected are being implemented on the test bed using commercially available board-level components, and are capable of processing imagery at real-time rates (30 frames/sec).
Image processing to accomplish automatic recognition of military vehicles has promised increased weapons system effectiveness and reduced timelines for a number of Department of Defense missions. Automatic Target Recognizers (ATRs) are often claimed to be able to recognize many different types of vehicles as possible targets in military targeting applications. The targeting scenario conditions include different vehicle poses and histories as well as a variety of imaging geometries, intervening atmospheres, and background environments. Testing of these ATR subsystems has in most cases been limited to a handful of the scenario conditions of interest, as represented by imagery collected with the desired imaging sensor. The question naturally arises as to how robust the performance of the ATR is for all scenario conditions of interest, not just for the set of imagery upon which an algorithm was trained.
The U.S. Army has a requirement to develop systems for the detection and identification of ground targets in a clutter environment. Autonomous Homing Munitions (AHM) using infrared, visible, millimeter-wave, and other sensors are being investigated for this application. Advanced signal processing and computational approaches using pattern recognition and artificial intelligence techniques, combined with multisensor data fusion, have the potential to meet the Army's requirements for next-generation AHM.
The availability of validated, synthetic multisensor databases is crucial for the development, testing, and evaluation of advanced Automatic Target Recognition algorithms. Real multisensor scene combinations reflecting operational requirements, missions, and targets are in many cases unavailable and economically difficult to obtain in the variety and quantity needed. This paper describes the Texas Instruments (TI) Synthetic Multisensor (IR, TV, Laser Radar) Image Generation System, which has been developed to address this database problem, and the procedures used to validate the synthetic imagery. The system generates single frames or image sequences based upon scenarios derived directly from DMA digital map data. A user interface designed to let the user control the operation of the synthetic image generation system and create, modify, and control the associated scenario is described, along with the sensor models, atmospheric environment models, and scene rendering software utilized. In addition, validation of the synthetic imagery against real imagery using a feature-based methodology is addressed.
The PRISM model produces inherent temperatures of a target's external facets. Conventional predictions of target detection/recognition simply sum these temperatures on an area-weighted basis to determine the average target temperature. In turn, that temperature is subtracted from the background to determine the delta-T value for entering the MRT curve. We have produced a more accurate and representative methodology for computing delta-T. First, the PRISM faceted target model is represented in our VALUE software, which accounts for mutual surface blockage. A grey-scale image of the PRISM-generated facet temperatures is produced. The image is convolved with an eye-equivalent MTF, and then delta-T is computed based upon absolute values of differences between the background and the target's (unobscured) facets. These results are further normalized with respect to uniform (single-temperature) targets. A synergistic calculation determines a more representative measure of the target's minimum angular size, which is also needed for predicting probability of detection/recognition. We compute the target's orientation from its image silhouette moments, rotate the target around its silhouette center of mass, encapsulate it with a rectangle, and then count the target extent across each scanline. These counts are averaged into an equivalent angular size.
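Two of the calculations described, the facet-based delta-T and the silhouette orientation from image moments, can be sketched as follows. The eye-MTF convolution, blockage accounting, and uniform-target normalization are omitted, and the function names are illustrative rather than actual PRISM/VALUE routines.

```python
import numpy as np

def facet_delta_t(facet_temps, t_background):
    """Delta-T from absolute differences between the (unobscured) facet
    temperatures and the background, rather than an area-weighted average."""
    return float(np.mean(np.abs(np.asarray(facet_temps, float) - t_background)))

def silhouette_orientation(mask):
    """Target orientation (radians) from second moments of the silhouette."""
    ys, xs = np.nonzero(mask)
    x, y = xs - xs.mean(), ys - ys.mean()
    mu11, mu20, mu02 = (x * y).mean(), (x * x).mean(), (y * y).mean()
    return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
```

Note that the absolute-difference form keeps hot and cold facets from cancelling, which is exactly what the area-weighted average fails to do.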
With the rapid growth of signal and image processing technology over the last several decades has come the need for means of evaluating and comparing the numerous algorithms and systems that have been created or are being developed. Performance evaluation in the past has been mostly ad hoc and fragmented. In this paper we present a systematic, step-by-step approach to the scientific evaluation of signal and image processing algorithms and systems. This approach is based on the methodology of Experimental Design. We illustrate the method by means of an example from the field of automatic object recognition.
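As a minimal illustration of the Experimental Design approach, a full-factorial layout enumerates every combination of the controlled factors, so that algorithm scores can later be attributed to individual factors and their interactions. The factors and levels below are hypothetical, not those of the paper's example.

```python
from itertools import product

# Hypothetical factors and levels for an object-recognition evaluation.
factors = {
    "range_km": [1, 2, 4],
    "clutter": ["low", "high"],
    "aspect_deg": [0, 90],
}

def full_factorial(factors):
    """Enumerate every treatment combination of a full-factorial design."""
    names = list(factors)
    return [dict(zip(names, combo)) for combo in product(*factors.values())]

runs = full_factorial(factors)   # 3 * 2 * 2 = 12 treatment combinations
```

Each entry in `runs` is one experimental condition under which the algorithm would be scored.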
Adequate tools for the diagnosis and evaluation of Automatic Target Recognition (ATR) systems are critical to their successful development. In this paper we describe a system called Automated Instrumentation and Evaluation (Auto-I). Auto-I provides many of the capabilities needed for rapid testing and evaluation of ATR systems. It also provides a module for automatic adaptation of algorithm parameters using algorithm performance models, optimization, and Artificial Intelligence techniques. The current design of Auto-I is modular, so that it can be interfaced to other ATR systems.
This paper provides an overview of an advanced simulation capability currently in use for analyzing visible and infrared sensor systems. The software system, called VISTAS (VISIBLE/INFRARED SENSOR TRADES, ANALYSES, AND SIMULATIONS) combines classical image processing techniques with detailed sensor models to produce static and time dependent simulations of a variety of sensor systems including imaging, tracking, and point target detection systems. Systems modelled to date include space-based scanning line-array sensors as well as staring 2-dimensional array sensors which can be used for either imaging or point source detection.
A visible band signature model is under development at the Georgia Tech Research Institute (GTRI) to support research activities ranging from performance studies of human observers to the definition and development of feature extraction algorithms. This model generates visible band imagery based on a solar illumination model coupled with computer graphics rendering algorithms. The solar illumination model employs a modified version of a radiative transfer algorithm originally developed for the Air Force. Selection of an appropriate reflection model followed an evaluation of several techniques. The selection criteria and results of the survey are presented.
There are two pressing needs which must be fulfilled to further the development of automatic target recognition systems. One is analytical modeling for performance prediction; the other is a disciplined evaluation methodology for automatic target recognition algorithms. Currently both areas are in their infancy.
Analytical modeling for automatic target recognition performance evaluation and for prediction of algorithm performance has been investigated, both as it relates to human visual performance and as a tool for algorithm development.
The matched filter approach provides a good limiting performance bound for target detection in uncluttered scenes with complete knowledge of the target characteristics. Comparison of matched filter detection performance to a human performance model has produced some interesting results.
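A normalized matched filter of the kind used as a limiting detector can be sketched as follows, assuming complete knowledge of the target template. This is a generic correlation detector, not the authors' specific implementation.

```python
import numpy as np

def matched_filter_detect(scene, template, threshold):
    """Ideal-observer style matched filter: slide the zero-mean, unit-norm
    target template over the scene and threshold the peak normalized
    correlation."""
    t = (template - template.mean()).ravel()
    t /= np.linalg.norm(t)
    best = -np.inf
    H, W = scene.shape
    h, w = template.shape
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            patch = scene[i:i + h, j:j + w].ravel()
            patch = patch - patch.mean()
            n = np.linalg.norm(patch)
            if n > 0:
                best = max(best, float(t @ (patch / n)))
    return best >= threshold, best
```

With an exact target present and no clutter, the peak correlation approaches 1, which is the sense in which the matched filter bounds achievable detection performance.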
I propose to incorporate two new kinds of groundtruth and an infrared (IR) target model in a causal automatic target recognizer
(ATR) performance evaluation scheme. In principle, this scheme allows for the detailed causal analysis of segmentor
performance and avoids the misleading and uninformative aspects of current statistical segmentor performance evaluation
techniques. The scheme also allows for the validation and partial construction of IR target models.
The possibility of representing the two-dimensional (2D) orthogonal image of an arbitrary 3D object from any viewpoint and orientation is established. The novelty of the representation is that it consists of a single continuous analytic formula. This allows for the complete symbolic representation of an object and derivative features, and may aid in object recognition and the establishment of object recognition standards.
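For contrast with the single-formula representation proposed here, the conventional point-wise route to a 2D orthogonal image is a rotation followed by discarding the depth axis; a minimal sketch, with illustrative angle conventions:

```python
import numpy as np

def orthographic_projection(points, yaw, pitch):
    """Project Nx3 object points to 2D for a given viewpoint: rotate the
    object, then drop the depth coordinate (orthogonal projection)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw about z
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # pitch about x
    rotated = points @ (Rx @ Rz).T
    return rotated[:, :2]   # orthogonal projection: discard depth
```

The proposed representation replaces this per-point, per-viewpoint computation with one continuous analytic formula in the viewpoint parameters.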
A real-time optical correlator employs a spatial light modulator (SLM) to record ongoing, changing scenes. Current SLMs are quite useful in this role, although some suffer from limited dynamic range and therefore cannot respond fully to such variations as light changes on the target. In this paper, a method is described for preprocessing an image before it is impressed upon an SLM. The processed image, in effect, alters the transfer characteristics and serves to make the image relatively invariant to changes in scene and environmental target conditions. The restructured image will appear invariant to the SLM (or at least quite constant) and, therefore, invariant to the optical matched filter residing in memory. The correlator then operates as though the target and scene conditions were fixed, or at least confined to a more acceptable, narrower range.
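One plausible preprocessing of this kind is a zero-mean, unit-variance remapping, which makes scene-to-scene illumination shifts look nearly constant to the downstream matched filter. The paper's actual transform is not specified here; this is only an assumed illustration.

```python
import numpy as np

def normalize_for_slm(image, eps=1e-6):
    """Remap an image to zero mean and unit variance so that global
    brightness and contrast changes are largely removed before the
    image is impressed on the SLM."""
    img = np.asarray(image, dtype=float)
    return (img - img.mean()) / (img.std() + eps)
```

After this remapping, two frames of the same scene taken under different illumination produce nearly identical inputs to the stored matched filter.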
A simple, non-intrusive technique to separate mechanical jitter and aero-optic jitter from the combined measured jitter is
presented. This technique has the advantages of being relatively simple to implement, deriving information from the imaging
data, and giving insight into the nature of the flow field turbulence. This method employs variable collection apertures and a
subtraction algorithm designed to separate aero-optic jitter from mechanical jitter. Several levels of vibration and aero-optic
effects were measured and then separated in the post-test data processing.
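If the mechanical and aero-optic contributions are statistically independent, their variances add, so the aero-optic component can be recovered from the combined and mechanical-only measurements by quadrature subtraction; a minimal sketch of that step:

```python
import math

def separate_jitter(sigma_total, sigma_mech):
    """Aero-optic jitter recovered by quadrature subtraction, assuming the
    mechanical and aero-optic contributions are independent; clamped at
    zero so measurement noise cannot produce an imaginary result."""
    return math.sqrt(max(sigma_total ** 2 - sigma_mech ** 2, 0.0))
```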
In electronic vision systems, locating regions of interest (a process referred to as cueing) allows the computing power of the vision system to be focused on small regions rather than the entire scene. The purpose of this paper is to illustrate the ability of a new technique to locate regions that may contain objects of interest. This technique employs the mathematical theory of evidence to combine evidence received from disparate sources. Here the evidence consists of images obtained from two sources: laser radar range and laser radar amplitude. The mean values of the superpixel gray levels for the two images are calculated and combined based on the Dempster-Shafer rule of combination.
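Dempster's rule of combination for the two-sensor case can be sketched as follows, with focal elements over the frame {target, clutter} represented as frozensets. The mass values in the test are hypothetical; in the paper they would be derived from the superpixel gray-level means.

```python
def dempster_combine(m1, m2):
    """Dempster-Shafer rule of combination: multiply masses over all pairs
    of focal elements, assign each product to the intersection, and
    renormalize away the conflicting (empty-intersection) mass."""
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    k = 1.0 - conflict
    return {s: w / k for s, w in combined.items()}
```

When both sensors lend some support to "target", the combined mass on "target" exceeds either individual mass, which is what lets weak evidence from range and amplitude reinforce each other in the cueing decision.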
Solid-state imaging devices are now widely used in many measurement applications, and specifically in space applications (remote sensing, star trackers). The development of space stations is leading to a growing number of new in-orbit operations such as rendezvous, which consists of the approach and then the docking or berthing of two space vehicles. This paper describes a new way to operate charge-coupled devices (CCDs) that makes it possible to identify target patterns and to perform measurements in a severe optical environment (sun in the field of view). This new operating mode, combined with lighting by a high-power pulsed laser diode, enables the design of an efficient system for relative position and attitude measurements between two spacecraft. The system is proposed for the rendezvous phase between the HERMES spaceplane and the COLUMBUS Free Flyer, and for the approach of the COLUMBUS Free Flyer to the FREEDOM space station. Experimental results are presented and the first applications are described.
When considering ways to automate the generation of image processing algorithms for object recognition tasks, one critical element is the availability of measures to assess the potential and actual ability of individual operations to make the necessary discriminations. This paper discusses performance evaluation of image processing operators or algorithms from the perspective of searching automatically through a large space of them for one which satisfactorily performs a given recognition task. Performance is expressed in terms of accuracy, consistency, and cost over a set of training images. The major issues in evaluating and choosing between operators in this context are discussed, and examples are given of measures which can be used to evaluate classes of operators for applicability, as well as individual operators or parameter settings for actual performance. Examples are drawn primarily from binary morphology, with detailed extensions described for grey-level morphological and linear operations.
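A composite figure of merit of the sort described, accuracy over the training images discounted by operator cost, can be sketched as follows; the specific weighting and function names are illustrative, not the paper's measures.

```python
def score_operator(op, images, truths, cost, weight=0.1):
    """Rank a candidate image operator by its accuracy on training
    examples, penalized by a normalized cost of applying it (a
    hypothetical composite figure of merit for automated search)."""
    correct = sum(1 for im, t in zip(images, truths) if op(im) == t)
    accuracy = correct / len(images)
    return accuracy - weight * cost
```

An automated search would evaluate this score for every operator (or parameter setting) in the candidate space and keep the highest-scoring one.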
Iskra Centre for Electro-Optics has designed a new family of miniaturized Nd:YAG laser rangefinders, all of which are built around the same basic optoelectronic and mechanical module. In one of them (the MLD monocular laser rangefinder) the emphasis is on extremely small size and low weight; in another (the BLD binocular laser rangefinder) on binocularity and comfortable observation; in the third (the BLD-N night vision binocular rangefinder) on operability under night conditions; and in all three on ease of handling and a high degree of operational autonomy. They contain quite a few interesting technological solutions regarding the laser and the processing electronics, which are entirely microcomputer-controlled. The main ones, which enable miniaturization and design flexibility and can be employed in other Iskra rangefinders, are described in the paper. Also discussed is the interdisciplinary design of laser rangefinders and the role of CAD in the development of individual subunits.
The author estimates that, owing to their low price, simple handling, and wide usability, the new rangefinders will enlarge the domain of laser rangefinder application in the armed forces. Application possibilities are shown, as well as trends in the further development of miniaturized laser rangefinders at the Iskra Centre for Electro-Optics.