Conventional windows for airborne payloads are often discontinuous with the aircraft or pod skin. A protruding structure or hollow cavity increases aerodynamic drag, which consumes more fuel and thus reduces the time available on-station. These geometries also give rise to turbulent aero-optical effects, which degrade the payload's optical performance because it must view the scene through the turbulence. This paper describes a multi-paned, or segmented, window concept that matches the local shape of the aircraft or pod skin. The approach is suitable for optical payloads with multiple fixed fields of view, such as staring infrared search and track systems, but not for scanning systems. This approach for creating a near-conformal window assembly should be particularly useful for rapid prototyping of windows for airborne optical payloads, providing a nearer-term alternative to monolithic windows that are ground and polished into complex shapes. In this paper, a 14-inch-diameter pod fairing with three window segments was chosen as a point design for a notional airborne optical payload. Fused silica planar windowpanes were fabricated with matching, mating mitered edges. The panes were chemically bonded directly to each other with a sodium-silicate solution. The bonding process and fixturing are described. The resulting glass bond is strong and minimizes the non-usable seam between panes. This approach increases the clear aperture of each pane compared with windowpanes bonded into individual mechanical bezels. Interferometric measurements of the prototype show no degradation in transmitted wavefront error after silicate bonding.
Aero-optic effects can degrade high-performance airborne optical sensors that must view through
turbulent flow fields created by the aerodynamics of windows and domes. Evaluating aero-optic effects early in
a program, during the design stages, allows mitigation strategies and optical system design trades to be performed to
optimize system performance. This necessitates a computationally efficient means to evaluate the impact of aero-optic
effects such that the resulting dynamic pointing errors and wavefront distortions due to the spatially and temporally
varying flow field can be minimized or corrected. To this end, an aero-optic analysis capability was developed within the
commercial software SigFit that couples CFD results with optical design tools. SigFit reads the CFD-generated density
profile using the CGNS file format. OPD maps are then created by converting the three-dimensional density field into an
index of refraction field and then integrating along specified paths to compute OPD errors across the optical field. The
OPD maps may be evaluated directly against system requirements or imported into commercial optical design software
including Zemax® and Code V® for a more detailed assessment of the impact on optical performance from which design
trades may be performed.
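To make the density-to-OPD step concrete, the following is a minimal sketch, assuming the CFD density field has already been read (e.g., from the CGNS file) and resampled onto a regular grid with the line of sight aligned to the z axis. It uses the Gladstone-Dale relation and simple rectangle-rule integration; it is not SigFit's implementation, and the grid, step size, and reference density are placeholder assumptions.

```python
import numpy as np

# Illustrative sketch only; SigFit's internals are not reproduced here.
# Assumes rho[ix, iy, iz] is a CFD density field (kg/m^3) resampled onto a
# regular grid, with the optical path along z and a uniform step dz (meters).

GLADSTONE_DALE = 2.27e-4  # m^3/kg, approximate value for air at visible wavelengths

def density_to_index(rho):
    """Gladstone-Dale relation: convert density to index of refraction."""
    return 1.0 + GLADSTONE_DALE * rho

def opd_map(rho, dz, rho_ref):
    """OPD(x, y) = integral over z of [n(x, y, z) - n_ref] dz (rectangle rule)."""
    n = density_to_index(rho)
    n_ref = density_to_index(rho_ref)
    return np.sum(n - n_ref, axis=2) * dz

# Placeholder density field: quiescent air plus small random fluctuations.
rho = np.full((64, 64, 200), 1.2) + 0.01 * np.random.rand(64, 64, 200)
opd = opd_map(rho, dz=1e-3, rho_ref=1.2)
print(opd.shape)  # (64, 64) OPD map, ready for export to optical design tools
```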
Semi-autonomous operation of intelligent vehicles may require that such platforms maintain a basic situational
awareness with respect to people, other vehicles and their intent. These vehicles should be able to operate safely
among people and other vehicles, and be able to perceive threats and respond accordingly. A key requirement is
the ability to detect people and vehicles from a moving platform. We have developed one such algorithm using
video cameras mounted on the vehicle. Our person detection algorithms model the shape and appearance of
the person instead of modeling the background. The algorithm uses histograms of oriented gradients (HOG),
which model shape and appearance using image edge histograms. These HOG descriptors are computed on an
exhaustive set of image windows, which are then classified as person/non-person using a support vector machine
classifier. The image windows are computed using camera calibration, which provides the approximate size of people
as a function of their location in the imagery. The algorithm is flexible and has been trained for different domains
such as urban, rural and wooded scenes. We have designed a sensor platform that can be mounted on a moving
vehicle to collect video data of pedestrians. Using manually annotated ground-truth data we have evaluated
the person detection algorithm in terms of true positive and false positive rates. This paper provides a detailed
overview of the algorithm, describes the experiments conducted and reports on algorithmic performance.
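As a rough illustration of the HOG-plus-SVM detection stage described above, the sketch below substitutes OpenCV's stock HOG people detector for the paper's own trained models and calibration-driven window sizing, neither of which is available here; the built-in multi-scale sliding-window search stands in for the calibrated window set, and the input frame name is hypothetical.

```python
import cv2

# Minimal HOG + linear-SVM person detection sketch using OpenCV's built-in
# people detector (not the models or calibration described in the paper).
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("frame.png")  # hypothetical frame from the vehicle-mounted camera
rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), padding=(8, 8), scale=1.05)

# Draw each detected person window on the frame.
for (x, y, w, h), score in zip(rects, weights):
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detections.png", frame)
```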
This paper presents an overview of Intelligent Video work currently under development at the GE Global Research Center
and other research institutes. The image formation process is discussed in terms of illumination, methods for automatic
camera calibration and lessons learned from machine vision. A variety of approaches for person detection are presented.
Crowd segmentation methods enabling the tracking of individuals through dense environments such as retail and mass
transit sites are discussed. It is shown how signature generation based on gross appearance can be used to reacquire targets
as they leave and enter disjoint fields of view. Camera calibration information is used to further constrain the detection
of people and to synthesize a top-view, which fuses all camera views into a composite representation. It is shown how
site-wide tracking can be performed in this unified framework. Human faces are an important feature as both a biometric
identifier and as a method for determining the focus of attention via head pose estimation. It is shown how automatic
pan-tilt-zoom control, active shape/appearance models, and super-resolution methods can be used to improve face capture
and analysis. A discussion of additional features that can be used for inferring intent is given. These include
body-part motion cues and physiological phenomena such as thermal images of the face.
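The top-view synthesis mentioned above can be sketched as follows, assuming each camera's calibration yields a ground-plane homography: person detections, represented by their foot points in image coordinates, are mapped into one shared overhead frame in which site-wide tracking can proceed. The homography values below are illustrative placeholders, not real calibration data, and this is only one plausible realization of the idea.

```python
import numpy as np

def to_top_view(foot_points_px, H_cam_to_ground):
    """Map Nx2 image foot points into ground-plane coordinates via a 3x3 homography."""
    pts = np.hstack([foot_points_px, np.ones((len(foot_points_px), 1))])  # homogeneous
    ground = (H_cam_to_ground @ pts.T).T
    return ground[:, :2] / ground[:, 2:3]  # dehomogenize

# Placeholder homography (would come from camera calibration in practice).
H_example = np.array([[0.02, 0.0,    -3.0],
                      [0.0,  0.03,   -5.0],
                      [0.0,  0.0001,  1.0]])
print(to_top_view(np.array([[320.0, 460.0]]), H_example))  # ground-plane (x, y)
```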
This paper addresses the automated detection of line features in large
industrial inspection images. The manual examination of these images
is labor-intensive and delays inspection results. Hence, it is desirable to automatically detect certain features of interest. In this paper we are concerned with the detection of vertical or slanted line features that appear at unpredictable intervals across the image. The line features may appear distorted due to sensor shortcomings and operating conditions. Line features are modeled as a pair of smoothed step edges of opposite polarity in close proximity, and two operators are used to detect them. The individual operator outputs are combined in a non-linear fashion to form the line-feature response. The line features are then obtained by following the ridge of the line-feature response. In experiments on four datasets, over 98.8% of line features were correctly detected, with a low false-positive rate. Experiments also show that the approach works well in the presence of considerable noise due to poor operating conditions or sensor failure.
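The double-edge line model lends itself to a short illustration. The sketch below is not the paper's operators: it assumes bright, roughly vertical lines of a known approximate width and uses a smoothed derivative (derivative-of-Gaussian) as the step-edge operator, taking its positive and negative responses as the two opposite-polarity edges and combining them non-linearly so that a strong line-feature response requires both edges to be present.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def line_feature_response(image, sigma=2.0, width=5):
    """Non-linear combination of two opposite-polarity smoothed step-edge responses."""
    # Smoothed horizontal derivative approximates a step-edge operator.
    dx = gaussian_filter1d(image.astype(float), sigma=sigma, axis=1, order=1)
    # For a bright line: rising edge at its left boundary, falling edge at its right.
    left = np.maximum(dx, 0.0)
    right = np.maximum(-np.roll(dx, -width, axis=1), 0.0)  # align right edge with left
    # Geometric mean rewards co-occurrence of both edges (non-linear combination).
    return np.sqrt(left * right)

# Synthetic test: a bright vertical line of width 5 in a dark image.
img = np.zeros((100, 200))
img[:, 95:100] = 255.0
resp = line_feature_response(img)
print(resp[:, 90:105].max() > resp[:, :80].max())  # response peaks near the line
# Line features would then be extracted by following ridges of this response.
```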