Passive millimeter wave (PMMW) imaging sensor technology has made significant advances in recent years, permitting the development of cameras that can be manufactured economically. In addition to its operation in adverse weather, the PMMW camera is non-emitting, which makes it suitable for both military and civilian applications. For example, aircraft executing autonomous landing using GPS need an all-weather, real-time, true image of the forward scene during the touch-down, roll-out, turn-off and taxi maneuvers. The PMMW camera not only provides such an image, but is easily implementable as a sensor for the pilot and as a system which operates in an airport environment. We shall address these issues and discuss other applications of this new sensor technology.
Imaging is more commonly performed in the visible or infrared regions, although it is also possible to use millimeter waves. Passive millimeter wave imaging, however, has the advantage of being able to see in poor weather conditions such as thick fog. The images, unlike radar signatures, have a natural appearance that can be easily interpreted. The spatial resolution of these imagers is limited by the aperture size and the choice of operating frequency. Novel signal processing algorithms have been applied to improve the spatial resolution. Millimeter wave imagers detect slight temperature differences in the scene: using current technology it is possible to sense changes as low as 0.2 K, whilst the contrast between an aircraft and its background can be as high as 200 K. A millimetric imager has been used at London Heathrow airport to demonstrate the high quality of the images that can be obtained. Aircraft can be recognized, runways and grass delineated, and complex areas such as gates imaged. A qualitative comparison has been made of radar, thermal imaging and passive millimeter wave imaging for ground movement control. The possibility of deploying a passive millimeter wave imager on a commercial aircraft and of using it as part of an enhanced vision system is also discussed.
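As a back-of-the-envelope illustration of how aperture size and operating frequency bound spatial resolution, and how bandwidth and integration time bound radiometric sensitivity, here is a minimal sketch using two standard relations. All numerical values are illustrative assumptions, not figures from the paper.

```python
# Sketch: diffraction-limited resolution and ideal radiometer sensitivity.
import math

def angular_resolution_rad(frequency_hz: float, aperture_m: float) -> float:
    """Diffraction-limited angular resolution (Rayleigh criterion, 1.22*lambda/D)."""
    wavelength = 3.0e8 / frequency_hz
    return 1.22 * wavelength / aperture_m

def radiometric_sensitivity_k(t_sys_k: float, bandwidth_hz: float,
                              integration_s: float) -> float:
    """Ideal total-power radiometer sensitivity, dT = T_sys / sqrt(B * tau)."""
    return t_sys_k / math.sqrt(bandwidth_hz * integration_s)

# Example: a 94 GHz imager with a 0.5 m aperture (assumed values).
theta = angular_resolution_rad(94e9, 0.5)
print(f"beamwidth ~ {math.degrees(theta):.2f} deg")
# Example: 1000 K system temperature, 2 GHz bandwidth, 20 ms integration.
print(f"sensitivity ~ {radiometric_sensitivity_k(1000.0, 2e9, 0.02):.2f} K")
```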
Millimeter-wave imaging has advantages over conventional visible or infrared imaging for many applications because millimeter-wave signals can travel through fog, snow, dust, and clouds with much less attenuation than infrared or visible light waves. Additionally, passive imaging systems avoid many problems associated with active radar imaging systems, such as radar clutter, glint, and multi-path return. ThermoTrex Corporation previously reported on its development of a passive imaging radiometer that uses an array of frequency-scanned antennas coupled to a multichannel acousto-optic spectrum analyzer (Bragg cell) to form visible images of a scene through the acquisition of thermal blackbody radiation in the millimeter-wave spectrum. The output from the Bragg cell is imaged by a standard video camera and passed to a computer for normalization and display at real-time frame rates. An application of this system is its incorporation as part of an enhanced vision system to provide pilots with a synthetic view of a runway in fog and during other adverse weather conditions. Ongoing improvements to a 94 GHz imaging system and examples of recent images taken with this system will be presented. Additionally, the development of dielectric antennas and an electro-optic-based processor for improved system performance, and the development of an `ultra-compact' 220 GHz imaging system will be discussed.
This paper presents an experimental radar at 35 GHz under development at the Airborne Systems Division of Daimler-Benz Aerospace, Ulm. The radar uses FMCW (frequency-modulated continuous-wave) waveforms with a frequency scanning antenna covering an azimuth sector of more than 30 degrees. Several signal processing algorithms, e.g. CFAR and contrast enhancement, have been developed for different applications. Due to the electronic scanning of the radar beam, an update rate of up to 15 frames per second can be achieved, as required for synthetic vision systems in aircraft. High resolution in both range and azimuth makes this design suitable for a wide range of applications. The radar is suitable for use in helicopters or fixed-wing aircraft. Helicopter applications are obstacle warning (including wire detection), terrain avoidance, ground mapping and weather detection. Fixed-wing aircraft applications are runway detection, including detection of obstacles on the runway and taxiways. The demonstrator is used to verify the functionality of this radar design. Technical data and measurement results will be presented. Based on these measurements the radar performance will be evaluated.
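To make the FMCW principle concrete, here is a minimal sketch of the standard beat-frequency-to-range relation for a linear chirp. The sweep parameters are assumptions for illustration, not the Daimler-Benz design values.

```python
# Sketch: single-target range from the FMCW beat frequency,
# R = c * f_beat * T_chirp / (2 * B).
C = 3.0e8  # speed of light, m/s

def fmcw_range_m(beat_hz: float, sweep_bandwidth_hz: float,
                 chirp_duration_s: float) -> float:
    """Range of a single target from the measured beat frequency."""
    return C * beat_hz * chirp_duration_s / (2.0 * sweep_bandwidth_hz)

# Hypothetical 35 GHz sensor parameters: 200 MHz sweep over 1 ms.
print(fmcw_range_m(beat_hz=500e3, sweep_bandwidth_hz=200e6,
                   chirp_duration_s=1e-3))  # -> 375.0 m
```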
Infrared sensors in the nominal 8-12 and 3-5 micron wavebands can be shown to have complementary performance characteristics when used over a range of meteorological conditions. The infrared/optical multisensor for the autonomous landing guidance system integrates staring longwave, midwave, and visible sensors into an environmentally sealed and purged assembly. The infrared modules include specific enhancements for the detection of runways under adverse weather conditions. The sensors incorporate pixel-for-pixel overlap registration, and the fields of view match a conformal head-up display with sensor/display boresighting to within a fraction of a pixel. Tower tests will be used to characterize the sensors and gather data to support simulation and image processing efforts. After integration with other elements of the autonomous landing guidance system, flight tests will be conducted on Air Force and commercial transport aircraft. In addition to display and analog video recording, the multisensor data will be digitally captured during critical flight test phases.
This paper discusses new techniques for providing a `FLIR-like', multi-pixel range receiver for control and guidance applications using an active LADAR system. The major tradeoffs in developing a LADAR sensor with multi-pixel, high-resolution capabilities using conventional techniques are large size, high cost, or a slow frame rate. SEO has conceived and is currently developing a new receiver technique using a charge-coupled device (CCD) array element that shows great promise for overcoming all of these drawbacks. Although this technique is a new approach for LADAR sensors, it is a concept that has been used for decades in the receivers of common-module FLIR systems.
We introduce an apparatus and methodology to support real-time color imaging for night operations. Registered imagery obtained in the visible through near-IR band is combined with thermal IR imagery using principles of biological color vision. The visible imagery is obtained using a Gen III image intensifier tube optically coupled to a conventional CCD, while the thermal IR imagery is obtained using an uncooled thermal imaging array, the two fields of view being matched and imaged through a dichroic beam splitter. Remarkably realistic color renderings of night scenes are obtained, and examples are given in the paper. We also describe a compact integrated version of our system in the form of a color night vision device, in which the intensifier tube is replaced by a high-resolution, low-light-sensitive CCD. Example CCD imagery obtained under starlight conditions is also shown. The system described here has the potential to support safe and efficient night flight, ground, sea and search & rescue operations, as well as night surveillance.
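To illustrate the general two-band fusion idea (not the authors' specific algorithm), here is a minimal sketch that maps a registered low-light visible/near-IR frame and a thermal IR frame onto opponent-style color channels so warm objects and reflective detail take on distinct hues. The channel assignments and weights are assumptions for illustration.

```python
# Sketch: fuse two registered 8-bit grayscale frames into an RGB night image.
import numpy as np

def fuse_to_color(visible: np.ndarray, thermal: np.ndarray) -> np.ndarray:
    """Combine visible/NIR and thermal IR frames into a false-color RGB image."""
    v = visible.astype(np.float32) / 255.0
    t = thermal.astype(np.float32) / 255.0
    rgb = np.stack([
        t,                     # red   <- thermal IR (warm objects stand out)
        v,                     # green <- intensified visible / near IR detail
        np.clip(v - t, 0, 1),  # blue  <- visible minus thermal (cool detail)
    ], axis=-1)
    return (rgb * 255).astype(np.uint8)

# Usage with synthetic frames of matching size:
vis = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
ir = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
color = fuse_to_color(vis, ir)
```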
In image processing and analysis, the main parameters used to control image transformations are light intensity and the size and orientation of an object. Mathematical morphology has operators that are controlled by these parameters. Moreover, there is a class of operators, based on morphological reconstruction, that is controlled by another parameter, namely connectivity. This feature makes it possible to assess whether two objects are touching. It has also been generalized to graytone images. Connectivity is a robust parameter in image processing, and the goal of this paper is to illustrate this class of operators. The application used for illustration segments and filters runway images.
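For readers unfamiliar with the operator family mentioned above, here is a textbook-style sketch of grayscale morphological reconstruction by dilation: a marker image is repeatedly dilated geodesically under a mask until stability. This is a generic formulation under a 4-connectivity assumption, not the paper's own code, and the runway example at the end is hypothetical.

```python
# Sketch: grayscale reconstruction by iterated geodesic dilation.
import numpy as np
from scipy import ndimage

def reconstruct_by_dilation(marker: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Grayscale reconstruction of mask from marker (marker clipped below mask)."""
    selem = ndimage.generate_binary_structure(2, 1)  # 4-connected neighborhood
    current = np.minimum(marker, mask).astype(np.float64)
    while True:
        dilated = ndimage.grey_dilation(current, footprint=selem)
        nxt = np.minimum(dilated, mask)      # stay under the mask (geodesic)
        if np.array_equal(nxt, current):     # stability reached
            return nxt
        current = nxt

# Example use: keep only the bright structure connected to a seed region,
# in the spirit of segmenting and filtering a runway from an image.
image = np.random.rand(64, 64)
seed = np.zeros_like(image)
seed[30:34, 30:34] = image[30:34, 30:34]     # marker inside the region of interest
connected_component = reconstruct_by_dilation(seed, image)
```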
Synthetic Vision for Helicopters and VTOL Aircraft
Sensors for synthetic vision are needed to extend the mission profiles of helicopters. A special task for various applications is the autonomous position hold of a helicopter above a ground-fixed or moving target. As a proof of concept for a general synthetic vision solution, a restricted machine vision system capable of locating and tracking a special target was developed by the Institute of Flight Mechanics of Deutsche Forschungsanstalt fur Luft- und Raumfahrt e.V. (the German Aerospace Research Establishment). This sensor, which is specialized to detect and track a square, was integrated in the fly-by-wire helicopter ATTHeS (Advanced Technology Testing Helicopter System). An existing model-following controller for the forward flight condition was adapted to the hover and low-speed requirements of the flight vehicle. The special target, a black square with a side length of one meter, was mounted on top of a car. Flight tests demonstrated the automatic stabilization of the helicopter above the moving car by synthetic vision.
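The geometry behind tracking a target of known size can be illustrated with a simple pinhole-camera sketch: the apparent size of the 1 m square gives the range, and the centroid offset gives the lateral displacement. The focal length and pixel measurements below are hypothetical, and this is not the DLR system's actual estimator.

```python
# Sketch: range and lateral offset from a known-size square target.
def target_range_m(focal_px: float, target_size_m: float,
                   apparent_size_px: float) -> float:
    """Range from the apparent image size of a target of known physical size."""
    return focal_px * target_size_m / apparent_size_px

def lateral_offset_m(focal_px: float, centroid_px_from_center: float,
                     range_m: float) -> float:
    """Lateral displacement of the target from the camera's optical axis."""
    return centroid_px_from_center * range_m / focal_px

# Hypothetical camera with 800 px focal length; square appears 40 px wide.
r = target_range_m(800.0, 1.0, 40.0)   # -> 20.0 m above the target
x = lateral_offset_m(800.0, 25.0, r)   # -> 0.625 m off-axis
print(r, x)
```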
A pilot support system performing navigation and control tasks to guide the helicopter autonomously along a flight track based on visual machine perception is currently under development at UBM. The machine perception system uses conventional measurement data as well as CCD image sequences for state estimation, landmark/landing site recognition and tracking. The state estimates are used by a control module to perform the given guidance task. Before real flight tests can be undertaken, intensive testing and optimization of the algorithms is required; this is performed through simulation. The simulation environment provides real-time performance with respect to helicopter dynamics, sensor data communication to the machine perception system, control output computation and perspectively mapped synthetic computer images representing the external world. The paper describes the simulation environment for real-time hardware-in-the-loop simulations. As many as possible of the hardware components to be used in real flight tests are included within the test environment. The machine perception system design for sensor fusion and ego-state estimation is presented, and data interfaces to the simulation environment are discussed. A combined feedback/feedforward command generation control module uses the state estimates for guidance along the planned flight trajectory. Results from real-time simulation runs using simulation data as `ground truth' are described.
The ASIST (Aircraft Ship Integrated Secure and Traverse) system is a second-generation shipborne helicopter handling system developed by Indal Technologies Inc. (ITI). The first-generation RAST (Recovery Assist, Securing and Traversing) system has established itself as the most successful shipborne helicopter handling system in the world, with more than 150 shipsets delivered or on order to naval forces sailing all the world's oceans. ASIST completed sea trials by July 31, 1992, and production units are in operation with the Chilean Navy. A significant feature of ASIST is the incorporation of a Helicopter Position Sensing Subsystem (HPSS), which is based on an automatic target detection technique developed at ITI. The HPSS will detect a laser-beacon-equipped helicopter within one second (usually 0.25 second) of its appearing in the field of view of the system cameras. The system will then track the helicopter and provide real-time helicopter position relative to the landing area, updated every 1/30 second, until it has landed. A Rapid Securing Device (RSD) is also driven by the position data to track the helicopter at low hover. Once the system has detected that the helicopter has landed on the deck, the RSD automatically approaches the helicopter and secures it. This occurs within two seconds. The RSD and traversing system are then used to align the helicopter with the deck tracks and manoeuvre it into a hangar, all without the need for manual intervention.
This paper describes experiments that demonstrate the feasibility of using a video sensor for image-based landing trajectory measurement, both for data reduction and for auto-landing guidance purposes. Video footage recorded with a forward-looking CCD camera mounted under the nose of a Boeing 737 was analyzed post-test on a specially designed data reduction workstation. The combined image analysis and photogrammetric algorithms are capable of estimating and automatically tracking the six degrees of freedom (six DOF, i.e. roll, pitch, yaw, and position coordinates) of a rigid body from a single camera view. The six DOF tracking algorithm locates fixed features of the runway in each video frame to estimate and track the six DOF of the camera in the runway reference coordinates. Runway lights were used as the fixed features in this case. The random measurement errors without temporal smoothing were estimated from the measurement results. Although the vision-based approach has a much smaller systematic error than GPS or an inertial system, it can use these as secondary sensors for initial acquisition and for maintaining continuous track. The approach is shown to be viable for a real-time auto-landing guidance vision system employing commercially available hardware technology.
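One common way to recover a camera's six DOF from known fixed features, in the same spirit as the approach above, is a perspective-n-point solution. The sketch below uses OpenCV's solvePnP as a stand-in; the paper's own photogrammetric algorithm may differ, and the runway coordinates, pixel detections, and intrinsics are hypothetical.

```python
# Sketch: camera pose in runway coordinates from detected runway lights.
import numpy as np
import cv2

# Known 3D positions of four runway lights in runway coordinates (meters).
object_pts = np.array([[0, 0, 0], [60, 0, 0], [0, 45, 0], [60, 45, 0]],
                      dtype=np.float64)
# Their detected pixel locations in one video frame (hypothetical values).
image_pts = np.array([[310, 420], [330, 415], [305, 460], [335, 455]],
                     dtype=np.float64)
# Assumed pinhole intrinsics for the nose-mounted camera.
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)
camera_pos_runway = (-R.T @ tvec).ravel()   # camera position in the runway frame
print(ok, camera_pos_runway)
```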
A guidance and control concept is presented which features computer-generated synthetic vision for approach and landing in poor visibility. The synthetic vision system provides a realistic 3D terrain image displayed to the pilot. Flight guidance symbology is integrated into the synthetic terrain imagery. The synthetic vision system is combined with a precision navigation system. High navigation accuracy is achieved by coupling differential global positioning and inertial sensor systems and by applying computational filter algorithms. Taxi tests were conducted with a specially equipped test vehicle in November 1993. Flight tests were performed with a research aircraft at Braunschweig airport in October 1994. The test results show that the pilot was able to accurately control the aircraft and to perform precision approaches using synthetic vision.
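The coupling of differential GPS and inertial sensors can be illustrated, in heavily simplified form, by a complementary-filter sketch: the inertial channel supplies smooth high-rate propagation, and the DGPS fixes remove the drift at a lower rate. The gain and step logic below are illustrative assumptions, not the filter design used in the paper.

```python
# Sketch: one-axis complementary blending of inertial propagation and DGPS fixes.
def blend_step(pos_est, vel_est, accel_meas, gps_pos, dt, k=0.2):
    """Propagate one axis with inertial data, then nudge toward a DGPS fix."""
    vel_est += accel_meas * dt          # integrate measured acceleration
    pos_est += vel_est * dt             # integrate velocity to position
    if gps_pos is not None:             # DGPS measurement available this step
        pos_est += k * (gps_pos - pos_est)
    return pos_est, vel_est

# Usage: high-rate inertial updates, occasional DGPS corrections.
pos, vel = 0.0, 0.0
for step in range(100):
    fix = 1.0 * step * 0.01 if step % 10 == 0 else None   # hypothetical fixes
    pos, vel = blend_step(pos, vel, accel_meas=0.0, gps_pos=fix, dt=0.01)
```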
The pilot's achievement of situation awareness (SA) remains a challenge for cockpit designers. Various, mainly technology-driven, solutions have been suggested to improve SA. Another approach is the integration of data and the presentation of predictive data. This approach was pursued in the development of a perspective, pictorial display format for transport aircraft. The 4D-Display and the accompanying Navigation-Display greatly enhance pilots' awareness of their situation relative to terrain, obstacles, aircraft in the vicinity and virtual elements. Since most of the information is presented graphically, it can be grasped intuitively by the observer. Especially in phases of high workload, the highly pre-processed information and its redundant and predictive presentation will significantly contribute to pilots' situation awareness. The displays were integrated into a fixed-base flight simulator. Experiments with airline pilots showed that the perspective symbology reduced the pilot's workload and, at the same time, improved the tracking of the demanded flight path. In order to provide the graphics processing power required for complex pictorial displays, VDO-L is developing a high-performance symbol generator which meets the demands of prospective applications. The multi-processor system has been designed for modular avionics architectures and provides full 2D and 3D capabilities.
The ever-increasing demand in the airline industry to reduce the costs associated with weather-related flight delays and cancellations has created the need to land an aircraft in low visibility. This has driven research in recent years on enhanced vision systems that allow all-weather operations by providing both visual cues to the pilot and an independent integrity monitor. This research has focused on providing aircraft users with both enhanced performance and a cost-effective landing solution with less dependence on ground systems, and has interested both the military and civil aircraft operator communities. The Autonomous Landing Guidance (ALG) system provides the capability to land in low visibility by displaying to the pilot an image of the real world without the need for an onboard Category II or III (CAT II/III) autoland system and without the associated ground facilities normally required. Besides the inherent advantage of saving the cost of expensive installations at airports, ALG also helps to relieve the airport capacity problem, weather-related delays and diversions, and airport closures. Low-visibility conditions typically cause the complete shutdown of smaller regional airports and reduce the availability of runways at major hubs, which creates a capacity problem for airlines.
Simulation and Human Factors for Avionic Synthetic Vision
In many branches of technology there is a growing demand for tools that present or visualize data or scenarios graphically. The Institute of Flight Mechanics therefore decided to develop a real-time visualization tool to show a 3D view of flight vehicles and their movement in space. The tool was built in a modular fashion on a scalable parallel processing system based on transputers and PowerPCs. The scalability makes it possible to adjust the complexity and the cost of the system to the application. To achieve more photo-realism and lower latency, the Deutsche Forschungsanstalt fur Luft- und Raumfahrt (DLR), together with the TNO Physics and Electronics Laboratory (Netherlands) and Construcciones Aeronauticas (CASA, Spain), is developing a Real-Time Simulation System (RTSS). RTSS is a generic image generation module which can be used in the construction of simulators. RTSS has to generate images of photo-realistic quality in real time. It will be based on a combined transputer/PowerPC hardware architecture. Both systems are used to generate synthetic vision for flight simulators, and together they can form a testbed for easily testing different versions of enhanced vision for pilots. We first describe the most distinctive features of RTSS and briefly discuss its hardware and software architectures. Thereafter we give some examples of enhanced vision displays included in the visualization tools.
The charter of the Autonomous Landing Guidance (ALG) technology reinvestment project (TRP) is to develop a prototype system that will serve to integrate a variety of existing defense technologies to augment landing, take-off and ground operation in low-visibility conditions, particularly on runways that are not approved for Category II (CAT II) or IIIa precision operations. Wright Laboratory is a member of the industry-government alliance that is developing the ALG system and is responsible for addressing pilot-vehicle interface issues associated with the integration of ALG into aircraft cockpits. Wright Laboratory is conducting a study of the relationship between the amount and type of symbology displayed on the head-up display (HUD) during an ALG approach and its influence on pilot performance. This paper examines the driving assumptions, the analysis methods and their results, and the planned evaluation activities associated with determining the suitability of HUD symbology in an ALG context.
We investigated whether a raster image on a Head-Up Display (HUD) might interfere with runway recognition during a low-visibility (CAT II and IIIa) approach. The primary reason for incorporating a HUD into the flight deck is to allow the pilot to observe instrument information while maintaining a view of the outside scene. The raster image could, however, obscure the outside scene, leaving the pilot unaware that the approach or runway lights are visible. In our HUD lab, twenty-one subjects were asked to observe a simulated outside scene through a HUD and indicate when they first saw runway approach lights. Each subject was presented 12 data runs with a simulated 35-GHz raster radar image and stroke symbology simultaneously presented on the HUD, and 12 data runs with only stroke symbology on the HUD. Each run was conducted under simulated fog conditions of either 700-ft Runway Visual Range (RVR) or 1200-ft RVR. We found that the presence of the radar image decreased the recognition range by 24 percent (z = 5.71, p < 0.001). Subjective comments by the study participants show that the radar serves as a valuable aid in confirming flight path alignment with the runway under low-visibility conditions.
Developers of synthetic vision systems for low-level flight simulators face the problem of deciding which features to incorporate in order to achieve the most realistic training conditions. This paper supports an approach to this problem on the basis of modeling the pilot's visual behavior. The approach is founded upon the basic requirement that the pilot's mechanisms of visual perception should be identical in simulated and real low-level flight. Flight simulator experiments with pilots were conducted for knowledge acquisition. During the experiments, video material of a real low-level flight mission containing different situations was displayed to the pilot, who was acting under a realistic mission assignment in a laboratory environment. The pilot's eye movements were measured during the replay. The visual mechanisms were divided into rule-based strategies for visual navigation, based on the preflight planning process, as opposed to skill-based processes. The paper presents a model of the pilot's planning strategy for a visual fixing routine as part of the navigation task. The model is a knowledge-based system based upon the fuzzy evaluation of terrain features in order to determine the landmarks used by pilots. A computer implementation of the model can be shown to select the same features preferred by trained pilots.
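To give a feel for fuzzy evaluation of terrain features, here is a toy sketch in the spirit of the approach described above. The criteria, membership shapes, weights, and candidate landmarks are all assumptions for illustration, not the paper's knowledge base.

```python
# Sketch: rank candidate fixing landmarks by a fuzzy (min-aggregated) score.
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def landmark_score(distance_to_track_m: float, size_m: float,
                   contrast: float) -> float:
    """Aggregate score: near the planned track, large enough, visually distinct."""
    near = tri(distance_to_track_m, -1.0, 0.0, 2000.0)
    big = tri(size_m, 20.0, 150.0, 1000.0)
    distinct = min(contrast, 1.0)
    return min(near, big, distinct)       # conjunctive (min) aggregation

# Hypothetical candidates: (distance to track, size, visual contrast).
candidates = {"church tower": (300, 150, 0.9), "forest edge": (900, 500, 0.4)}
ranked = sorted(candidates, key=lambda k: landmark_score(*candidates[k]),
                reverse=True)
print(ranked)   # -> ['church tower', 'forest edge']
```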
Perspective synthetic displays that supplement, or supplant, the optical windows traditionally used for guidance and control of aircraft are accompanied by potentially significant human factors problems related to the optical geometric conformality of the display. Such geometric conformality is broken when optical features are not in the location they would be if directly viewed through a window. This often occurs when the scene is relayed or generated from a location different from the pilot's eyepoint. However, assuming no large visual/vestibular effects, a pilot can often learn to use such a display very effectively. Important problems may arise, however, when display accuracy or consistency is compromised, and this can usually be related to geometrical discrepancies between how the synthetic visual scene behaves and how the visual scene through a window behaves. In addition to these issues, this paper examines the potentially critical problem of the disorientation that can arise when both a synthetic display and a real window are present in a flight deck, and no consistent visual interpretation is available.
Sensing and Synthetic Vision for Cars, Boats, and Robots
The Navigation and Control Group, Missile Guidance Directorate, Research Development & Engineering Center of the U.S. Army Missile Command is conducting a program to develop and demonstrate a robust, low-cost machine vision system for autonomous vehicles. This machine vision system has the requirement of providing robust classification of roads and obstacles over varying terrain, lighting, and weather. The focus of the development is to operate using a passive sensor suite consisting of a color video camera and a black-hot FLIR video camera. Machine vision algorithms have been developed and tested in a simulation environment using test sequences from video segments of various road types. This paper presents a novel approach to road and obstacle classification based on color video input. The paper begins by defining the problem, continues with a discussion of the major functions of the simulation, including the mission supervisor, the image server and the image processing algorithms, and concludes with experimental results.
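As a simple illustration of color-cue road classification (not the algorithms developed under this program), the sketch below gates pixels by saturation and brightness in HSV space, exploiting the tendency of paved surfaces to be low in saturation. The thresholds are illustrative assumptions.

```python
# Sketch: crude color-based road mask from a single BGR video frame.
import numpy as np
import cv2

def road_mask_from_color(frame_bgr: np.ndarray) -> np.ndarray:
    """Return a uint8 mask where low-saturation, mid-brightness pixels = road."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    sat, val = hsv[..., 1], hsv[..., 2]
    mask = ((sat < 60) & (val > 50) & (val < 220)).astype(np.uint8) * 255
    # Clean up speckle with a small morphological opening.
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# Usage with a synthetic frame; later stages would refine this mask.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
mask = road_mask_from_color(frame)
```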
Unmanned guided vehicles (UGVs) require the ability to visually understand the objects contained within their operating environments in order to locally guide vehicles along a globally determined route. Several large-scale programs funded over the past decade have created multimillion-dollar prototype vehicles incapable of functioning outside their initial test track environments. This paper describes the Unmanned Guided Vehicle System (UGVS) developed for the US Army Missile Command for operation in natural terrain. The goal of UGVS is to develop a real-time system adaptive to a range of terrain environments (e.g. roads, open fields, wooded clearings, forest areas) and seasonal conditions (e.g. fall, winter, summer, spring). UGVS consists of two primary processing activities. First, the UGVS vision system is tasked with determining the location of gravel roads in video imagery, detecting obstacles in the vehicle's path, identifying distant road spurs, and assigning a classification confidence to each image component. Second, the guidance and navigation system computes the global route the vehicle should pursue, utilizes the image classification results to determine obstructions in the local vehicle path, computes navigation commands to drive the vehicle around hazardous obstacles, correlates visual road spur cues with global route digital maps, and provides the navigation commands to move the vehicle forward. Results of UGVS working in a variety of terrain environments are presented to reinforce the system concepts.
A polarimetric radar navigation system makes use of a marine radar and polarization twist-grid retroreflectors in order to navigate a confined waterway, even in inclement weather or after dark. A novel vision-based processor is demonstrated that successfully uses a priori information about the reflector location along the water-land boundary of the waterway. The processor, based on the CARTOON edge detection algorithm, is tuned to radar resolutions and the scale of interest for the edge boundary. A fuzzy processor performs the function of image interpretation, combining the edge map information with the primary detections to effectively remove false alarms.
Keywords: radar, marine navigation, polarimetric retroreflectors, machine vision, image processing, image understanding.
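The fusion step of combining the edge map with the primary detections can be pictured with a very simple gating sketch: a detection is kept only if it lies close to the known water-land boundary where the reflectors are sited. The gate distance and boundary representation below are assumptions, and this is not the CARTOON-based fuzzy processor itself.

```python
# Sketch: suppress radar false alarms by gating detections to the shoreline.
import numpy as np

def filter_detections(detections_xy: np.ndarray, boundary_xy: np.ndarray,
                      gate_m: float = 15.0) -> np.ndarray:
    """Keep detections within gate_m of any sampled boundary point."""
    kept = []
    for det in detections_xy:
        d = np.min(np.hypot(boundary_xy[:, 0] - det[0],
                            boundary_xy[:, 1] - det[1]))
        if d <= gate_m:
            kept.append(det)
    return np.array(kept)

# Usage with hypothetical detection and boundary coordinates (meters).
dets = np.array([[100.0, 20.0], [400.0, 300.0]])
shore = np.array([[95.0, 18.0], [150.0, 25.0], [210.0, 40.0]])
print(filter_detections(dets, shore))   # only the first detection survives
```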
Many different sensors and systems, from sonar to machine vision, have been installed on ground vehicles and automobiles. This paper describes the use of radar to improve driving safety and convenience. Radars are valuable sensors for all-weather operation, and experiments with automotive radar sensors have been conducted for over 40 years. This paper shows the advantages and disadvantages of applying microwave and millimeter wave radar to obstacle detection and collision avoidance in a roadway environment. The performance differences between avoidance and warning sensors are discussed, and a problem set is devised for a typical forward-looking collision warning application. Various radar systems, including pulse and continuous wave transceivers, have been applied to this problem. These system types are evaluated as to their suitability as collision warning sensors. The possible solutions are reduced to a small number of candidate radar types, and one such radar was chosen for full-scale development. A low-cost frequency-modulated/continuous-wave radar system was developed for automotive collision warning. The radar is attached to the sun visor inside the vehicle and has been in operation for over four years. The radar monitors the range and range-rate of other vehicles and obstacles, and warns the driver when it perceives that a dangerous situation is developing. A system description and measured data are presented that show how the 24.075 to 24.175 GHz band can be used for an adequate early warning system.
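To illustrate the range-rate measurement and the warning logic in general terms, here is a minimal sketch: closing speed follows from the Doppler shift at a 24 GHz carrier, and a warning is raised when the measured range falls below a simple reaction-plus-braking distance. The carrier, reaction time, and deceleration values are illustrative assumptions, not the production sensor's parameters.

```python
# Sketch: Doppler-derived closing speed and a simple collision-warning rule.
C = 3.0e8
F_CARRIER = 24.125e9                      # assumed mid-band carrier, Hz
WAVELENGTH = C / F_CARRIER

def closing_speed_mps(doppler_hz: float) -> float:
    """Relative (closing) speed from the measured Doppler shift, f_d = 2*v/lambda."""
    return doppler_hz * WAVELENGTH / 2.0

def warn(range_m: float, closing_mps: float, reaction_s: float = 1.5,
         decel_mps2: float = 6.0) -> bool:
    """Warn if range is less than reaction distance plus braking distance."""
    if closing_mps <= 0:
        return False
    needed = closing_mps * reaction_s + closing_mps ** 2 / (2.0 * decel_mps2)
    return range_m < needed

v = closing_speed_mps(3200.0)             # ~19.9 m/s closing
print(v, warn(range_m=45.0, closing_mps=v))   # -> warn the driver
```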
We discuss the use of an optical correlator with a highly coupled filter and dappled targets to track an object in a field of view cluttered by background noise and/or similar objects. The dappled targets are fractal images whose statistics are independent of scale. Each target pattern is unique, which allows individual targets to be tracked. We report the drop in correlation (and hence recognition) of an object as a function of in-plane rotation and as a function of range. We discuss plans for an application in Johnson Space Center's Automation and Robotics group, in which correlation processing of these targets would distinguish an object and pass its position and orientation to a robot control system. Using MEDOF (minimum Euclidean distance optimal filter) to create filters on the coupled filter modulator, we show that background clutter can be optically filtered out.
Obstacle detection is one of the most important tasks in driving autonomous land vehicles (ALVs). It is a prerequisite for driving an ALV safely and precisely. A new method for obstacle detection that uses the area parameters of an obstacle (in 2D images) is introduced here. By explicitly determining the rate at which the area parameters of the obstacle change in subsequent images, this approach can estimate the depth of the obstacle quickly. In order to make the results more accurate, a Kalman filter has been used. The method is practical and simple (no camera calibration is needed), especially when applied to mobile robots without high-speed parallel computer systems. Together with a very simple scheme for recognizing landmarks beside the road, our detection method can help the ALV avoid obstacles and can even drive the vehicle according to the meaning of a given landmark. This is useful for an ALV running in a complex environment. The approach introduced in this paper has been applied to Labmate robots (equipped with a single CCD camera) produced by Transition Research Corporation. Experimental results indicate that our Labmate vehicle performed successfully in obstacle avoidance and topological map tracking.
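The depth-from-area-expansion idea can be pictured with the standard looming relation: for an approaching obstacle the image area grows as A ~ 1/Z^2, so A_dot/A = -2*Z_dot/Z and the time to contact is tau = 2*A/A_dot; with a known (or assumed) approach speed, depth follows as Z = speed * tau. The sketch below illustrates this relation only; the paper's exact formulation and its Kalman filter design are not reproduced, and all numbers are hypothetical.

```python
# Sketch: time to contact and depth from the growth of an obstacle's image area.
def time_to_contact_s(area_now_px: float, area_prev_px: float,
                      dt_s: float) -> float:
    """Time to contact from the rate of change of the obstacle's image area."""
    area_rate = (area_now_px - area_prev_px) / dt_s
    if area_rate <= 0:
        return float("inf")               # area not growing: not approaching
    return 2.0 * area_now_px / area_rate

def depth_m(ttc_s: float, approach_speed_mps: float) -> float:
    """Obstacle depth given the vehicle's (assumed known) approach speed."""
    return ttc_s * approach_speed_mps

# Hypothetical measurements from two frames 40 ms apart.
tau = time_to_contact_s(area_now_px=1210.0, area_prev_px=1100.0, dt_s=0.04)
print(tau, depth_m(tau, approach_speed_mps=2.0))   # ~0.88 s, ~1.76 m
```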