In this article, a reactive system for planning robot actions is described. The hierarchical control system architecture consists of planning-executing-monitoring-modelling (PEMM) elements. A PEMM element is a goal-oriented, combined processing and data element comprising a planner, an executor, a monitor, a modeler, and a local model. The elements form a tree-like structure: an element receives tasks from its ancestor and sends subtasks to its descendants. The model knowledge is distributed into the local models, which are connected to each other, and the elements can be synchronized. The PEMM architecture is strictly hierarchical and integrates planning, sensing, and modelling into a single framework. A PEMM-based control system is reactive, as it can cope with asynchronous events and operate under time constraints. The control system is intended primarily for controlling mobile robots and robot manipulators in dynamic and partially unknown environments. It is especially suitable for applications consisting of physically separated devices and computing resources.
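As a rough illustration of the PEMM idea, the following sketch (all names invented, not the authors' implementation) shows a tree of elements in which each element plans subtasks for its descendants, leaf elements execute directly, and every element records executed tasks in its local model:

```python
# Hypothetical sketch of a PEMM element tree. Names and the trivial planner
# are illustrative assumptions, not the architecture's actual interface.
class PEMMElement:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.log = []  # stands in for the local model: executed tasks

    def plan(self, task):
        """Trivial planner: one subtask per child element."""
        return [f"{task}/{c.name}" for c in self.children]

    def execute(self, task):
        self.log.append(task)            # "modeler": update the local model
        if not self.children:            # leaf element: execute directly
            return [task]
        done = []
        for child, sub in zip(self.children, self.plan(task)):
            done += child.execute(sub)   # delegate subtasks down the tree
        return done

arm = PEMMElement("arm")
base = PEMMElement("base")
robot = PEMMElement("robot", [base, arm])
print(robot.execute("fetch"))  # → ['fetch/base', 'fetch/arm']
```

A real PEMM element would also monitor execution and replan on failure; this sketch only shows the hierarchical task decomposition.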
This paper describes the implementation of a mission management system for a mobile robot. It uses the concepts of perception and of temporal and spatial planning as key components in a system which must reason about route planning, task planning, and self-localisation through the use of on-board sensors. Other planning considerations include the rate at which resources, such as fuel, are consumed and how they might be replenished. Together, these constitute a multiple-constraint planning problem for which an optimal solution may be very hard to find. However, sub-optimal "feasible" solutions may be acceptable, especially in a situation where a human operator is involved in setting the robot mission and the planning system is acting as a decision aid. Our research into this problem has resulted in the development of an object-oriented "Intelligent System Controller" (ISYC). This is implemented in C++ and supports the preparation of a robot mission, its execution and monitoring, and any re-planning which might be required as a result of a planning failure being revealed during a mission (for example, finding a route segment to be completely blocked, arriving late at a given location, failing to maintain sufficient self-localisation accuracy to complete a mission, or running low on fuel). A planning algorithm based on a guided depth-first search through a weighted graph is used to implement much of the ISYC; Bayesian decision analysis is used as the basis of the guiding. We begin by presenting some background to this work, including a brief description of some architectures for the control of intelligent mobile robots. The ISYC architecture used in our work is then described in some detail. Next, the multiple-constraint planning problem is introduced, and our approach to solving it is discussed.
Finally, we describe the application of the resulting intelligent control system to the preparation and monitoring of missions for a material transport robot, and present some conclusions and some suggestions for further research.
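A guided depth-first search over a weighted graph can be sketched as below. The Bayesian decision analysis used by the ISYC for guiding is replaced here by a simple stand-in heuristic (cheapest outgoing edge first); the route network and node names are invented for illustration:

```python
# Minimal guided depth-first search: explore successors in heuristic order.
# The ordering stands in for the Bayesian guiding described in the text.
def guided_dfs(graph, start, goal, path=None):
    path = (path or []) + [start]
    if start == goal:
        return path
    # "guiding": try the cheapest outgoing edges first
    for nxt, _w in sorted(graph.get(start, []), key=lambda e: e[1]):
        if nxt not in path:  # avoid revisiting nodes on the current path
            found = guided_dfs(graph, nxt, goal, path)
            if found:
                return found
    return None  # no feasible route: would trigger re-planning

# toy route network: node -> [(neighbour, cost), ...]
routes = {
    "depot": [("A", 5), ("B", 2)],
    "A": [("goal", 1)],
    "B": [("A", 9), ("goal", 8)],
}
print(guided_dfs(routes, "depot", "goal"))  # → ['depot', 'B', 'goal']
```

Note that, as in the paper's setting, the search returns a feasible rather than optimal route: the guiding only orders exploration, it does not guarantee minimum total cost.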
The local navigation problem for autonomous mobile robots (AMRs) and its application to wheeled robots is addressed. The problem of driving an AMR to a goal in an unknown environment containing both stationary and moving obstacles is formulated as a dynamic feedback control problem. An algorithm using local feedback information to generate subgoals for driving the AMR along a collision-free trajectory to the goal is adopted. The local free space for subgoal selection is constructed taking into account the locally visible obstacles and the AMR operating limits. A dynamic model of wheeled robots based on driving and steering mechanisms is derived, and a controller design based on a self-tuning pole-assignment approach is presented for motion reference trajectory tracking. Integration of local sensor data, system dynamics, and operating constraints with a developed decision support system for steering and control is performed to produce the appropriate intelligent navigation decisions. Finally, the effectiveness of the navigation and control strategies in directing the AMR along a collision-free trajectory to the final goal in finite time is illustrated by simulation.
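The subgoal-selection step can be illustrated by the sketch below. The sensor radius, safety margin, and angular sampling are invented parameters, not the paper's formulation; the idea shown is only that the subgoal is chosen inside the locally visible free space, as close as possible to the goal direction while clearing every visible obstacle:

```python
import math

# Illustrative subgoal selection in the local free space. All parameters
# (sensor range, margin, 10-degree heading samples) are assumptions.
def pick_subgoal(pos, goal, obstacles, sensor_range=2.0, margin=0.5):
    goal_dir = math.atan2(goal[1] - pos[1], goal[0] - pos[0])
    for k in range(36):
        # alternate headings around the goal direction: 0, 0, +10, -10, +20, ...
        th = goal_dir + math.radians(10 * (k // 2) * (-1) ** k)
        sx = pos[0] + sensor_range * math.cos(th)
        sy = pos[1] + sensor_range * math.sin(th)
        if all(math.hypot(sx - ox, sy - oy) > margin for ox, oy in obstacles):
            return (round(sx, 2), round(sy, 2))  # first safe candidate
    return pos  # boxed in: no admissible subgoal, stay put

# unobstructed: the subgoal lies straight toward the goal at sensor range
print(pick_subgoal((0, 0), (5, 0), []))        # → (2.0, 0.0)
# an obstacle on the direct line deflects the subgoal sideways
print(pick_subgoal((0, 0), (5, 0), [(2, 0)]))  # → (1.88, 0.68)
```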
We present a dynamical control method in which a trajectory on a chaotic attractor is directed by small perturbations towards a chosen unstable set. This method is an alternative to the classical Ott-Grebogi-Yorke control procedure. Our approach is based on the discrete and continuous maximum principles and optimizes the mean time to achieve control, even at large distances from the desired state. The proposed method is tested both in simple models (one-dimensional and two-dimensional maps) and in multidimensional systems (complex Ginzburg-Landau equations). A possible decrease of the strange attractor dimension by means of this method is also discussed.
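The flavour of small-perturbation chaos control on a one-dimensional map can be conveyed by the toy sketch below. It stabilizes the unstable fixed point of the logistic map with a bounded nudge; this is a simplified stand-in, not the maximum-principle algorithm of the text:

```python
# Toy stabilization of the unstable fixed point x* = 1 - 1/r of the logistic
# map with a bounded perturbation of size eps. Illustrative only.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def controlled_orbit(x0, steps=200, eps=0.02, r=4.0):
    """Iterate the map, applying a bounded nudge toward the fixed point."""
    xstar = 1.0 - 1.0 / r          # unstable fixed point (x* = 0.75 for r = 4)
    x, orbit = x0, []
    for _ in range(steps):
        x = logistic(x, r)
        if abs(xstar - x) <= eps:
            x = xstar              # within the control budget: capture
        elif xstar > x:
            x += eps               # otherwise push toward x* at full budget
        else:
            x -= eps
        orbit.append(x)
    return orbit

# starting near x*, one small perturbation captures the orbit permanently:
print(controlled_orbit(0.74, steps=5))  # → [0.75, 0.75, 0.75, 0.75, 0.75]
```

The point of the paper's method, by contrast, is to choose the perturbations so that the expected time until such a capture is minimized even from far away.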
Keywords: controlling chaos, optimal control, strange attractor, multistep algorithms
It has been demonstrated recently that images obtained under varying focal lengths contain information on the 3D structure of the scene. This depth-from-zoom approach relies on knowledge of the `center of zoom'. Under zoom, image flow expands radially outward from the image center. In real lenses, however, this center itself moves on a trajectory determined by the zoom action as well as by lens manufacture. Optic flow vectors will therefore follow an arbitrary curvilinear path in the image plane. In this paper we develop a general model for the determination of image flow given a corresponding locus for the center of zoom.
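A minimal version of such a flow model can be written as follows; the scale factor and the center locus are invented for the example, and the real model in the paper is more general:

```python
# Illustrative zoom-flow model: each image point expands radially about the
# current center of zoom, and the center's own drift adds a common
# translation term. Numbers are invented for the example.
def zoom_flow(p, center, next_center, scale):
    """Flow at image point p for one zoom step about a drifting center."""
    cx, cy = center
    nx, ny = next_center
    px, py = p
    # new position: expand about the old center, then follow the center drift
    qx = nx + scale * (px - cx)
    qy = ny + scale * (py - cy)
    return (qx - px, qy - py)  # flow vector

# with a fixed center the flow is purely radial:
print(zoom_flow((110, 100), (100, 100), (100, 100), 1.2))  # → (2.0, 0.0)
# a drifting center bends the flow off the radial direction:
print(zoom_flow((110, 100), (100, 100), (103, 101), 1.2))  # → (5.0, 1.0)
```

The second call shows the curvilinear effect described above: the radial expansion and the center's motion superpose, so the observed flow no longer points away from any fixed image point.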
A critical component of unmanned aerial vehicles is the navigation system, which provides position and velocity feedback for autonomous control. The Georgia Tech Aerial Robotics navigation system (NavSys) consists of four DVTStinger70C Integrated Vision Units (IVUs) with CCD-based panning platforms, software, and a fiducial onboard the vehicle. The IVUs independently scan for the retro-reflective bar-code fiducial while the NavSys image processing software performs a gradient threshold followed by an image-search localization of three vertical bar-code lines. Using the (x,y) image coordinate and CCD angle, the NavSys triangulates the fiducial's (x,y) position, differentiates for velocity, and relays the information to the helicopter controller, which independently determines the z direction with an onboard altimeter. System flexibility is demonstrated by recognition of different fiducial shapes and night- and day-time operation, and is being extended to on-board and off-board navigation of aerial and ground vehicles. The navigation design provides a real-time, inexpensive, and effective system for determining the (x,y) position of the aerial vehicle, with updates generated every 51 ms (19.6 Hz) at an accuracy of approximately +/- 2.8 in.
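The core triangulation step can be sketched with two stations; the station layout and angles below are invented, and the real NavSys fuses four units and refines each angle from the (x,y) image coordinate:

```python
import math

# Sketch of bearing-only triangulation: two ground stations at known
# positions each measure a pan angle to the fiducial; intersecting the two
# rays recovers the vehicle's (x, y). Illustrative geometry, not NavSys code.
def triangulate(p1, theta1, p2, theta2):
    """Intersect rays from p1 at angle theta1 and p2 at theta2 (radians)."""
    x1, y1 = p1
    x2, y2 = p2
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    # solve p1 + t*d1 = p2 + s*d2 for t via a 2x2 cross-product;
    # denom -> 0 means the rays are (nearly) parallel and no fix exists
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    t = ((x2 - x1) * d2[1] - (y2 - y1) * d2[0]) / denom
    return (x1 + t * d1[0], y1 + t * d1[1])

# a station at the origin sees the fiducial at 45 deg, one at (10, 0) at 135 deg:
pos = triangulate((0, 0), math.radians(45), (10, 0), math.radians(135))
# → approximately (5.0, 5.0)
```

Velocity would then follow by differencing successive fixes over the 51 ms update interval.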
Mobile robot technology is increasingly used in the development of advanced manufacturing systems. Methods fusing multi-sensor data from vision, sonar, and limit switches have been developed as the most flexible, but expensive, approaches. Other approaches, such as buried-wire AGVs, are more common; they decrease the cost of the mobile robot, but degrade the flexibility of the navigation system as well. This paper uses neural networks (NNs) with only one camera to obtain flexibility similar to the high-cost approaches, but in a cost-efficient way. The NNs use translation and perspective information of features in images to determine the proper alignment and position of the mobile robot.
The classification of objects in images is a complex problem in computer vision for which many approaches have been tried but for which a general solution has not yet been achieved [1]. However, there are many practical applications where a successful visual recognition system could provide potentially useful information about its environment, for example, the detection of regions of interest and objects for a target recognition system [2-4], and the identification of roads or drivable regions [4,5] in autonomous vehicle applications. The robust and practical solution of this problem therefore remains an important goal of computer vision research. One well-known approach to object recognition is to define three-dimensional wire-frame models of the objects to be recognised and to search for matches between model features and image features [6,7]. A verification process is needed in order to choose the correct match from amongst the several hypothetical matches that are likely to be found for each model. In principle, this approach can be extended to deal with recognising an arbitrary number of objects, although often at the expense of considerable computational complexity [1]. This hypothesis-verification paradigm is characteristic of model-based techniques of object recognition. An alternative approach may be to train a statistical classifier such as a neural network to recognise objects from a set of representative examples extracted from two-dimensional images. Such a system may therefore obviate some of the need for manual intervention in the design of object models.
Furthermore, since the training of the network proceeds using data derived from the whole of the image, which may preserve the contextual relationships between one candidate object and another in the scene, the network might also learn the contextual significance of objects in an image [8]. This may be particularly relevant when the task is to label large regions in an image, such as roads, or even the whole of the image [9,10]. Consider the specific problem of labelling road regions segmented from images of outdoor scenes. This is a particularly difficult recognition problem, since overall there are potentially very many objects and regions, which may be subject to occlusion and also to variable illumination conditions. A single object may give rise to many regions in the image, or a single region may include parts of a number of different objects. Indeed, individual regions may at best be produced as the result of shadows or other accidents of shading, or at worst as artifacts of the algorithm used to segment them from an image. Nevertheless, the feasibility of using a neural network to label such regions was originally demonstrated by Wright [8]: an MLP network was used to classify two region types (road and not_road) from automatic segmentations of thirty-two different black-and-white images of road scenes. The MLP network was trained to generate the correct label for regions segmented from a previously unseen image, taking as its inputs a number of features generated from that region and neighbouring regions. Although limited, the performance of this system was shown to exceed significantly that of a K-nearest-neighbour classifier on the same data. Recent work by Mackeown et al. [9] has extended this approach to a larger number of images and a more comprehensive label set. The research described here was undertaken with the specific aim of finding a tight lower bound on the best achievable performance of road finding using a neural network.
The method has been generalised to use colour image data, which forms part of a new database of two hundred and forty 24-bit colour images produced under carefully monitored conditions. This is a significantly more extensive and better understood dataset than that used originally by Wright, and has made it possible to perform a more detailed quantitative performance evaluation of this approach. To separate the problems of segmentation (locating regions of interest) and classification (labelling the located regions), and to allow us to concentrate on the problem of classification, the idea of an "ideal segmentation" is used here [9,10]. This also allows us to address the issues of the accessibility and sufficiency of the information contained in region-based segmentations of single images for whole-scene classification tasks. The image database is described briefly in section 2. The "ideal segmentation", and the approximation to this process used in practice, are explained in section 3; the labelling process is also described at this point. A number of geometrical and statistical features are calculated from the regions generated by the segmentation process, and these are described in section 4. These features are used by the network to classify (i.e. label) the regions, so the performance of the classifier is a function of their information content. The structure of the neural network and the method used to train it are presented in section 5. The results obtained from this system are discussed in section 6, in which new performance figures are given for a network trained specifically on the problem of labelling image regions as either 'Road' or 'Not_road'. Finally, the conclusions to be drawn from this work are given in section 7.
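A toy stand-in for such a region-labelling network is sketched below: a single-hidden-layer MLP trained on invented region features (normalised image height and mean brightness) to separate road from not_road. The architecture, features, and data are illustrative assumptions, not the network or feature set of the paper:

```python
import math
import random

random.seed(0)  # deterministic toy example

def mlp_init(n_in=2, n_hid=4):
    w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
    w2 = [random.uniform(-1, 1) for _ in range(n_hid)]
    return w1, w2

def mlp_forward(w1, w2, x):
    h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x))) for row in w1]
    z = sum(wi * hi for wi, hi in zip(w2, h))
    return h, 1.0 / (1.0 + math.exp(-z))  # sigmoid output = P(road)

def mlp_train(data, epochs=500, lr=0.5):
    w1, w2 = mlp_init()
    for _ in range(epochs):
        for x, t in data:
            h, y = mlp_forward(w1, w2, x)
            d = y - t  # gradient of cross-entropy loss w.r.t. output z
            for j in range(len(w2)):
                dh = d * w2[j] * (1 - h[j] ** 2)   # backprop through tanh
                w2[j] -= lr * d * h[j]
                for i in range(len(x)):
                    w1[j][i] -= lr * dh * x[i]
    return w1, w2

# invented features: (height in image [0=top, 1=bottom], mean brightness)
regions = [((0.9, 0.4), 1), ((0.8, 0.5), 1),   # label 1 = road
           ((0.2, 0.9), 0), ((0.1, 0.3), 0)]   # label 0 = not_road
w1, w2 = mlp_train(regions)
_, p = mlp_forward(w1, w2, (0.85, 0.45))
label = "road" if p > 0.5 else "not_road"
```

The real system of course uses many more geometrical and statistical features per region (section 4) and a larger label set, but the training and classification loop has this shape.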
Planetary space exploration by unmanned missions relies strongly on automatic navigation. Computer vision has been recognized as a key to the feasibility of robust navigation, landing-site identification, and hazard avoidance. We are studying a scenario where remote sensing methods from orbit around the planet are used to preselect a landing site. The accuracy of the atmospheric entry is restricted by various parameters; one area of uncertainty results from inexact estimation of the landing position. The touchdown point must be located within an elliptic image area called the `ellipse of uncertainty'. During landing, the early recognition of the preselected landing site in this image is an important factor: it improves the probability of a successful touchdown, since it allows real-time corrections of the trajectory to reach the planned touchdown spot. We present a scenario that uses computer vision methods for this early identification, emphasizing the phase between ten kilometers above ground and the identification of the lander position relative to the selected landing site.
An Intelligent Mobile Sensing System (IMSS) has been developed for the automated inspection of radioactive and hazardous waste storage containers in warehouse facilities at Department of Energy sites. A 2D space of control modes is used that provides a combined view of reactive and planning approaches, wherein a 2D situation space is defined by dimensions representing the predictability of the agent's task environment and the constraint imposed by its goals. In this sense, selection of appropriate systems for planning, navigation, and control depends on the problem at hand. The IMSS vehicle navigation system is based on a combination of feature-based motion, landmark sightings, and an a priori logical map of the mockup storage facility. Motions for the inspection activities are composed of different interactions of several available control modes, several obstacle avoidance modes, and several feature identification modes. The features used to drive these behaviors are both visual and acoustic.
Spatial maps are abstract representations of the environment for navigation purposes. Many of these maps are too detailed and too large to be useful for small, low-cost mobile robots, viz., tetherless, autonomous robots each occupying a volume of about a cubic foot or less. This paper explores cheaper alternatives to spatial maps. The key components of spatial maps are the locally detectable features, the spatial relationships between the features, the groupings of these features to form a local region, and networks of local regions to form larger regions. Regions, local or otherwise, are graphs, with nodes representing local regions and arcs representing the spatial relationships between regions. To achieve a compact representation, restrictions are placed on the number of kinds of locally detectable features, the possible kinds of spatial relationship, and the total number of nodes possible in a graph. A compact spatial map for small mobile robots navigating in a hallway environment is discussed. A small number of features is needed to represent local neighborhoods in such environments. This and other factors make it possible to use compact spatial maps to represent such environments. Extending such maps to handle more general cases is possible in many situations.
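A compact map of this kind might look like the following sketch, where the feature vocabulary, relation vocabulary, and node bound are all deliberately tiny; every name is illustrative, not taken from the paper:

```python
# Minimal compact spatial map: nodes are local regions with a small feature
# tuple, arcs carry a coarse spatial relation, and a node bound enforces
# compactness. Vocabularies and names are invented for illustration.
class CompactMap:
    RELATIONS = {"ahead", "left", "right"}   # restricted relation vocabulary

    def __init__(self, max_nodes=16):
        self.max_nodes = max_nodes           # bound keeps the map compact
        self.nodes = {}                      # region name -> feature tuple
        self.arcs = {}                       # (a, b) -> relation

    def add_region(self, name, features):
        if len(self.nodes) >= self.max_nodes:
            raise ValueError("map full: compactness bound reached")
        self.nodes[name] = tuple(features)

    def connect(self, a, b, relation):
        assert relation in self.RELATIONS
        self.arcs[(a, b)] = relation

    def route(self, start, goal):
        """Breadth-first route over the region graph."""
        frontier, seen = [[start]], {start}
        while frontier:
            path = frontier.pop(0)
            if path[-1] == goal:
                return path
            for (a, b) in self.arcs:
                if a == path[-1] and b not in seen:
                    seen.add(b)
                    frontier.append(path + [b])
        return None

m = CompactMap()
for name, feats in [("hall_A", ("door", "corner")),
                    ("junction", ("opening",)),
                    ("hall_B", ("door",))]:
    m.add_region(name, feats)
m.connect("hall_A", "junction", "ahead")
m.connect("junction", "hall_B", "right")
print(m.route("hall_A", "hall_B"))  # → ['hall_A', 'junction', 'hall_B']
```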
This paper concerns operations for semi-autonomous mobile robots. The robot is assumed to be remotely controlled by an operator. It is difficult for an operator to control the robot directly, especially over a communication link with low bandwidth and/or time delays; it is therefore necessary to close the control loop in the robot, with the operator giving high-level commands. The operator is still needed, as the fully autonomous robot does not exist today except for limited scenarios. The scene around the robot is sensed using a scanning range-measuring laser and a camera. The high-level scene interpretation is done by the operator, who also does the high-level planning. The operations the robot is to perform are indicated by pointing in the images or in a map created by the robot. Operations are functions that the robot can perform autonomously. They can be simple, like `Travel 2 m ahead', or more advanced, like `Follow the corridor and take the first door to the right'. Some typical operations are: (1) Locking the heading of the robot when driving on a `straight' line, plus preprogrammed 90- and 180-degree turns. (2) Automatically entering a camera-defined line; the direction of the camera is used to drop a new coordinate frame at any time (using a knob on the keyboard), and the robot will automatically enter this new line and compensate for the overshoot. (3) Travelling along corridors; the operation is both robust and precise, with a precision of about 1 cm at 1 m/s, and the robot is not disturbed by people passing it in the corridor. (4) Passage through a door, which works within 1 cm and 0.5 degrees at a speed of 0.5 m/s. The range-weighted Hough transform on laser measurements extracts the walls in an indoor environment. This is used to create an internal map in the robot, which is used for operations like corridor following or passing through doors. The map is also useful when presenting information to the operator.
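The wall-extraction step can be sketched as a range-weighted Hough transform over laser returns. Bin sizes and the specific weighting below are assumptions for illustration, not the authors' exact formulation:

```python
import math
from collections import Counter

# Range-weighted Hough transform sketch: each laser return votes for the
# (angle, distance) lines through it, weighted by its measured range so that
# sparse distant returns still contribute. Parameters are illustrative.
def hough_walls(points, robot=(0.0, 0.0), angle_step=5, rho_step=0.1):
    votes = Counter()
    for x, y in points:
        rng = math.hypot(x - robot[0], y - robot[1])  # range weight
        for a in range(0, 180, angle_step):
            th = math.radians(a)
            rho = x * math.cos(th) + y * math.sin(th)
            votes[(a, round(rho / rho_step))] += rng
    return votes.most_common(1)[0][0]  # strongest line: (angle deg, rho bin)

# simulated returns from a wall along x = 2, plus one outlier point:
scan = [(2.0, y / 10.0) for y in range(-5, 6)] + [(0.5, 3.0)]
angle, rho_bin = hough_walls(scan)
# the dominant line is x = 2: normal angle 0 degrees, rho = 2.0 (bin 20)
```

The peaks of the accumulator give wall parameters, from which the internal map used for corridor following and door passage can be assembled.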
Planning and scheduling problems are ubiquitous. Planning usually refers to deciding whether a certain activity is a viable enterprise: what resources are required, where do they come from, what do they cost, what peculiar characteristics exist, etc. Scheduling, on the other hand, usually refers to allocating resources to tasks at specific times (e.g., time lines). There is a sliding scale between the two extremes: planning emphasizes the feasibility and value of the overall job, while scheduling addresses the exact sequencing of events. We will use the term "planning" to cover the whole spectrum, but like many authors we will occasionally use the terms interchangeably.
In the two-frame homing problem addressed by this paper, a robot is given (1) the bearings of the landmarks at the robot's current location, (2) the bearings of the landmarks at the target location and (3) the correspondences between landmarks across the two locations; it is required to make an admissible movement--one that takes it closer to the target location. Previous papers have described how to carry out homing when landmark bearings are exactly known (even when some landmark correspondences are incorrect). This paper extends these results to the case where landmark bearings are not measured exactly, but only within a bounded tolerance. An algorithm for computing admissible movements from inexact landmarks is given; this algorithm is provably correct and optimal. The effect of inexact landmark bearings on the robot's ability to detect inconsistent landmark correspondences is discussed.
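For intuition, a much simpler bearing-only homing rule than the paper's can be sketched as follows. This is the classical average-landmark-vector heuristic, not the provably correct, tolerance-aware algorithm the paper develops; it merely shows how bearings at the two locations combine into a movement direction:

```python
import math

# Average-landmark-vector homing sketch: for each corresponded landmark,
# subtract the unit vector of its bearing at the target from the unit vector
# of its bearing here, and sum. The result points (roughly) toward the target.
def homing_direction(current_bearings, target_bearings):
    dx = dy = 0.0
    for bc, bt in zip(current_bearings, target_bearings):
        dx += math.cos(bc) - math.cos(bt)
        dy += math.sin(bc) - math.sin(bt)
    return math.atan2(dy, dx)  # heading of the suggested movement

# landmarks east and north of the target at (0, 0); robot stands at (1, 1):
cur = [math.radians(-90), math.radians(180)]   # bearings seen at (1, 1)
tgt = [math.radians(0), math.radians(90)]      # bearings stored at (0, 0)
heading = math.degrees(homing_direction(cur, tgt))
# → -135.0: exactly the direction from (1, 1) to the target (0, 0)
```

An admissible movement in the paper's sense must remain correct when every bearing is only known within a bounded tolerance, which this naive rule does not guarantee.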
This paper presents a solution to the problem of position and direction estimation for Autonomous Land Vehicles based on a least-squares error method. Prior knowledge, in the form of map data, is projected into the image space using the assumed position and direction, and is matched with the image data to obtain an error function. We solve the function with the initial position data from the Global Positioning System (GPS). The GPS receives a satellite signal and computes the location with errors which, in normal conditions, are of the order of ten meters. At present the system has been tested using data from natural environments, and it converges to the correct position satisfactorily, provided that the initial error is within a small region which is still slightly beyond the precision of the GPS data. We are currently investigating the reasons why it diverges, and at the same time using different computer vision techniques to increase the region of convergence so that it falls within the GPS data accuracy. In this paper, we describe the exterior orientation problem, its mathematical model, and the experimental test using data from natural environments.
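The least-squares refinement loop can be illustrated in a reduced form. A pure 2-D translation with point correspondences stands in for the full exterior orientation problem, and the map points, observations, and step size are invented; the GPS-style initial guess is deliberately placed about ten meters off:

```python
# Gradient descent on the squared matching error between predicted and
# observed map-point offsets, starting from a coarse GPS-like guess.
# Reduced to 2-D translation for illustration.
def refine_position(map_pts, observed, guess, iters=100, lr=0.1):
    x, y = guess
    for _ in range(iters):
        gx = gy = 0.0
        for (mx, my), (ox, oy) in zip(map_pts, observed):
            # residual between predicted offset (map - pose) and observation
            rx, ry = (mx - x) - ox, (my - y) - oy
            gx += -2 * rx                 # gradient of the squared error
            gy += -2 * ry
        x -= lr * gx
        y -= lr * gy
    return x, y

# true position (3, 4); observations are map points seen relative to it:
world = [(5.0, 4.0), (3.0, 9.0), (0.0, 0.0)]
obs = [(2.0, 0.0), (0.0, 5.0), (-3.0, -4.0)]
est = refine_position(world, obs, guess=(13.0, -6.0))  # ~10 m initial error
# → converges to approximately (3.0, 4.0)
```

With this quadratic error the basin of convergence is global; the paper's difficulty is precisely that the real image-matching error surface is non-convex, so convergence holds only near the true pose.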
An autonomous mobile robot is an intelligent machine capable of navigating in an obstacle-cluttered environment without collision and without human intervention. The robot uses information provided by its sensors to build a map using a graph or grid representation. In this paper the environment is spatially and temporally varying. To find a safe path among obstacles, a map must be created and updated each time an obstacle is encountered. We present an algorithm based on the virtual force field concept for obstacle detection and avoidance; as a result, a collision-free path between a start point and a goal position through a flat 2D environment is obtained. The obstacle avoidance technique generates a sequence of speed and steering commands for a vehicle with two front wheels separately driven by two DC motors. A new cross-coupled computer control scheme for the two drive motors of the vehicle wheels is proposed; the continuous motion of the vehicle is executed using this control scheme, which also accounts for drive mismatch. Extensive simulations are conducted to illustrate the useful aspects of the proposed integrated system.
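One step of a virtual-force-field scheme can be sketched as below. Obstacles are simplified from grid cells to points, and the gains are invented; the output is only a steering direction, not the paper's full speed-and-steering command sequence:

```python
import math

# Minimal virtual force field step: the goal pulls with unit attraction,
# each obstacle pushes with inverse-square repulsion, and the resultant
# force gives the steering direction. Gains are illustrative assumptions.
def vff_step(pos, goal, obstacles, f_att=1.0, f_rep=1.0):
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    d = math.hypot(gx, gy)
    fx, fy = f_att * gx / d, f_att * gy / d      # unit attraction to goal
    for ox, oy in obstacles:
        rx, ry = pos[0] - ox, pos[1] - oy
        r = math.hypot(rx, ry)
        fx += f_rep * rx / r ** 3                # inverse-square repulsion
        fy += f_rep * ry / r ** 3
    return math.atan2(fy, fx)                    # steering direction (rad)

# an obstacle just above the straight-line path deflects the heading downward:
heading = vff_step(pos=(0.0, 0.0), goal=(10.0, 0.0), obstacles=[(2.0, 0.5)])
# heading < 0: the vehicle steers beneath the obstacle
```

Iterating this step as the vehicle moves, with the map updated whenever a new obstacle is sensed, yields the collision-free path described in the abstract.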
The military has an anticipated need for a remotely controlled ground system to perform reconnaissance; surveillance; target acquisition; patrolling; and nuclear, biological, and chemical (NBC) detection. In particular, the U.S. Army Infantry School would like the system to operate in the most dangerous areas of the modern battlefield: open terrain that is highly trafficable. This has led to the premise that the system should be fast. Also, discovering the enemy's location is often dangerous, with the cost assessed in human lives. From these requirements emerged the Unmanned Ground Vehicle (UGV) programs. Emphasis is on effective robotic technology that has multiservice applications and is unique to unmanned vehicles on the ground. The maturity of these robotic technologies began to be confirmed in UGV Demo I, held in 1992, and will continue to be confirmed in UGV Demo II, scheduled for 1996. UGV Demo I focused primarily on teleoperation, while UGV Demo II is designed to complement the first demonstration by focusing on supervised autonomy. The primary goal of the UGV Demo II program is to demonstrate the utility of advanced UGV systems to conduct tasks that enhance the Department of Defense force structure. This demonstration will combine both offensive and defensive operations in a militarily relevant situation. For offensive operation, four cooperating UGVs will initiate a movement-to-contact scenario. The vehicles will conduct a screening operation for a manned force using bounding overwatch over semiarid terrain. Once in overwatch positions, they will use a reconnaissance, surveillance, and target acquisition (RSTA) mission module to observe threats, and to locate, detect, assess, and designate threats for indirect fire. For defensive operation, the vehicles will conduct a retrograde scenario: four cooperating UGVs will screen a manned force by sequentially occupying preplanned defensive positions to maximize damage to advancing enemy forces.
Once the commander determines that enemy forces have been weakened, the vehicles will move to preplanned locations in the main battle area to help designate remnants of advancing enemy forces.
Legged robots represent an alternative to wheeled or tracked platforms, and several reasons can justify the development of such architectures. We previously realized two electro-pneumatic robots, and here chose to develop an outdoor six-legged electro-hydraulic robot. Our aim was to provide an autonomy of 30 minutes and the capacity to move on terrains with obstacles (maximum 20 cm) and with up- or down-slopes of at most 30 degrees. A specific simulation software package was used to design the general structure and the legs. The particular design features of the robot lie in a cross architecture and the inversion of the rear legs. The platform is controlled by a Programmable Logic Controller coupled with the necessary electronic interfaces allowing teleoperation and the transmission of video signals. Several gaits have been implemented in order to adapt the walk to the environment; their stability and performance are reported.
While there has been a great deal of recent development across the IVHS field, very few groups have had the opportunity to combine the many areas of development into a single integrated `intelligent' unmanned vehicle. One of our systems was developed specifically to serve a major automobile manufacturer's need for an automated vehicle chassis durability test facility. Because of the severity of the road surface, human drivers could not be used; a totally automated robotic vehicle driver and guidance system was necessary. To deliver fixed-price commercial projects now, it was apparent that system and component costs were of paramount importance. Cyplex has developed a robust, cost-effective single-wire guidance system with inherent advantages in simplicity. Multi-signal (per vehicle lane) systems complicate path planning and layout when multiple lanes and lane changes are required, as on actual highways. The system has demonstrated high enough immunity to rain and light snow cover that normal safety reductions in speed are adequate to stay within the required performance envelope. The system and its antenna interface have shown the ability to guide the vehicle at slow speeds (10 mph) with a tracking repeatability of plus or minus 1/8 inch. The basic guide and antenna system has been tested at speeds up to 80 mph. The system has inherently superior abilities for lane changes and precision vehicle placement. The operation of this system will be described, and the impact of a system that is commercially viable now for highway and off-road use will be discussed.
This paper presents an analysis of stopping distances for an unmanned ground vehicle achievable with selected ladar and stereo video sensors. Based on a stop-to-avoid response to detected obstacles, current passive stereo technology and existing ladars provide equivalent safe driving speeds. Only a proposed high-resolution ladar can detect small (8 inch) obstacles far enough ahead to allow driving speeds in excess of 10 miles per hour. The stopping distance analysis relates safe vehicle velocity to obstacle and sensor pixel sizes.
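The kind of relation the analysis above describes can be sketched under simple assumptions (a stop-to-avoid response, constant deceleration, and a small-angle pixel model). The function names and every numeric value below are illustrative, not the paper's actual parameters:

```python
import math

# Hedged sketch: relate sensor angular resolution to obstacle detection
# range, then to a safe driving speed for a stop-to-avoid response.
# All parameters are hypothetical.

def detection_range(obstacle_m, pixels_required, ifov_rad):
    """Range at which an obstacle subtends the required pixel count
    (small-angle approximation)."""
    return obstacle_m / (pixels_required * ifov_rad)

def max_safe_speed(range_m, reaction_s, decel_mps2):
    """Largest v satisfying v*t_r + v^2/(2a) <= range: the positive
    root of v^2 + 2*a*t_r*v - 2*a*range = 0."""
    a, t = decel_mps2, reaction_s
    return -a * t + math.sqrt((a * t) ** 2 + 2 * a * range_m)

# Example: an 8-inch (~0.2 m) obstacle requiring 2 pixels to detect,
# 0.5 mrad per pixel, 1 s reaction time, 4 m/s^2 braking.
R = detection_range(0.2, 2, 0.0005)   # detection range in metres
v = max_safe_speed(R, 1.0, 4.0)       # safe speed in m/s
```

The sketch makes the trade-off in the abstract concrete: halving the per-pixel angular resolution doubles the detection range, which raises the safe speed roughly with the square root of that range once braking dominates reaction time.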
In the framework of the EUREKA PROMETHEUS European Project, a Mobile Laboratory (MOBLAB) has been equipped for studying, implementing, and testing real-time algorithms that monitor the path of a vehicle moving on roads. Its goal is the evaluation of systems suited to mapping the position of the vehicle within its environment, detecting obstacles, estimating motion, planning the path, and warning the driver about unsafe conditions. MOBLAB has been built with the financial support of the National Research Council and will be shared with teams working in the PROMETHEUS Project. It consists of a van equipped with an autonomous power supply, a real-time image processing system, workstations and PCs, B/W and color TV cameras, and TV equipment. This paper describes the laboratory and presents the computer vision system and the strategies that have been studied and are being developed at I.E.N. `Galileo Ferraris'. The system is based on several tasks that cooperate to integrate information gathered from different processes and sources of knowledge. Preliminary results are presented showing the performance of the system.
A Mercedes 500 SEL sedan has been equipped with active bifocal vision systems. Road and object recognition is performed in both a look-ahead and a look-back region; this allows the system to servo-maintain an internal representation of the entire situation around the vehicle using the 4D approach to dynamic machine vision. Obstacles are detected and tracked in both the forward and the backward viewing ranges up to a distance of about 100 meters; up to 5 objects may be tracked in parallel in each hemisphere. A fixation-type viewing direction control with the capability of saccadic shifts for attention focusing has been developed. The overall system comprises about five dozen transputers, T-222s (16-bit, for image processing and communication) and T-800s (32-bit, for number crunching and knowledge processing), plus a PC as transputer host. A description of the parallel processing architecture is given, allowing frequent data-driven bottom-up and model-driven top-down integration steps for efficient and robust object tracking (4D approach).
The MDARS security robotics program has successfully demonstrated the simultaneous control of multiple robots autonomously navigating within an industrial warehouse environment. This real-world warehouse system installation required adapting a navigational paradigm designed for highly structured environments such as office corridors (with smooth walls and regularly spaced doorways) to a semi-structured warehouse environment (with few walls and within which odd-shaped objects unpredictably move about from day to day). A number of challenges, some expected and others unexpected, were encountered during this transfer of the system to the test/demonstration site. This paper examines these problems (and others previously encountered) in the historical context of the ongoing development of the navigation and other technologies needed to support the operations of a security robotic system, and the evolution of these technologies from the research lab to an operational warehouse environment. A key lesson is that a system's robustness can only be ensured by exercising its capabilities in a number of diverse operating environments, in order to (1) uncover latent system hardware deficiencies and software implementation errors not manifested in the initial system hardware or initial development environment; and (2) identify sensor modes or processing algorithms tuned too tightly to the specific characteristics of the initial development environment.
We describe an experiment which simulated many aspects of control of a remote vehicle on another planetary surface. We have developed a Telepresence-controlled Remotely Operated underwater Vehicle (TROV) and used it to perform scientific exploration in an ice-covered marine environment near McMurdo Station, Antarctica. The goal of the mission was to use telepresence and virtual reality technology to operate a remote vehicle to perform a scientific study of the marine environment under the sea ice in Antarctica. The TROV was operated both locally, from a habitat building located on the sea ice above a dive hole through which it was launched, and remotely over a satellite communications link from a control room at NASA's Ames Research Center. Local control of the vehicle was accomplished using a control box containing joysticks and switches, with the operator viewing stereo video camera images on a stereo display monitor. Remote control of the vehicle over the satellite link used either a stereo display monitor similar to that used locally, or a stereo head-mounted head- tracked display. The remote operators could also view a computer-generated graphic representation of the underwater terrain, modeled from the vehicle's sensors. The actual vehicle was driven either from within the virtual environment or by watching stereo video. Satellite communication was used to transmit stereo video from the TROV to NASA Ames and to provide a bi-directional Internet link to the TROV control computer for command and telemetry signals. All vehicle functions could be controlled remotely over the satellite link. The TROV was operated in Antarctica nearly continuously using both local and remote control for 7 weeks. The results of our experiments suggest that surface rovers using control technology with real time telepresence could vastly expand the range of human exploration from a human base on the Moon or Mars. 
Planetary surface rovers can also be controlled from Earth, although there is a significant time delay for such control for any planet other than the Moon. The use of virtual reality in the control interface will significantly improve the capabilities of remote rovers controlled from Earth.
Cost optimization will be essential to the economic viability of future many-robot systems. Unfortunately, cost-optimal design will often depend sensitively on the performance and costs of candidate sensors and other components, and on both explicit mission-specific and implicit common-sense application requirements. The development of a standardized integration architecture providing multi-level protocols to support the flexible but tight integration of off-the-shelf subsystem components into low-cost robots will facilitate the development of this soon-to-emerge class of systems.
This paper will discuss components of a modular hardware and software architecture for mobile robots that supports both teleoperation and autonomous control. The Modular Autonomous Robot System architecture enables rapid development of control systems for unmanned vehicles for a wide variety of commercial and military applications.
A novel Kalman filter-based scheme utilizing the output of RF transponders is proposed to estimate the navigational states of a Mars microrover. The novelty of the scheme stems from (a) the use of pseudo-measurements that are a nonlinear function of range measurements such that the measurement model is linear and the resulting Kalman filter globally convergent, (b) the measurements of the sums and the differences in the ranges so as to reduce the sensitivity to the measurement noise and (c) optimization of RF element locations to ensure that the state estimation error is minimized resulting in the estimation accuracy being linear in the true states. Both bias and random errors are considered to account for the error sources such as sensor bias, faults, multi-path, and sensor noise.
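The pseudo-measurement idea in (a), forming functions of the raw ranges so that the measurement model becomes linear in the state, can be illustrated with squared-range differences, which cancel the quadratic position term. This is only a sketch of the measurement construction (the Kalman filter itself is omitted), with hypothetical transponder positions rather than the paper's actual configuration:

```python
import numpy as np

# Hypothetical RF transponder (beacon) positions, in metres.
beacons = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
true_pos = np.array([30.0, 40.0])

# Nonlinear raw measurements: ranges from the rover to each beacon.
ranges = np.linalg.norm(beacons - true_pos, axis=1)

# Pseudo-measurements: differences of squared ranges are LINEAR in the
# position x, since r_i^2 - r_j^2 = |b_i|^2 - |b_j|^2 + 2 (b_j - b_i).x
i, j = [0, 0], [1, 2]                      # pair beacon 0 with 1 and 2
H = 2.0 * (beacons[j] - beacons[i])        # linear measurement matrix
z = (ranges[i] ** 2 - ranges[j] ** 2
     + np.sum(beacons[j] ** 2, axis=1)
     - np.sum(beacons[i] ** 2, axis=1))    # pseudo-measurement vector

# With a linear model z = H x, position follows by least squares here;
# in the paper's scheme, H and z would feed a linear Kalman filter.
est, *_ = np.linalg.lstsq(H, z, rcond=None)
```

Because H no longer depends on the unknown state, no linearization about an estimate is needed, which is what makes the resulting filter globally convergent rather than only locally, as an extended Kalman filter would be.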