Intelligent vehicles are beginning to appear on the market, but so far their sensing and warning functions only work on the open road. Functions such as run-off-road warning or adaptive cruise control are designed for the uncluttered environments of open highways. We are working on the much more difficult problem of sensing and driver interfaces for driving in urban areas. We need to sense cars, pedestrians, curbs, fire plugs, bicycles, and lamp posts; we need to predict the paths of our own vehicle and of other moving objects; and we need to decide when to issue alerts or warnings both to the driver of our own vehicle and (potentially) to nearby pedestrians. No single sensor is currently able to detect and track all relevant objects. We are working with radar, ladar, stereo vision, and a novel light-stripe range sensor. We have installed a subset of these sensors on a city bus that drives through the streets of Pittsburgh on its normal runs. We are using different kinds of data fusion for different subsets of sensors, plus a coordinating framework for mapping objects at an abstract level.
Accurate motion estimation and reliable maneuver prediction enable an automated car to react quickly and correctly to the rapid maneuvers of other vehicles, and so allow safe and efficient navigation. In this paper, we present a car tracking system which provides motion estimation, maneuver prediction, and detection of the tracked car. The three strategies employed (adaptive motion modeling, adaptive data sampling, and adaptive model-switching probabilities) result in an adaptive interacting multiple model (AIMM) algorithm. Experimental results on simulated and real data demonstrate that our tracking system is reliable, flexible, and robust. The adaptive tracking makes the system intelligent and useful in various autonomous driving tasks.
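As an illustration of the interacting multiple model idea underlying AIMM, the following is a minimal sketch, assuming just two constant-velocity Kalman filters that differ only in process noise (a steady-driving model and a maneuvering model). The paper's AIMM additionally adapts the motion models, the data sampling, and the switching probabilities, none of which is shown here.

```python
# Minimal IMM sketch: two Kalman filters sharing dynamics but differing in
# process noise, mixed via a Markov model-switching matrix. All numeric
# values are illustrative placeholders.
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity dynamics
H = np.array([[1.0, 0.0]])                 # position-only measurement
R = np.array([[0.5]])                      # measurement noise
Qs = [np.diag([1e-4, 1e-3]),               # low process noise: steady driving
      np.diag([1e-2, 1e-1])]               # high process noise: maneuvering
P_switch = np.array([[0.95, 0.05],         # Markov model-switching matrix
                     [0.05, 0.95]])

def imm_step(states, covs, mu, z):
    """One IMM cycle: mix estimates, filter per model, update model probs."""
    n = len(states)
    # 1. Mixing probabilities mu_{i|j} = p_ij * mu_i / c_j
    c = P_switch.T @ mu
    mix = (P_switch * mu[:, None]) / c[None, :]
    # 2. Mixed initial condition for each filter
    x0, P0 = [], []
    for j in range(n):
        xj = sum(mix[i, j] * states[i] for i in range(n))
        Pj = sum(mix[i, j] * (covs[i] + np.outer(states[i] - xj, states[i] - xj))
                 for i in range(n))
        x0.append(xj); P0.append(Pj)
    # 3. Kalman predict/update per model, collecting measurement likelihoods
    lik = np.zeros(n)
    for j in range(n):
        x, P = F @ x0[j], F @ P0[j] @ F.T + Qs[j]
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        r = z - H @ x
        states[j] = x + K @ r
        covs[j] = (np.eye(2) - K @ H) @ P
        lik[j] = np.exp(-0.5 * r @ np.linalg.inv(S) @ r) / np.sqrt(
            (2 * np.pi) ** len(r) * np.linalg.det(S))
    # 4. Model probability update; 5. combined state estimate
    mu = c * lik
    mu /= mu.sum()
    x_comb = sum(mu[j] * states[j] for j in range(n))
    return states, covs, mu, x_comb

# One measurement cycle:
states = [np.array([0.0, 20.0]), np.array([0.0, 20.0])]
covs = [np.eye(2), np.eye(2)]
mu = np.array([0.5, 0.5])
states, covs, mu, x = imm_step(states, covs, mu, z=np.array([2.1]))
```

After each cycle, the model probabilities `mu` indicate whether the tracked car currently fits the steady or the maneuvering model, which is the hook a maneuver-prediction layer can build on.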
Sensor technology plays a critical role in the operation of the Automated Highway System (AHS). The proposed concepts depend on a variety of sensors for positioning, lane-tracking, range, and vehicle proximity. Since large subsections of the AHS will be designed and evaluated in simulation before deployment, it is important that simulators make realistic sensor assumptions. Unfortunately, current physical sensor models are inadequate for this task, since they require detailed world-state information that is unavailable in a simulated environment. In this paper, we present an open-ended, functional sensor hierarchy, incorporating geometric models and abstract noise characteristics, which can be used directly with current AHS tools. These models capture the aspects of sensing technology that are important to AHS concept design, such as occlusion and field-of-view restrictions, while ignoring physical-level details such as electromagnetic sensor reflections. Since the functional sensor models operate at the same level of granularity as the simulation platform, complete integration is assured. The hierarchy classifies sensors into functional groups, and the model at a particular level incorporates characteristics that are common to all sensors in its subgroups. For example, range sensors have a parameter corresponding to a maximum effective range, while lane-trackers include information pertaining to lateral accuracy.
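A sketch of what such a functional hierarchy might look like in code follows; the class and parameter names here are illustrative, not taken from the paper.

```python
# Functional sensor hierarchy sketch: abstract base classes carry the
# parameters shared by a functional group (max range, field of view,
# abstract noise), while physical-level effects are deliberately omitted.
from dataclasses import dataclass
import math, random

@dataclass
class Sensor:
    update_rate_hz: float           # common to every functional group

@dataclass
class RangeSensor(Sensor):
    max_range_m: float              # maximum effective range
    fov_rad: float                  # field-of-view restriction
    range_noise_sigma: float        # abstract noise, not RF physics

    def measure(self, bearing_rad: float, true_range_m: float):
        """Return a noisy range, or None if the target falls outside the
        field of view or beyond the effective range."""
        if abs(bearing_rad) > self.fov_rad / 2 or true_range_m > self.max_range_m:
            return None
        return true_range_m + random.gauss(0.0, self.range_noise_sigma)

@dataclass
class LaneTracker(Sensor):
    lateral_sigma_m: float          # lateral accuracy parameter

    def measure(self, true_offset_m: float):
        return true_offset_m + random.gauss(0.0, self.lateral_sigma_m)

radar = RangeSensor(update_rate_hz=10.0, max_range_m=100.0,
                    fov_rad=math.radians(12), range_noise_sigma=0.3)
print(radar.measure(bearing_rad=0.05, true_range_m=42.0))
```

Because the simulator already knows each object's true pose, a model at this level needs only geometry and noise parameters, which is exactly the granularity the simulation platform can supply.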
In independent vehicle concepts for the Automated Highway System (AHS), the ability to make competent tactical-level decisions in real time is crucial. Traditional approaches to tactical reasoning typically involve the implementation of large monolithic systems, such as decision trees or finite state machines. However, as the complexity of the environment grows, the unforeseen interactions between components can make modifications to such systems very challenging. For example, changing an overtaking behavior may require several non-local changes to car-following, lane-changing, and gap-acceptance rules. This paper presents a distributed solution to the problem. PolySAPIENT consists of a collection of autonomous modules, each specializing in a particular aspect of the driving task, classified by traffic entities rather than tactical behavior. Thus, the influence of the vehicle ahead on the available actions is managed by one reasoning object, while the implications of an approaching exit are managed by another. The independent recommendations from these reasoning objects are expressed in the form of votes and vetoes over a 'tactical action space', and are resolved by a voting arbiter. This local independence enables PolySAPIENT reasoning objects to be developed independently, using a heterogeneous implementation. PolySAPIENT vehicles are implemented in the SHIVA tactical highway simulator, whose vehicles are based on the Carnegie Mellon Navlab robots.
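A minimal sketch of vote/veto arbitration over a discrete tactical action space may help clarify the mechanism; the action names and reasoning objects below are illustrative stand-ins, not PolySAPIENT's actual modules or action space.

```python
# Vote/veto arbitration sketch: each reasoning object returns per-action
# votes plus a veto set; the arbiter sums votes, discards vetoed actions,
# and picks the best remaining action.
ACTIONS = ["keep_lane", "change_left", "change_right", "accelerate", "brake"]

def car_ahead_module(gap_m):
    """Reasoning object for the vehicle ahead: vetoes acceleration when
    the gap is short, otherwise mildly prefers keeping the lane."""
    votes, vetoes = {a: 0.0 for a in ACTIONS}, set()
    if gap_m < 15.0:
        vetoes.add("accelerate")
        votes["brake"] = 0.8
        votes["change_left"] = 0.5
    else:
        votes["keep_lane"] = 0.6
    return votes, vetoes

def exit_module(dist_to_exit_m):
    """Reasoning object for an approaching exit: favors the right lane."""
    votes, vetoes = {a: 0.0 for a in ACTIONS}, set()
    if dist_to_exit_m < 500.0:
        votes["change_right"] = 0.9
        vetoes.add("change_left")
    return votes, vetoes

def arbitrate(modules):
    """Sum votes per action, discard any vetoed action, pick the best."""
    totals, all_vetoes = {a: 0.0 for a in ACTIONS}, set()
    for votes, vetoes in modules:
        all_vetoes |= vetoes
        for a, v in votes.items():
            totals[a] += v
    allowed = {a: s for a, s in totals.items() if a not in all_vetoes}
    return max(allowed, key=allowed.get)

# A close leader plus a nearby exit resolves to "change_right".
print(arbitrate([car_ahead_module(gap_m=12.0), exit_module(dist_to_exit_m=300.0)]))
```

The design point is visible even at this scale: a new traffic entity (say, a merging vehicle) becomes one more module in the list passed to the arbiter, with no edits to the existing reasoning objects.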
We present a real-time model-based vision approach for detecting and tracking vehicles from a moving platform. It was developed in the context of the CMU Navlab project and is intended to provide the Navlabs with situational awareness in mixed traffic. Tracking is done by combining simple image processing techniques with a 3D extended Kalman filter and a measurement equation that projects from the 3D model to image space. No ground-plane assumption is made. The resulting system runs at frame rate or higher, and produces excellent estimates of road curvature and of the distance to and relative speed of a tracked vehicle. We have complemented the tracker with a novel machine-learning-based algorithm for car detection, the CANSS algorithm, which serves to initialize tracking.
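The measurement equation in such a tracker can be illustrated with a pinhole projection and its Jacobian, the linearization an extended Kalman filter needs. This is a generic sketch: the camera intrinsics, prior, and measurement values below are assumed placeholders, not the paper's calibration or state parameterization.

```python
# EKF measurement-equation sketch: h(x) projects a 3D camera-frame point
# to pixel coordinates; its Jacobian linearizes the projection for the
# Kalman update.
import numpy as np

fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0   # assumed camera intrinsics

def h(p):
    """Project a 3D camera-frame point p = (X, Y, Z) to pixels (u, v)."""
    X, Y, Z = p
    return np.array([fx * X / Z + cx, fy * Y / Z + cy])

def H_jacobian(p):
    """Jacobian dh/dp of the projection, evaluated at the current estimate."""
    X, Y, Z = p
    return np.array([[fx / Z, 0.0, -fx * X / Z**2],
                     [0.0, fy / Z, -fy * Y / Z**2]])

# One EKF update of a 3D point estimate from a single pixel measurement z.
p_est = np.array([1.0, 0.5, 20.0])            # prior 3D position (m)
P = np.eye(3) * 4.0                           # prior covariance
R = np.eye(2) * 1.0                           # pixel measurement noise
z = np.array([365.0, 262.0])                  # observed pixel location

Hj = H_jacobian(p_est)
S = Hj @ P @ Hj.T + R
K = P @ Hj.T @ np.linalg.inv(S)
p_est = p_est + K @ (z - h(p_est))
P = (np.eye(3) - K @ Hj) @ P
```

Because the measurement model projects directly from 3D to the image, depth is recovered from how the tracked vehicle's image features move and scale over time, with no flat-ground assumption required.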
Mobile robot architectures have been based on many different design principles: AI, control theory, hierarchical organization, etc. Brooks argues for a 'subsumption' approach, based on layers of very simple, real-time computations. The CMU Navlab project takes a more pragmatic approach. The bottom layer is real-time, based on local coordinates, with no high-level models or central data structures to be bottlenecks. But the architectural tools developed for the Navlab also provide hooks for a higher level, based in world coordinates and using AI planning, to control the lower layer.
SCARF is a color vision system that can detect roads in difficult situations. The results of this system are used to drive a robot vehicle, the Navlab, on a variety of roads in many different weather conditions. Specifically, SCARF has recognized roads that have degraded surfaces and edges, with no lane markings, in difficult shadow conditions. It can also recognize intersections with or without predictions from the navigation system; this is the first system able to detect intersections in color images without a priori knowledge of the intersection shape and location. SCARF uses Bayesian classification, a standard pattern recognition technique, to determine a road-surface likelihood for each pixel in a reduced color image. It then evaluates a number of road and intersection candidates by matching an ideal road-surface probability image with the results from the Bayesian classification. The best-matching candidate is passed to a simple path planning system which navigates the robot vehicle along the road or through the intersection. This paper describes the SCARF system in detail, presents results on a variety of images, and discusses the Navlab test runs using SCARF.
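A minimal sketch of per-pixel Bayesian road classification in this spirit follows, assuming Gaussian color models for the road and off-road classes; the means, covariances, and prior below are illustrative placeholders, not SCARF's learned statistics.

```python
# Per-pixel Bayesian classification sketch: Gaussian color models for road
# and off-road classes yield a posterior road probability for every pixel
# of a reduced color image.
import numpy as np

def gaussian_loglik(pixels, mean, cov):
    """Log N(pixel | mean, cov) for an (H, W, 3) block of RGB pixels."""
    d = pixels - mean
    inv = np.linalg.inv(cov)
    maha = np.einsum("...i,ij,...j->...", d, inv, d)
    return -0.5 * (maha + np.log(np.linalg.det(cov)) + 3 * np.log(2 * np.pi))

def road_probability(image, road_mean, road_cov, off_mean, off_cov,
                     prior_road=0.5):
    """Posterior P(road | color) per pixel via Bayes' rule."""
    lr = gaussian_loglik(image, road_mean, road_cov) + np.log(prior_road)
    lo = gaussian_loglik(image, off_mean, off_cov) + np.log(1 - prior_road)
    m = np.maximum(lr, lo)                      # log-sum-exp for stability
    return np.exp(lr - m) / (np.exp(lr - m) + np.exp(lo - m))

img = np.random.rand(60, 80, 3)                 # stand-in reduced color image
p = road_probability(img,
                     road_mean=np.array([0.45, 0.45, 0.45]),  # grayish asphalt
                     road_cov=np.eye(3) * 0.02,
                     off_mean=np.array([0.30, 0.55, 0.25]),   # greenish terrain
                     off_cov=np.eye(3) * 0.05)
```

The resulting probability image is then what candidate road and intersection shapes would be matched against, with the best-scoring candidate passed on to path planning.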
This paper describes the structure, implementation, and operation of a real-time mobile robot controller which integrates capabilities such as position estimation, path specification and tracking, human interfaces, fast communication, and multiple-client support. The benefits of such high-level capabilities in a low-level controller were shown by its implementation for the Navlab autonomous vehicle. In addition, performance results from the positioning and tracking systems are reported and analyzed.
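To illustrate the path-tracking capability, the sketch below shows a standard pure-pursuit steering computation; this is a generic textbook technique offered as an example, not necessarily the tracking law this controller implements.

```python
# Pure-pursuit sketch: steer along the circular arc that carries the
# vehicle from its current pose to a lookahead point on the specified path.
import math

def pure_pursuit_steering(pose, goal, wheelbase=3.0):
    """Steering angle that arcs the vehicle toward a lookahead goal point.

    pose = (x, y, heading) in world coordinates; goal = (x, y) is a point
    on the specified path roughly one lookahead distance ahead. The
    wheelbase value is an illustrative assumption.
    """
    x, y, theta = pose
    dx, dy = goal[0] - x, goal[1] - y
    # Transform the goal point into the vehicle frame.
    gx = math.cos(-theta) * dx - math.sin(-theta) * dy
    gy = math.sin(-theta) * dx + math.cos(-theta) * dy
    L2 = gx * gx + gy * gy               # squared lookahead distance
    curvature = 2.0 * gy / L2            # arc through pose and goal
    return math.atan(wheelbase * curvature)

# Vehicle at the origin heading along +x, goal 10 m ahead and 1 m left.
print(pure_pursuit_steering(pose=(0.0, 0.0, 0.0), goal=(10.0, 1.0)))
```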
Autonomous road following is a domain which spans a range of complexity, from poorly defined, unmarked dirt roads to well defined, well marked, highly structured highways. The YARF system (for Yet Another Road Follower) is designed to operate in the middle of this range of complexity, driving on urban streets. Our research program has focused on the use of feature- and situation-specific segmentation techniques driven by an explicit model of the appearance and geometry of the road features in the environment. We report results in robust detection of white and yellow painted stripes, in fitting a road model to detected feature locations to determine vehicle position and local road geometry, and in automatic location of road features in an initial image. We also describe our planned extensions to include intersection navigation.
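As an illustration of fitting a road model to detected feature locations, the sketch below fits a parabolic stripe centerline by least squares; the parabolic model and vehicle-frame coordinates are assumptions for the example, not YARF's actual road model or estimator.

```python
# Road-model fitting sketch: a stripe centerline modeled as the parabola
# x = a + b*y + c*y^2 in vehicle coordinates, fit by least squares to
# detected stripe points. 'a' approximates the vehicle's lateral offset
# and 'c' relates to local road curvature.
import numpy as np

def fit_stripe(points):
    """Least-squares fit of x = a + b*y + c*y^2 to (x, y) stripe detections."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([np.ones_like(y), y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)
    return coeffs                     # (a, b, c)

# Stripe detections as (lateral offset, lookahead distance) in meters.
detections = [(1.82, 5.0), (1.85, 10.0), (1.93, 15.0), (2.05, 20.0)]
a, b, c = fit_stripe(detections)
print(f"offset={a:.2f} m, heading slope={b:.3f}, curvature term={c:.5f}")
```

Fitting an explicit model like this is what lets per-feature detections be turned back into the quantities road following actually needs: vehicle position relative to the lane and the local road geometry ahead.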