KEYWORDS: Very large scale integration, Roads, Detection and tracking algorithms, Visual process modeling, Video, Collision avoidance, Visual system, Sensors, Convolution, Video processing
In this paper a bioinspired algorithm for collision detection is proposed, based on previous models of the locust (Locusta migratoria) visual system reported by F.C. Rind and her group at the University of Newcastle upon Tyne. The algorithm is suitable for VLSI implementation in standard CMOS technologies as a system-on-chip for automotive applications. The working principle of the algorithm is to process a video stream that represents the current scenario and to fire an alarm whenever an object approaches on a collision course. Moreover, it establishes a scale of warning states, from no danger to collision alarm, depending on the activity detected in the current scenario. In the worst case, the model fires the collision alarm 40 msec before impact (one frame earlier, at 25 frames per second). Since the average time to successfully fire an airbag system is 2 msec, this algorithm would, even in the worst case, be very helpful for arming the airbag system more efficiently, or even for taking some kind of collision-avoidance countermeasure. Furthermore, two additional modules have been included: a "Topological Feature Estimator" and an "Attention Focusing Algorithm". The former takes into account the shape of the approaching object to decide whether it is a person, a road line or a car, which helps to choose more adequate countermeasures and to filter out false alarms. The latter concentrates the processing power on the most active zones of the input frame, thus saving memory and processing-time resources.
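As a rough illustration of this working principle, the following Python sketch computes a frame-difference excitation with delayed lateral inhibition and maps the summed activity onto a graded warning scale. The layer structure, kernel size, and thresholds are illustrative assumptions, not the parameters of the proposed VLSI model.

import numpy as np
from scipy.ndimage import uniform_filter

def lgmd_step(curr, prev, prev_inhibition, w_inhib=0.6):
    """One step on two consecutive grayscale frames (2-D float arrays in [0, 1]).
    prev_inhibition should be an all-zeros array on the first call."""
    excitation = np.abs(curr - prev)                 # luminance change between frames
    net = excitation - w_inhib * prev_inhibition     # subtract delayed, laterally spread inhibition
    net = np.maximum(net, 0.0)                       # half-wave rectification
    inhibition = uniform_filter(excitation, size=3)  # lateral spread, applied on the next frame
    activity = float(net.mean())                     # summed, normalized excitation
    return activity, inhibition

def warning_level(activity, thresholds=(0.02, 0.05, 0.10)):
    """Graded warning scale: 0 = no danger ... 3 = collision alarm (thresholds are arbitrary)."""
    return int(np.searchsorted(thresholds, activity, side='right'))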
Recognizing objects of interest in real-world scenes is a principal computational goal of the primate visual system. This process involves the representation of object surfaces in the first visual areas of the neocortex. Luminance gradients are usually superimposed on object surfaces, which complicates the recovery of surface reflectance functions on the one hand, but may provide valuable information about surface texture and surface curvature on the other. Consequently, there should be a way to recognize and represent luminance gradients independently of object surfaces. However, no corresponding theory has been available up to now. Here we present a two-stage architecture which is compatible with this idea. The first stage involves the detection of luminance gradients in a given intensity image, which are subsequently recovered in the second stage. By means of a novel diffusion paradigm, our architecture is capable of building representations of arbitrarily sized luminance gradients from sparse local measurements of gradient evidence.
Since our architecture both predicts psychophysical data on Mach bands and successfully processes real-world scenes, it constitutes a potential computational theory of how luminance gradients are processed and represented in the first visual areas of the brain.
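To make the filling-in idea concrete, here is a minimal one-dimensional sketch in which sparse local gradient measurements are propagated by isotropic diffusion while the measured locations stay clamped. The paper's own diffusion paradigm may differ in its details; all parameters below are arbitrary illustrative choices.

import numpy as np

def fill_in_gradients(evidence, mask, n_iter=500, rate=0.2):
    """evidence: 1-D array of local gradient estimates, valid where mask is True."""
    u = np.where(mask, evidence, 0.0).astype(float)
    for _ in range(n_iter):
        lap = np.roll(u, 1) + np.roll(u, -1) - 2.0 * u  # discrete Laplacian
        u = u + rate * lap                              # diffusion step
        u[mask] = evidence[mask]                        # keep measured values clamped
    return u

# Example: two isolated gradient measurements spread into a smooth representation.
mask = np.zeros(100, dtype=bool); mask[[20, 80]] = True
evidence = np.zeros(100); evidence[20], evidence[80] = 1.0, 0.3
representation = fill_in_gradients(evidence, mask)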
In the central nervous systems of animals such as pigeons and locusts, neurons have been identified which signal objects approaching the animal on a direct collision course. In order to initiate escape behavior in time, these neurons must recognize a possible approach (or at least differentiate it from similar but non-threatening situations) and estimate the time-to-collision (ttc). Unraveling the neural circuitry for collision avoidance, and identifying the underlying computational principles, should thus be promising for building vision-based neuromorphic architectures, which in the near future could find applications in cars or planes. Unfortunately, a corresponding computational architecture which is able to handle real-world situations (e.g. moving backgrounds, varying lighting conditions) is still not available (successful collision avoidance by a robot has been demonstrated only in a closed environment). Here we present two computational models for signalling an impending collision.
These models are parsimonious, since they possess only the minimum number of computational units essential to reproduce the corresponding biological data. Our models show robust performance in adverse situations, such as approaching low-contrast objects or highly textured backgrounds. Furthermore, a condition is proposed under which the responses of our models match the so-called eta-function. We finally discuss which components need to be added to our model to convert it into a full-fledged real-world-environment
collision detector.
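For reference, the eta-function mentioned above is commonly written as eta(t) = C * theta_dot(t - delta) * exp(-alpha * theta(t - delta)), where theta is the angular size of the approaching object on the retina and delta a neural delay. The sketch below evaluates this form for a constant-velocity approach; the parameter values are arbitrary illustrative assumptions, not the ones used by the models described here.

import numpy as np

def eta(t, l_over_v=0.04, alpha=3.0, C=1.0, delta=0.03):
    """Eta-function response at time t (seconds; t < 0 before collision at t = 0).
    l_over_v is the object's half-size divided by its approach speed."""
    ts = t - delta                                   # delayed time argument
    theta = 2.0 * np.arctan(l_over_v / np.abs(ts))   # angular size of the object
    dtheta = 2.0 * l_over_v / (ts**2 + l_over_v**2)  # its time derivative (rad/s)
    return C * dtheta * np.exp(-alpha * theta)

t = np.linspace(-1.0, -0.05, 500)
response = eta(t)   # the response peaks well before collision and then decays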
In modeling brightness perception, one problem of high biological relevance is how luminance information is transmitted into the primary visual cortex. This is especially interesting in the light of recent neurophysiological studies, which suggest that simple cells respond, albeit weakly, to homogeneously illuminated surfaces. This indicates that simple cells possess far more functional complexity than the widespread notion of mere line and edge detectors suggests. Here we present new neural circuits for modeling even and odd simple cells, capable of transmitting brightness information without using an extra 'luminance channel'. Although these circuits by themselves cannot yet be regarded as a full brightness model, they might provide some insight into why the visual system uses certain processing strategies, e.g. the segregation into ON and OFF channels and the mutual inhibition of simple-cell pairs which are in an anti-phase relation. These simple-cell circuits turn out to be robust against noise, and thus might find application in a border-detection scheme, besides being a building block for a more sophisticated brightness model.
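As an illustration only (not the circuits proposed here), the sketch below computes even- and odd-symmetric simple-cell responses from half-wave rectified ON and OFF channels and applies mutual inhibition between anti-phase partners. The kernels, the center-surround preprocessing, and the inhibition rule are assumptions chosen for the example.

import numpy as np
from scipy.ndimage import convolve1d, gaussian_filter1d

def simple_cell_responses(luminance):
    """luminance: 1-D float array, e.g. one image row."""
    contrast = luminance - gaussian_filter1d(luminance, sigma=3.0)   # center-surround-like signal
    on, off = np.maximum(contrast, 0.0), np.maximum(-contrast, 0.0)  # ON / OFF channels
    kernels = {"odd": np.array([-1.0, 0.0, 1.0]),                    # odd-symmetric (edge-like) RF
               "even": np.array([-0.5, 1.0, -0.5])}                  # even-symmetric (line-like) RF
    responses = {}
    for name, k in kernels.items():
        cell = convolve1d(on, k) + convolve1d(off, -k)               # one polarity preference
        anti = convolve1d(on, -k) + convolve1d(off, k)               # its anti-phase partner
        # mutual inhibition: each cell's rectified output suppresses its partner
        responses[name] = (np.maximum(cell - np.maximum(anti, 0.0), 0.0),
                           np.maximum(anti - np.maximum(cell, 0.0), 0.0))
    return responses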