This PDF file contains the front matter associated with SPIE Proceedings Volume 8045, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
This paper examines the systems, hardware, and software engineering efforts required to overcome the challenges of operating autonomously around dynamic objects in complex environments. To detect these dynamic objects, the SOURCE ATO will utilize ARL/GDRS-developed moving-obstacle detection algorithms that will run on the Autonomous Navigation System (ANS) hardware.1 These algorithms use data from multiple sensors, including laser detection and ranging (LADAR), electro-optic sensors, and millimeter-wave radar (MMWR), to produce detections, which limits the erroneous identifications that occur when only one sensor is used. This paper describes the co-development of Safe Operation Technologies between the SOURCE ATO and the ANS development program. This approach allows a more rapid development cycle, which will give both current and future ground combat vehicle systems the flexibility to readily adopt emerging software, processing hardware, and sensor technologies.
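As a hedged illustration of the multi-sensor confirmation idea (a generic k-of-n voting rule, not the ARL/GDRS algorithm), requiring agreement between at least two modalities suppresses the single-sensor false alarms mentioned above:

```python
# Hypothetical k-of-n confirmation across the three sensing modalities.
def confirmed(ladar_hit: bool, eo_hit: bool, mmwr_hit: bool, k: int = 2) -> bool:
    """Accept a moving-obstacle detection only if >= k modalities report it."""
    return sum((ladar_hit, eo_hit, mmwr_hit)) >= k

assert confirmed(True, True, False)        # two modalities agree -> accept
assert not confirmed(True, False, False)   # lone LADAR hit -> reject
```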
The ability to perform off-road autonomous navigation at any time of day or night is a requirement for some unmanned
ground vehicle (UGV) programs. Because there are times when it is desirable for military UGVs to operate without
emitting strong, detectable electromagnetic signals, a passive-only terrain perception mode of operation is also often a
requirement. Thermal infrared (TIR) cameras can be used to provide day and night passive terrain perception. TIR
cameras have a detector sensitive to either mid-wave infrared (MWIR) radiation (3-5μm) or long-wave infrared (LWIR)
radiation (7-14μm). With the recent emergence of high-quality uncooled LWIR cameras, TIR cameras have become
viable passive perception options for some UGV programs. The Jet Propulsion Laboratory (JPL) has used a stereo pair
of TIR cameras under several UGV programs to perform stereo ranging, terrain mapping, tree-trunk detection,
pedestrian detection, negative obstacle detection, and water detection based on object reflections. In addition, we have
evaluated stereo range data at a variety of UGV speeds, evaluated dual-band TIR classification of soil, vegetation, and
rock terrain types, analyzed 24-hour water and 12-hour mud TIR imagery, and analyzed TIR imagery for hazard
detection through smoke. Since TIR cameras do not currently provide the resolution available from megapixel color
cameras, a UGV's daytime safe speed is often reduced when using TIR instead of color cameras. In this paper, we
summarize the UGV terrain perception work JPL has performed with TIR cameras over the last decade and describe a
calibration target developed by General Dynamics Robotic Systems (GDRS) for TIR cameras and other sensors.
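The geometry underlying stereo ranging with a TIR pair is compact enough to state directly. The following is a minimal sketch of the generic rectified-stereo relation (not JPL's implementation; the focal length and baseline values are illustrative):

```python
def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Range (m) to a point from its stereo disparity: z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite range")
    return f_px * baseline_m / disparity_px

# Example: f = 800 px, 30 cm baseline, 8 px disparity -> 30 m range.
print(depth_from_disparity(800.0, 0.30, 8.0))  # 30.0
```

The lower resolution of TIR sensors shows up directly in this relation: fewer pixels mean coarser disparity and larger range error at a given distance, which is one reason daytime safe speed drops relative to megapixel color cameras.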
The real world is too complex and variable to directly program an autonomous ground robot's control system to respond to the inputs from environmental sensors such as LIDAR and video. Learning incrementally, discarding prior raw data as it is absorbed, is important because of the vast amount of data these sensors can generate, and it is crucial because the system must generate and update its internal models in real time. There should be little difference between the training and execution phases; the system should be continually learning, engaged in "life-long learning". This paper explores research into incremental learning methods such as nearest neighbor, Bayesian classifiers, and fuzzy c-means clustering.
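As a hedged sketch of the incremental-learning idea (one simple member of the family surveyed, not the paper's system), a nearest-centroid classifier can fold each sample into a running mean and then discard it, so memory stays constant no matter how much sensor data streams in:

```python
import numpy as np

class IncrementalNearestCentroid:
    """Online classifier: per-class centroids updated sample-by-sample."""
    def __init__(self):
        self.centroids = {}   # label -> mean feature vector
        self.counts = {}      # label -> samples absorbed so far

    def learn_one(self, x: np.ndarray, label: str) -> None:
        if label not in self.centroids:
            self.centroids[label] = x.astype(float).copy()
            self.counts[label] = 1
        else:
            self.counts[label] += 1
            # Running mean: c += (x - c) / n, so the raw sample can be dropped.
            self.centroids[label] += (x - self.centroids[label]) / self.counts[label]

    def predict(self, x: np.ndarray) -> str:
        return min(self.centroids, key=lambda lb: np.linalg.norm(x - self.centroids[lb]))
```

Training and execution use the same learn/predict loop, which is exactly the "little difference between phases" property argued for above.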
The Multi-Agent Tactical Sentry Unmanned Ground Vehicle, developed at Defence R&D Canada - Suffield, has
been in service with the Canadian Forces for six years. This tele-operated wheeled vehicle provides a capability
for point detection of chemical, biological, radiological, and nuclear agents.
During the service life of this system, it has become apparent that a means of automatically detecting obstacles
in tele-operated and semi-autonomous modes would greatly increase the safety and reliability of the vehicle in
cluttered or human-occupied operating environments. This paper documents the design of such a system based
on a 24 GHz automotive radar.
With increasingly available high-frequency radar components, imaging radar has become practical for mobile robotic applications. Navigation, ODOA, situational awareness, and safety applications can be supported in small, lightweight packaging. Radar has the additional advantage of being able to sense through aerosols, smoke, and dust
that can be difficult for many optical systems. The ability to directly measure the range rate of an object is also an
advantage in radar applications. This paper will explore the applicability of high frequency imaging radar for mobile
robotics and examine a W-band 360 degree imaging radar prototype. Indoor and outdoor performance data will be
analyzed and evaluated for applicability to navigation and situational awareness.
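The range-rate advantage follows from the Doppler relation. A minimal sketch (standard monostatic radar physics, with an illustrative W-band carrier):

```python
C = 299_792_458.0  # speed of light, m/s

def radial_velocity(doppler_hz: float, carrier_hz: float) -> float:
    """Radial velocity (m/s) from Doppler shift: v = f_d * c / (2 * f_c)."""
    return doppler_hz * C / (2.0 * carrier_hz)

# At 94 GHz, a 6.27 kHz Doppler shift corresponds to ~10 m/s closing rate.
print(radial_velocity(6270.0, 94e9))  # ~10.0
```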
This paper describes an object detection and classification method for an Unmanned Ground Vehicle (UGV) using a range sensor and an image sensor: a 3D Light Detection And Ranging (LIDAR) sensor and a monocular camera, respectively. For safe driving of the UGV, pedestrians and cars along the vehicle's route should be detected. Object detection and classification based on a camera alone has an inherent problem: the algorithm must extract features and search the entire input image, which contains a great deal of information about both objects and the environment, making reliable classification decisions difficult. The problem is eased if each image region handed to the classifier contains only one candidate object. In this paper, we introduce a newly developed 3D LIDAR sensor and apply a fusion method to the 3D LIDAR data and camera data. The 3D LIDAR sensor, developed by the LG Innotek Consortium in Korea and named KIDAR-B25, detects objects, determines each object's Region of Interest (ROI) from the 3D information, and passes the ROI to the camera pipeline for classification. In the 3D LIDAR domain, we detect breakpoints using a Kalman filter and then cluster points with a line-segment method to determine an object's ROI. In the image domain, we extract feature data from the ROI using Haar-like features. Finally, each ROI is classified as a pedestrian or a car against a trained database using an AdaBoost algorithm. To verify our system, we evaluated its performance mounted on a ground vehicle through field tests in an urban area.
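To make the handoff concrete, here is a hedged sketch of the LIDAR-to-camera ROI step (calibration values, the model file name, and the classifier choice are illustrative assumptions, not the KIDAR-B25 interface):

```python
import numpy as np
import cv2

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])      # camera intrinsics (assumed)
R, t = np.eye(3), np.zeros(3)        # LIDAR-to-camera extrinsics (assumed)

def cluster_to_roi(points_3d: np.ndarray) -> tuple:
    """Project an Nx3 LIDAR cluster into the image; return (x, y, w, h)."""
    cam = (R @ points_3d.T).T + t            # transform into the camera frame
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]              # perspective divide
    x0, y0 = uv.min(axis=0)
    x1, y1 = uv.max(axis=0)
    return int(x0), int(y0), int(x1 - x0), int(y1 - y0)

# Classify only the ROI patch (Haar features + boosted cascade), so the
# classifier never has to search the full image.
cascade = cv2.CascadeClassifier("pedestrian_cascade.xml")  # hypothetical model

def roi_contains_pedestrian(gray: np.ndarray, roi: tuple) -> bool:
    x, y, w, h = roi
    return len(cascade.detectMultiScale(gray[y:y + h, x:x + w])) > 0
```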
This paper describes our results to date on the Aladdin project, an ongoing effort to enable small UGVs to open doors
semi-autonomously. Our system consists of a modular general-purpose gripper and software that provides semi-autonomous capabilities. The gripper features compliant elements that simplify operations such as turning a doorknob
and opening a door; this gripper can be retrofitted onto existing general-purpose robotic manipulators without extensive
hardware modifications. The software provides semi-autonomous door opening capability through an OCU; these
capabilities are focused on targeting and reaching for a doorknob, a subtask that our initial testing showed would provide
the greatest improvement in door opening operations. This paper describes our system and the results of our evaluations
on the door opening task. We continue to develop both the hardware and software with the ultimate goal of fully
autonomous door-opening.
In the field of military Unmanned Ground Vehicles (UGVs), military units are forced to sweep heavily populated cities and towns in search of hostile enemies. These urban operations are referred to as MOUT (Military Operations on Urban Terrain). During urban operations, UGVs encounter difficulties when opening doors. Current manipulator end-effectors share these difficulties because they are not designed to mimic human hand operations.
This paper explains the mechanical nature of the Modular Universal Door Opening End-effector (MUDOE). MUDOE is the result of our development research to improve robotic manipulators' ability to negotiate closed doors. The presented solution can mimic human hand characteristics when opening doors. The end-effector maintains a high number of degrees of freedom (DoF) and grasps the doorknob by applying equally distributed forces to all points of contact.
Under a research effort sponsored by the U.S. Army Tank Automotive Research, Development, and Engineering Center
(TARDEC), we are exploring technologies that can be used to provide an operator with the ability to more intuitively
control high-degree-of-freedom arms while providing the operator with haptic feedback to more effectively interact with
the environment. This paper highlights the results of the research as well as early test results on a number of prototype
systems currently in development. We will demonstrate advantages and disadvantages of some of the leading approaches
to intuitive control and haptic feedback.
Self-Organizing, Collaborative, and Unmanned ISR Robots: Joint Session with Conference 8062
Brian K. Funk, Jonathan C. Castelli, Adam S. Watkins, Christopher B. McCubbin, Steven J. Marshall, Jeffrey D. Barton, Andrew J. Newman, Cammy K. Peterson, Jonathan T. DeSena, et al.
The Johns Hopkins University Applied Physics Laboratory deployed and demonstrated a prototype Cooperative Hunter
Killer (CHK) Unmanned Aerial System (UAS) capability and a prototype Upstream Data Fusion (UDF) capability as
participants in the Joint Expeditionary Force Experiment 2010 in April 2010. The CHK capability was deployed at the
Nevada Test and Training Range to prosecute a convoy protection operational thread. It used mission-level autonomy
(MLA) software applied to a networked swarm of three Raven hunter UAS and a Procerus Miracle surrogate killer UAS,
all equipped with full motion video (FMV). The MLA software provides the capability for the hunter-killer swarm to
autonomously search an area or road network, divide the search area, deconflict flight paths, and maintain line of sight
communications with mobile ground stations. It also provides an interface for an operator to designate a threat and
initiate automatic engagement of the target by the killer UAS. The UDF prototype was deployed at the Maritime
Operations Center at Commander Second Fleet, Naval Station Norfolk to provide intelligence analysts and the ISR
commander with a common fused track picture from the available FMV sources. It consisted of a video exploitation
component that automatically detected moving objects, a multiple hypothesis tracker that fused all of the detection data
to produce a common track picture, and a display and user interface component that visualized the common track picture
along with appropriate geospatial information such as maps and terrain as well as target coordinates and the source
video.
Unmanned aerial systems (UAS) have proven themselves to be indispensable in providing intelligence, surveillance, and
reconnaissance (ISR) over the battlefield. Constellations of heterogeneous, multi-purpose UAS are being tasked to
provide ISR in an unpredictable environment. This necessitates the dynamic replanning of critical missions as weather
conditions change, new observation targets are identified, aircraft are lost or equipment malfunctions, and new airspace
restrictions are introduced. We present a method to generate coordinated mission plans for constellations of UAS with
multiple flight goals and potentially competing objectives, and update them on demand as the operational situation
changes. We use a fast evolutionary algorithm-based, multi-objective optimization technique. The updated flight routes
maintain continuity by considering where the ISR assets have already flown and where they still need to go. Both the
initial planning and replanning take into account factors such as area of analysis coverage, restricted operating zones,
maximum control station range, adverse weather effects, military terrain value, and sensor performance. Our results
demonstrate that by constraining the space of potential solutions using an intelligently-formed air maneuver network
with a subset of potential airspace corridors and navigational waypoints, we can ensure global optimization for multiple
objectives considering the situation both before and after the replanning is initiated. We employ sophisticated
visualization techniques using a geographic information system to help the user "look under the hood" of the algorithms
to understand the effectiveness and viability of the generated ISR mission plans and identify potential gaps in coverage.
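A hedged sketch of the selection core of such a planner (a generic Pareto-dominance filter and a network-constrained mutation, not the authors' algorithm; `corridors` is an assumed adjacency map of the air maneuver network):

```python
import random

def dominates(a, b):
    """a dominates b if no objective is worse and at least one is better (minimizing)."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def pareto_front(population, evaluate):
    """Keep routes whose objective vectors no other route dominates."""
    scored = [(route, evaluate(route)) for route in population]
    return [r for r, s in scored if not any(dominates(s2, s) for _, s2 in scored)]

def mutate(route, corridors):
    """Swap one waypoint for a neighbor allowed by the maneuver network."""
    i = random.randrange(len(route))
    child = list(route)
    child[i] = random.choice(corridors[route[i]])
    return child
```

Constraining mutation to the corridor network is what keeps the search space small enough for on-demand replanning.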
Military and other national security agencies have been denied unfettered access to the National Air Space (NAS)
because their unmanned aircraft lack a reliable and effective collision avoidance capability. To overcome the constraints
imposed on UAS use of the NAS, a new, conformable collision avoidance system has been developed - one that will be
effective in all flyable weather conditions, overcoming the shortfalls of other sensing systems. Upon implementation this
system will achieve collision avoidance capability for UASs deployed for national security purposes and will allow
expansion of UAS usage for commercial or other civil purposes.
In recent years research into legged locomotion across extreme terrains has increased. Much of this work was
done under the DARPA Learning Legged Locomotion program that utilized a standard Little Dog robot platform
and prepared terrain test boards with known geometric data. While path planning using geometric information
is necessary, acquiring and utilizing tractive and compressive terrain characteristics is equally important. This
paper describes methods and results for learning tractive and compressive terrain characteristics with the Little
Dog robot. The estimation of terrain traction and compressive/support capabilities using the mechanisms and
movements of the robot rather than dedicated instruments is the goal of this research. The resulting characteristics
may differ from those of standard tests; however, they will be directly usable by the locomotion controllers given that they are obtained in the physical context of the actual robot and its actual movements. This paper elaborates on the methods used and presents results. Future work will develop better-suited probabilistic models and interweave these methods with other purposeful actions of the robot to lessen the need for direct terrain probing actions.
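A hedged illustration of one such estimate (a simple ratio at slip onset, offered as an assumption rather than the paper's estimator): if the leg reports tangential and normal ground-reaction forces, the effective friction coefficient can be read off their ratio whenever slip is detected:

```python
import numpy as np

def traction_coefficient(tangential_n: np.ndarray,
                         normal_n: np.ndarray,
                         slipping: np.ndarray) -> float:
    """Mean tangential/normal force ratio over samples flagged as slipping."""
    idx = slipping.astype(bool)
    if not idx.any():
        raise ValueError("no slip events observed; traction only lower-bounded")
    return float(np.mean(tangential_n[idx] / normal_n[idx]))
```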
Currently deployed small UGVs operate at speeds up to around 6 mph and have proven their usefulness in explosive
ordnance disposal (EOD) missions. As part of the TARDEC-funded Stingray Project, iRobot is investigating techniques
to increase the speed of small UGVs so they can be useful in a wider range of missions, such as high-speed
reconnaissance and infantry assault missions. We have developed a prototype Stingray PackBot, using wheels rather
than tracks, that is capable of traveling at speeds up to 18 mph. A key issue when traveling at such speeds is how to
maintain stability during sharp turns and over rough terrain. We are developing driver assist behaviors that will provide
dynamic stability control for high-speed small UGVs using techniques such as dynamic weight shifting to limit oversteer
and understeer. These driver assist behaviors will enable operators to use future high-speed small UGVs in high
optempo infantry missions and keep warfighters out of harm's way.
This paper presents a real-time motion estimation module for ground vehicles based on the fusion of monocular visual
odometry and low-cost inertial measurement unit data. The system features a novel algorithmic scheme enabling
accurate and robust scale estimation and odometry at high speeds. Results of multiple performance characterization
experiments (on rough terrain at speeds up to 20 mph and smooth roadways at speeds of up to 75 mph) are presented.
The prototype system demonstrates high levels of precision (relative distance error less than 1%, and less than 0.5% on
paved roads, yaw drift rate ~2 degrees per km) in multiple configurations, including various optics and vehicles.
Performance limitations, including those specific to monocular vision, are analyzed and directions for further
improvements are outlined.
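As a hedged sketch of the scale-recovery problem monocular VO poses (a generic least-squares alignment, not the paper's scheme): per-frame VO translations are known only up to a scale s, which a metric source such as IMU-propagated displacement pins down in closed form:

```python
import numpy as np

def estimate_scale(vo_steps: np.ndarray, metric_steps: np.ndarray) -> float:
    """Least-squares s minimizing sum (metric - s * vo)^2: s = <m,v>/<v,v>."""
    return float(vo_steps @ metric_steps / (vo_steps @ vo_steps))

# Example: VO reports unit-scale step lengths; the IMU says each was ~1.9 m.
print(estimate_scale(np.array([1.0, 1.1, 0.9]), np.array([1.9, 2.1, 1.7])))
```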
A hovercraft is an amphibious vehicle that hovers just above the ground or water on a cushion of air. The air-cushion vehicle concept can be traced back to 1719; the practical form of today's hovercraft dates to 1955. The objective of this paper is to design, simulate, and implement an autonomous model of a small hovercraft, equipped with a mine detector, that can travel over any terrain. A real-time layered fuzzy navigator for a hovercraft in a dynamic environment is proposed. The system consists of a Takagi-Sugeno-type fuzzy motion planner and a modified proportional-navigation-based fuzzy controller. The system philosophy is inspired by human routing between obstacles based on visual information, including the right and left views, from which the next step toward the goal is chosen in free space. It intelligently combines two behaviours to cope with obstacle avoidance as well as approaching a goal along a proportional navigation path that accounts for the hovercraft kinematics. The MATLAB/Simulink software tool is used to design and verify the proposed algorithm.
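A hedged sketch of the two-behaviour blend (illustrative membership functions and rule consequents, not the authors' rule base): a zero-order Takagi-Sugeno controller outputs steering as the firing-strength-weighted average of per-rule crisp outputs, so obstacle avoidance and goal seeking trade off smoothly:

```python
def steering_deg(goal_bearing_deg: float,
                 left_clearance_m: float,
                 right_clearance_m: float) -> float:
    # Membership of "obstacle near" on each side: 1 at 0 m, 0 beyond 3 m.
    near_l = max(0.0, 1.0 - left_clearance_m / 3.0)
    near_r = max(0.0, 1.0 - right_clearance_m / 3.0)
    seek = max(0.0, 1.0 - max(near_l, near_r))   # free space: steer to goal
    # Rules: near-left -> steer right (+), near-right -> steer left (-),
    # otherwise follow the goal bearing.
    rules = [(near_l, +30.0), (near_r, -30.0), (seek, goal_bearing_deg)]
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.0
```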
Currently, the 3000+ robotic systems fielded in theater are entirely teleoperated. This constant dependence on operator
control introduces several problems, including a large cognitive load on the operator and a limited ability for the operator
to maintain an appropriate level of situational awareness of his surroundings. One solution to reduce the dependence on
teleoperation is to develop autonomous behaviors for the robot to reduce the strain on the operator.
We consider mapping and navigation to be fundamental to the development of useful field autonomy for small
unmanned ground vehicles (SUGVs). To this end, we have developed baseline autonomous capabilities for our SUGV
platforms, making use of the open-source Robot Operating System (ROS) software from Willow Garage, Inc. Their
implementations of mapping and navigation are drawn from the most successful published academic algorithms in
robotics.
In this paper, we describe how we bridged our previous work with the Packbot Explorer to incorporate a new processing
payload, new sensors, and the ROS system configured to perform the high-level autonomy tasks of mapping and
waypoint navigation. We document our most successful parameter selection for the ROS navigation software in an
indoor environment and present results of a mapping experiment.
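For readers unfamiliar with the stack, waypoint navigation through ROS's standard move_base action interface looks roughly like this (a minimal rospy sketch; the node name, frame, and coordinates are illustrative, not our configuration):

```python
#!/usr/bin/env python
import rospy
import actionlib
from actionlib_msgs.msg import GoalStatus
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def go_to(x: float, y: float) -> bool:
    """Send one map-frame waypoint to move_base and block until done."""
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0   # face along +x
    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state() == GoalStatus.SUCCEEDED

if __name__ == "__main__":
    rospy.init_node("waypoint_demo")
    go_to(2.0, 1.5)
```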
It is important to minimize the energy consumption of autonomous ground vehicles (AGVs) deployed in real world
missions. One of the ways that this can be accomplished is to choose the vehicle's motion to minimize the mechanical
and electrical energy usage required by the vehicle's motion. This paper considers energy efficient motion planning for
skid-steered AGVs, an important and large class of all-terrain vehicles. An experimentally verified power consumption
model for skid-steered vehicles has been recently developed based on the "exponential friction model," which yields
power consumption predictions that are far more accurate than those obtained using Coulomb's friction model. At a
given velocity the power consumption is essentially a function of the vehicle turning radius. This paper demonstrates
energy efficient motion planning using Sampling Based Model Predictive Optimization (SBMPO), a recently developed
motion planning algorithm. In this research SBMPO uses a simple kinematic model of the vehicle to determine feasible
vehicle paths and the skid-steered vehicle power model to compute the energy consumption (i.e., the cost) along a given
trajectory. The results here are for a vehicle moving on a single surface at constant velocity. Energy optimal motion
planning is compared with distance optimal motion planning and the results demonstrate the importance of considering
energy consumption in the motion planning process.
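A hedged sketch of how the energy cost enters the planner's edge evaluation. The power model below is a stand-in, not the paper's experimentally verified exponential-friction model; it simply mimics the qualitative behavior that power grows sharply as turning radius shrinks:

```python
def power_w(speed_mps: float, turn_radius_m: float,
            p_straight_w: float = 150.0, k_turn: float = 400.0) -> float:
    """Assumed skid-steer power: straight-line term plus a 1/r turning penalty."""
    return p_straight_w + k_turn * speed_mps / max(turn_radius_m, 1e-3)

def arc_energy_j(speed_mps: float, turn_radius_m: float, arc_len_m: float) -> float:
    """Energy = power * traversal time at constant speed."""
    return power_w(speed_mps, turn_radius_m) * (arc_len_m / speed_mps)

# Two candidate edges of equal length: the gentler turn costs far less energy,
# which is why energy-optimal paths differ from distance-optimal ones.
print(arc_energy_j(1.0, 10.0, 5.0))  # ~950 J
print(arc_energy_j(1.0, 1.0, 5.0))   # ~2750 J
```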
The U.S. Army's desire for increased standoff distances between Soldiers and disguised explosive threats has yielded a
complex new technical challenge: augment existing small military robots with state-of-the-art detection and neutralization
technology. The magnitude of the challenge is increased by the need for reliable autonomy that allows the robot to
operate in different environments (e.g., complex and urban terrains, confined areas, and underground locations). This
paper describes lessons learned during efforts in 2008-09 to identify and remediate risks of developing a countermine
robot system. It also addresses issues that need attention to achieve total mission success. The work studied three phases
of a robotic countermine system: move to a threat area, investigate that area with sensor(s), and neutralize detected
threats. Each of these phases is essential, yet attention tends to focus on the third one. The focus of this paper is on risks
and lessons pertaining to the first two. What was learned about moving a countermine robot to the area of expected
threats? What is necessary for a robot to maneuver sensors and have the maximum probability of detection (Pd) of
hazards while minimizing the false alarm rate (FAR)? This paper presents observations during demonstration and test
events over the past 2 years. From those observations, lessons learned are summarized as a foundation for realizing a
countermine robot and a path forward.
We present a rigorous treatment of coalition formation based on trust interactions in multi-agent systems. Current
literature on trust in multi-agent systems primarily deals with trust models and protocols of interaction in noncooperative
scenarios. Here, we use cooperative game theory as the underlying mathematical framework to study the
trust dynamics between agents as a result of their trust synergy and trust liability in cooperative coalitions. We rigorously
justify the behaviors of agents for different classes of games, and discuss ways to exploit the formal properties of these
games for specific applications, such as unmanned cooperative control.
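One standard object from this framework is the Shapley value, which divides a coalition's worth among agents by average marginal contribution. A hedged, textbook-style sketch (the trust-synergy values below are invented for illustration):

```python
from itertools import permutations

def shapley(agents, v):
    """Shapley value; v maps a frozenset of agents to the coalition's worth."""
    phi = {a: 0.0 for a in agents}
    orders = list(permutations(agents))
    for order in orders:
        coalition = frozenset()
        for a in order:
            phi[a] += v(coalition | {a}) - v(coalition)   # marginal contribution
            coalition = coalition | {a}
    return {a: phi[a] / len(orders) for a in agents}

# Two agents whose trust synergy makes teaming worth more than working alone.
worth = {frozenset(): 0, frozenset({1}): 1, frozenset({2}): 1, frozenset({1, 2}): 3}
print(shapley([1, 2], lambda s: worth[s]))  # {1: 1.5, 2: 1.5}
```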
The problem of surveilling moving targets using mobile sensor agents (MSAs) is applicable to a variety of
fields, including environmental monitoring, security, and manufacturing. Several authors have shown that the
performance of a mobile sensor can be greatly improved by planning its motion and control strategies based
on its sensing objectives. This paper presents an information potential approach for computing the MSAs'
motion plans and control inputs based on the feedback from a modified particle filter used for tracking moving
targets. The modified particle filter, as presented in this paper, implements a new sampling method (based on supporting intervals of density functions) that accounts for the latest sensor measurements and adapts a mixture representation of the probability density functions (PDFs) for the target motion accordingly. It is
assumed that the target motion can be modeled as a semi-Markov jump process, and that the PDFs of the
Markov parameters can be updated based on real-time sensor measurements by a centralized processing unit
or MSAs supervisor. Subsequently, the MSAs supervisor computes an information potential function that is
communicated to the sensors, and used to determine their individual feedback control inputs, such that sensors
with bounded field-of-view (FOV) can follow and surveil the target over time.
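For contrast with the modified filter, the baseline sequential importance resampling loop it builds on looks like this (a generic skeleton; the paper's supporting-interval sampler and semi-Markov motion model would replace the plain `motion` and `likelihood` hooks):

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, motion, likelihood, z):
    """One predict-update-resample cycle of a SIR particle filter."""
    particles = motion(particles)                    # propagate target hypotheses
    weights = weights * likelihood(z, particles)     # weight by measurement z
    weights = weights / weights.sum()
    n = len(particles)
    if 1.0 / np.sum(weights ** 2) < n / 2:           # effective sample size low?
        idx = rng.choice(n, size=n, p=weights)       # resample with replacement
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```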
Increased use of Miniature (Unmanned) Aerial Vehicles (MAVs) has been accompanied by a notable lack of sensors suitable for enabling further increases in levels of autonomy and, consequently, integration into the
National Airspace System (NAS). The majority of available sensors suitable for MAV integration are based on
infrared detectors, focal plane arrays, optical and ultrasonic rangefinders, etc. These sensors are generally not
able to detect or identify other MAV-sized targets and, when detection is possible, considerable computational
power is typically required for successful identification. Furthermore, performance of visual-range optical sensor
systems can suffer greatly when operating in the conditions that are typically encountered during search and
rescue, surveillance, combat, and most common MAV applications. However, the addition of a miniature radar
system can, in concert with other sensors, provide comprehensive target detection and identification capabilities
for MAVs. This trend is observed in manned aviation where radar systems are the primary detection and
identification sensor system. Within this document a miniature, lightweight X-Band radar system for use on a
miniature (710mm rotor diameter) rotorcraft is described. We present analyses of the performance of the system
in a realistic scenario with two MAVs. Additionally, an analysis of MAV navigation and collision avoidance
behaviors is performed to determine the effect of integrating radar systems into MAV-class vehicles.
To support the missions and tasks of mixed robotic/human teams, future robotic systems will need to adapt to the
dynamic behavior of both teammates and opponents. One of the basic elements of this adaptation is the ability to exploit
both long- and short-term temporal data. This adaptation allows robotic systems to predict/anticipate, as well as
influence, future behavior for both opponents and teammates and will afford the system the ability to adjust its own
behavior in order to optimize its ability to achieve the mission goals.
This work is a preliminary step in the effort to develop online entity behavior models through a combination of learning
techniques and observations. As knowledge is extracted from the system through sensor and temporal feedback, agents
within the multi-agent system attempt to develop and exploit a basic movement model of an opponent. For the purpose
of this work, extraction and exploitation is performed through the use of a discretized two-dimensional game. The game
consists of a predetermined number of sentries attempting to keep an unknown intruder agent from penetrating their
territory. The sentries utilize temporal data coupled with past opponent observations to hypothesize the probable
locations of the opponent and thus optimize their guarding locations.
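A hedged sketch of the opponent-location hypothesis update (an illustrative grid-belief model, not the paper's exact formulation): between sightings the belief diffuses under an assumed movement model, and each sentry's empty observation zeroes out cells it can see:

```python
import numpy as np

def diffuse(belief: np.ndarray, stay: float = 0.6) -> np.ndarray:
    """Spread P(intruder) to 4-neighbors; mass leaving the grid is renormalized."""
    spread = (1.0 - stay) / 4.0
    b = stay * belief
    b[1:, :]  += spread * belief[:-1, :]
    b[:-1, :] += spread * belief[1:, :]
    b[:, 1:]  += spread * belief[:, :-1]
    b[:, :-1] += spread * belief[:, 1:]
    return b / b.sum()

def observe_empty(belief: np.ndarray, seen_cells) -> np.ndarray:
    """A sentry saw nothing in these cells: zero them and renormalize."""
    b = belief.copy()
    for r, c in seen_cells:
        b[r, c] = 0.0
    return b / b.sum()
```

Sentries can then be posted at the highest-probability cells after each update.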
Helicopter UAVs can be extensively used for military missions as well as in civil operations, ranging from multirole
combat support and search and rescue, to border surveillance and forest fire monitoring. Helicopter UAVs
are underactuated nonlinear mechanical systems with correspondingly challenging controller designs. This paper
presents an optimal controller design for the regulation and vertical tracking of an underactuated helicopter
using an adaptive critic neural network framework. The online approximator-based controller learns the infinite-horizon
continuous-time Hamilton-Jacobi-Bellman (HJB) equation and then calculates the corresponding optimal
control input that minimizes the HJB equation forward-in-time. In the proposed technique, optimal regulation
and vertical tracking is accomplished by a single neural network (NN) with a second NN necessary for the virtual
controller. Both of the NNs are tuned online using novel weight update laws. Simulation results are included to
demonstrate the effectiveness of the proposed control design in hovering applications.
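For reference, the infinite-horizon HJB equation such a critic approximates has the standard control-affine form below (textbook notation assumed here, not taken from the paper):

```latex
% Dynamics \dot{x} = f(x) + g(x)u with cost r(x,u) = Q(x) + u^{\top} R u.
\[
  0 = \min_{u}\Big[\, Q(x) + u^{\top} R u
      + \nabla V^{*}(x)^{\top}\big(f(x) + g(x)u\big) \Big],
  \qquad
  u^{*}(x) = -\tfrac{1}{2}\, R^{-1} g(x)^{\top} \nabla V^{*}(x).
\]
```

The critic NN approximates V* online, and the optimal input then follows from the gradient expression on the right.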
Physics-based simulations of autonomous unmanned ground vehicles (UGVs) present unique challenges and advantages compared to real-time simulations with lower-fidelity models. We have created a high-fidelity simulation environment, called the Virtual Autonomous Navigation Environment (VANE), to perform physics-based simulations of UGVs. To highlight the capabilities of the VANE, we recently completed a simulation of a robot
performing a reconnaissance mission in a typical Middle Eastern town. The result of the experiment demonstrated
the need for physics-based simulation for certain circumstances such as LADAR returns from razor wire
and GPS dropout and dilution of precision in urban canyons.
There have been large gains in the field of robotics, both in hardware sophistication and technical capabilities.
However, as more capable robots have been developed and introduced to battlefield environments, the problem of
interfacing with human controllers has proven to be challenging. Particularly in the field of military applications,
controller requirements can be stringent and can range from size and power consumption, to durability and cost.
Traditional operator control units (OCUs) tend to resemble laptop personal computers (PCs), as these devices are
mobile and have ample computing power. However, laptop PCs are bulky and have greater power requirements.
To approach this problem, a lightweight, inexpensive controller was created based on a mobile phone running the Android operating system. It was designed to control an iRobot PackBot through the Army Research Laboratory (ARL) in-house Agile Computing Infrastructure (ACI). The hardware capabilities of the mobile phone, such as Wi-Fi communications and a touch screen interface, and the flexibility of the Android operating system made it a compelling
platform. The Android based OCU offers a more portable package and can be easily carried by a soldier along with
normal gear requirements. In addition, the one hand operation of the Android OCU allows for the Soldier to keep an
unoccupied hand for greater flexibility.
To validate the Android OCU as a capable controller, experimental data were collected evaluating use of the controller and of a traditional tablet-PC-based OCU. Initial analysis of qualitative data collected from participants suggests that the Android OCU performed well.
The functional software components of an autonomous robotic system express behavior via commands to its actuators,
based on processed inputs from its sensors; we propose an additional set of "cognitive" capabilities for robotic systems
of all types, based on the comprehensive logging of all available data, including sensor inputs, behavioral states, and
outputs sent to actuators. A robot should maintain a "sense" of its own (piecewise) continuous existence through time
and space; it should in some sense "get a life," providing a level of self-awareness and self-knowledge. Self-awareness
includes the ability to survive and work through unexpected power glitches while executing a task or mission. Self-knowledge includes an extensive world model, including a model of self and of the purpose context in which the robot is operating (deontics). Our system must support proactive self-test, monitoring, and calibration, and maintain a "personal" health/repair history, supporting system test and evaluation by continuously measuring performance throughout the entire product lifecycle. It will include episodic memory and a system "lifelog," and will also participate in multiple modes of human-robot interaction (HRI).
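A hedged sketch of what a minimal "lifelog" substrate might look like (field names and format are illustrative assumptions): an append-only, timestamped record of sensor inputs, behavioral state, and actuator outputs that survives restarts:

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class LifelogEntry:
    t: float                 # wall-clock timestamp
    sensors: dict            # processed sensor inputs
    behavior_state: str      # current behavioral state
    actuator_cmds: dict      # outputs sent to actuators

def append(path: str, entry: LifelogEntry) -> None:
    """Append-only JSON-lines log: a power glitch loses at most one record."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

append("lifelog.jsonl",
       LifelogEntry(time.time(), {"battery_v": 24.1}, "patrol", {"left_wheel": 0.4}))
```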
This paper focuses on robotic technologies and operational capabilities of multiscale robots that demonstrate a unique class of microsystems with the ability to navigate diverse terrains and environments. We introduce two classes of centimeter-scale robots that combine multiple locomotion modalities: discrete and continuous robots, referred to here as D-Starbot and C-Starbot, respectively. The first generation of these robots was designed for rapid shape reconfiguration and flipping recovery, accomplishing tasks such as lowering and raising to dexterously go over and under obstacles, deforming to roll past hostile locations, and squeezing through openings smaller than their nominal size. The D-Starbot is based on novel mechanisms that allow shape reconfiguration to accomplish tasks such as lowering and raising to go over and under obstacles as well as squeezing through small voids. The C-Starbot is a new class of foldable robots designed for a high degree of manufacturability. It consists of flexible structures built from composite laminates with embedded microsystems. The C-Starbot design concept is suitable for robots that could emulate and combine multiple locomotion modalities such as walking, running, crawling, gliding, clinging, climbing, flipping, and jumping. The first generation of C-Starbot has a centimeter-scale structure consisting of flexible flaps, each coupled with a muscle-like mechanism. Untethered D-Starbot designs were prototyped and tested for multifunctional locomotion capabilities in indoor and outdoor environments. We present the foldable mechanism and initial prototypes of C-Starbot capable of hopping and squeezing in different environments. The kinematic performance of the flexible robots is analyzed using the large elastic deflection of a single arm actuated by a pulling force applied at variable angles under payload and friction forces.
Animal behavioral, physiological and neurobiological studies are providing a wealth of inspirational data for robot
design and control. Several very different biologically inspired mobile robots will be reviewed. A robot called DIGbot is
being developed that moves independent of the direction of gravity using Distributed Inward Gripping (DIG) as a rapid
and robust attachment mechanism observed in climbing animals. DIGbot is an 18 degree of freedom hexapod with
onboard power and control systems. Passive compliance in its feet, which is inspired by the flexible tarsus of the
cockroach, increases the robustness of the adhesion strategy and enables DIGbot to execute large steps and stationary
turns while walking on mesh screens. A Whegs™ robot, inspired by insect locomotion principles, is being developed that
can be rapidly reconfigured between tracks and wheel-legs and can carry a GeoSystems Zipper Mast. The mechanisms that
cause it to passively change its gait on irregular terrain have been integrated into its hubs for a compact and modular
design. The robot is designed to move smoothly on moderately rugged terrain using its tracks and run on irregular terrain
and stairs using its wheel-legs. We are also developing soft bodied robots that use peristalsis, the same method of
locomotion earthworms use. We present a technique of using a braided mesh exterior to produce fluid waves of motion
along the body of the robot that increase the robot's speed relative to previous designs. The concept is highly scalable, with applications ranging from endoscopes to water, oil, or gas line inspection.
In this project, we further developed and tested a "ZipperMast" for small robots and legacy manned vehicles. The
ZipperMast knits three coiled bands of spring steel together to form a rigid mast. As the mast is extended, it draws up a
cable connecting the host platform to the payload, typically antennas and sensors. Elevating the payload improves line of
sight, and thus improves radio communication and surveillance situation awareness. When the mast is retracted, the
interior cable slides into a horizontal tray. The ZipperMast is a scalable design. We have made systems that elevate to 8 and 20 feet. The 8-foot ZipperMast collapses to less than 8 inches high and 8 inches wide. The 20-foot ZipperMast collapses to less than 12 inches high and 18 inches wide. In this paper we report on tests of the mechanical properties of
the mast, specifically the strength and stiffness under quasi-static and impulsive loading. These properties are important
for specifying constraints on height as a function of speed and payload and on speed as a function of height and payload
in order to ensure that the mast will not fail in the event of a sudden stop, such as a collision.
This paper outlines research experiments on quantitative evaluation of 3D geospatial data obtained by means of the Photogrammetric Small UAV (PSUAV) developed at Michigan Tech. The PSUAV platform is equipped with an autopilot and can accommodate a payload of up to 11 pounds. Experiments were performed with a 12 MP Canon EOS Rebel camera, which was subjected to calibration procedures. Surveying-grade GPS equipment was used to prepare ground calibration sites. Processing of the obtained datasets encompasses sensor modeling, single-photo resections with image co-registration, mosaicking, and finally 3D terrain model generation. One of the most important results achieved at the current stage of PSUAV development is a method and algorithms for comparing UAV-derived DEMs with models obtained from other geospatial sources.
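A hedged sketch of the basic comparison step (generic DEM differencing, not the authors' method): given two co-registered elevation grids over the same footprint, report the bias and RMSE of the elevation differences:

```python
import numpy as np

def dem_difference_stats(dem_uav: np.ndarray, dem_ref: np.ndarray) -> dict:
    """Bias and RMSE (m) between a UAV-derived DEM and a reference model."""
    d = dem_uav - dem_ref
    d = d[np.isfinite(d)]                 # ignore nodata cells
    return {"bias_m": float(d.mean()),
            "rmse_m": float(np.sqrt(np.mean(d ** 2)))}
```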
Laser power beaming - transmitting electric power without wires via laser - has been demonstrated for kilowatt power
levels and kilometer distances. This paper reviews the demonstrated and projected capabilities and limitations of laser
power beaming, and analyzes the requirements for several application areas relevant to defense and security: unmanned
aerial vehicles (UAVs), communications relays, sensor networks, and field unit or forward base power.
The Holy Grail of autonomous ground robotics has been to make ground vehicles that behave like humans. Over the
years, as a community, we have realized the difficulty of this task, and we have backpedaled from the initial Holy Grail and have constrained and narrowed the domains of operation in order to get robotic systems fielded. This has led to
phrases such as "operation in structured environments" and "open-and-rolling terrain" in the context of autonomous
robot navigation. Unfortunately, constraining the problem in this way has only put off the inevitable, i.e., solving the
myriad of difficult robotics problems that we identified as long ago as the 1980s on the Autonomous Land Vehicle
Project and in most cases are still facing today. These "Tall Poles" have included but are not limited to navigation
through complex terrain geometry, navigation through thick vegetation, the detection of geometry-less obstacles such as
negative obstacles and thin obstacles, the ability to deal with diverse and dynamic environmental conditions, the ability
to function in dynamic and cluttered environments alongside other humans, and any combination of the above. This
paper is an overview of the progress we have made at Autonomous Systems over the last three years in trying to knock
down some of the tall poles remaining in the field of autonomous ground robotics.