Collaborative multi-sensor perception enables a sensor network to provide multiple views or observations of an environment and to collect those observations into a cohesive display. To do this, the observations must be intelligently fused. We briefly describe our existing approach to sensor fusion and selection, in which a weighted combination of observations is used to recognize a target object. The optimal weights control the fusion of multiple sensors while also selecting those that provide the most relevant or informative observations. In this paper, we propose a system that uses these optimal sensor fusion weights to control the display of observations to a human operator, providing enhanced situational awareness. The proposed system displays observations based on the physical locations of the sensors, enabling the operator to better understand where observations originate in the environment. The optimal sensor fusion weights are then used to scale the displayed observations, highlighting those that are informative and making less relevant observations easy for the operator to ignore.
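The two roles the abstract gives the fusion weights can be sketched as follows: the same weight vector both combines per-sensor observations into a single fused estimate and sets the display scale of each observation. This is a minimal illustrative sketch, not the authors' actual system; the function names, feature vectors, and weight values are all assumptions.

```python
import numpy as np

def fuse_observations(observations, weights):
    """Weighted combination of per-sensor feature vectors.

    Illustrative only: assumes each sensor contributes one
    fixed-length feature vector (rows of `observations`).
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize fusion weights to sum to 1
    return np.tensordot(w, np.asarray(observations, dtype=float), axes=1)

def display_scales(weights, min_scale=0.2, max_scale=1.0):
    """Map fusion weights to per-observation display scales, so that
    informative sensors are emphasized and low-weight ones fade out."""
    w = np.asarray(weights, dtype=float)
    w = (w - w.min()) / (w.max() - w.min() + 1e-12)  # rescale to [0, 1]
    return min_scale + w * (max_scale - min_scale)

# Three hypothetical sensors, each producing a 2-dimensional observation.
obs = np.array([[1.0, 0.0],
                [0.8, 0.2],
                [0.1, 0.9]])
weights = [0.6, 0.3, 0.1]   # assumed optimal fusion weights

fused = fuse_observations(obs, weights)   # single fused feature vector
scales = display_scales(weights)          # per-sensor display sizes
```

Under these assumed weights, the third sensor's observation is rendered at the minimum display scale, matching the abstract's goal of making less relevant observations easy to ignore.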
Team communication is crucial in multi-domain operations (MDOs), which require teammates to collaborate synchronously on complex tasks in dynamic, unknown environments. To enable effective communication in human-robot teams, the human teammate needs an intuitive interface, suited to the time-sensitive nature of the task, for communicating information to and from their robot teammate. Augmented reality (AR) technologies can provide such an interface by serving as a medium for both active and passive robot communication. In this paper, we propose a new virtual reality (VR) based framework for authoring AR visualizations and demonstrate its use to produce AR visualizations that facilitate high task performance in synchronized, time-dominant human-robot teaming. The framework uses a Unity-based VR simulator, run from the first-person point of view of the human teammate, that overlays AR features to virtually imitate the use of an AR headset in human-robot teaming scenarios. We then introduce novel AR visualizations that support strategic communication within teams by collecting information from each teammate and presenting it to the other in order to influence their decision making. Our proposed design framework and AR solution have the potential to impact any domain in which humans conduct synchronized multi-domain operations alongside autonomous robots in austere environments, including search and rescue, environmental monitoring, and homeland defense.