KEYWORDS: Signal to noise ratio, Mahalanobis distance, Sensors, Databases, Failure analysis, Distance measurement, Algorithm development, Data centers, Decision support systems
The Mahalanobis Taguchi System (MTS) is a relatively new tool in the vehicle health maintenance domain, but has
some distinct advantages in current multi-sensor implementations. The use of Mahalanobis Spaces (MS) allows the algorithm to characterize sensor signals and thereby identify machine behaviors. MTS is extremely powerful, with the caveat that the correct variables must be selected to form the MS. In this research work, 56 sensors monitor various
aspects of the vehicles. Typically, in the MTS process, identification of useful variables is preceded by validation of the measurement scale. However, the MTS approach does not directly include any mitigating steps should the
measurement scale not be validated. Existing work has performed outlier removal in construction of the MS, which can
lead to better validation. In our approach, we modify the outlier removal process with more liberal definitions of outliers
to better identify each variable's impact prior to identification of useful variables. This subtle change substantially lowered the false positive rate because additional variables were retained. Traditional MTS approaches identify useful variables only to the extent that they help identify the positive (abnormal) condition. The impact of
removing false negatives is not included. Initial results show our approach can reduce false positive values while still
maintaining complete fault identification for this vehicle data set.
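As a rough illustration of the distance computation and the more liberal outlier screen described above, the following Python sketch builds a Mahalanobis Space from healthy observations and trims outliers against a configurable cutoff. The function names, the scaling by the number of variables, and the cutoff value are assumptions for illustration, not the paper's exact procedure.

```python
# Minimal sketch of a Mahalanobis Space (MS) with a configurable outlier cutoff.
# Variable names and the cutoff value are illustrative assumptions.
import numpy as np

def build_ms(normal_data: np.ndarray):
    """Standardize the normal (healthy) observations and return the MS parameters."""
    mean = normal_data.mean(axis=0)
    std = normal_data.std(axis=0, ddof=1)
    z = (normal_data - mean) / std
    cov_inv = np.linalg.inv(np.cov(z, rowvar=False))
    return mean, std, cov_inv

def mahalanobis_distance(x: np.ndarray, mean, std, cov_inv) -> np.ndarray:
    """Scaled Mahalanobis distance MD = z' C^-1 z / k for each observation."""
    z = (x - mean) / std
    k = x.shape[1]
    return np.einsum('ij,jk,ik->i', z, cov_inv, z) / k

def trim_outliers(normal_data: np.ndarray, cutoff: float = 5.0) -> np.ndarray:
    """More liberal outlier removal: only drop normal observations whose MD
    exceeds a higher (assumed) cutoff, so less information is discarded before
    variable selection."""
    mean, std, cov_inv = build_ms(normal_data)
    md = mahalanobis_distance(normal_data, mean, std, cov_inv)
    return normal_data[md <= cutoff]
```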
KEYWORDS: Data fusion, Sensors, Data modeling, Information fusion, Systems modeling, Data processing, Logic, Facial recognition systems, Data analysis, Algorithm development
Situation modeling and threat prediction require higher levels of data fusion in order to provide actionable information.
Beyond the sensor data and sources the analyst has access to, the use of out-sourced and re-sourced data is becoming
common. Through the years, some common frameworks have emerged for dealing with information fusion—perhaps the
most ubiquitous being the JDL Data Fusion Group and their initial 4-level data fusion model. Since these initial
developments, numerous models of information fusion have emerged, hoping to better capture the human-centric
process of data analysis within a machine-centric framework. 21st Century Systems, Inc. has developed Fusion with
Uncertainty Reasoning using Nested Assessment Characterizer Elements (FURNACE) to address challenges of high
level information fusion and handle bias, ambiguity, and uncertainty (BAU) for Situation Modeling, Threat Modeling,
and Threat Prediction. It combines JDL fusion levels with nested fusion loops and state-of-the-art data reasoning. Initial
research has shown that FURNACE is able to reduce BAU and improve the fusion process by allowing high level
information fusion (HLIF) to affect lower levels without the double counting of information or other biasing issues. The
initial FURNACE project was focused on the underlying algorithms to produce a fusion system able to handle BAU and
repurposed data in a cohesive manner. FURNACE supports analysts’ efforts to develop situation models, threat models,
and threat predictions to increase situational awareness of the battlespace. FURNACE will not only revolutionize the
military intelligence realm, but also benefit the larger homeland defense, law enforcement, and business intelligence
markets.
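The abstract does not give implementation detail; as a purely illustrative sketch of how a higher fusion level can feed an estimate back to a lower level without double counting, the snippet below tags each piece of evidence with a source identifier and skips evidence a node has already consumed. All class names, weights, and the simple weighted-update rule are assumptions, not FURNACE internals.

```python
# Illustrative only: feedback from high-level fusion to low-level fusion with
# provenance tracking to avoid double counting. Names are assumptions.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    source_id: str
    value: float          # e.g., a likelihood that a track is hostile
    weight: float = 1.0

@dataclass
class FusionNode:
    used_sources: set = field(default_factory=set)
    belief: float = 0.5

    def fuse(self, evidence: list) -> float:
        """Weighted update that skips evidence already folded into this node."""
        for e in evidence:
            if e.source_id in self.used_sources:
                continue                      # avoid double counting
            self.used_sources.add(e.source_id)
            self.belief += e.weight * (e.value - self.belief)
            self.belief = min(max(self.belief, 0.0), 1.0)
        return self.belief

# Nested loop: the level-2 (situation) node produces a prior that is handed back
# to the level-1 (object) node as new evidence, tagged with its own source id so
# it can never be re-counted on a later pass.
object_node, situation_node = FusionNode(), FusionNode()
object_node.fuse([Evidence("radar_01", 0.8), Evidence("eo_03", 0.6)])
situation_belief = situation_node.fuse([Evidence("object_node", object_node.belief)])
object_node.fuse([Evidence("situation_node", situation_belief, weight=0.5)])
```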
Analysts are faced with mountains of data, and finding that relevant piece of information is the proverbial needle in a
haystack, only with dozens of haystacks. Analysis tools that facilitate identifying causal relationships across multiple
data sets are sorely needed. 21st Century Systems, Inc. (21CSi) has initiated research called Causal-View, a causal data-mining visualization tool, to address this challenge. Causal-View is built on an agent-enabled framework. Much of the
processing that Causal-View will do is in the background. When a user requests information, Data Extraction Agents
launch to gather information. This initial search is a raw, Monte Carlo type search designed to gather everything
available that may have relevance to an individual, location, associations, and more. This data is then processed by Data-
Mining Agents. The Data-Mining Agents are driven by user supplied feature parameters. If the analyst is looking to see
if the individual frequents a known haven for insurgents, the analyst may request information on the individual's last known locations. Or, if
the analyst is trying to see if there is a pattern in the individual's contacts, the mining agent can be instructed with the
type and relevance of the information fields to look at. The same data is extracted from the database, but the Data
Mining Agents customize the feature set to determine the causal relationships the user is interested in. At this point, Hypothesis Generation and Data Reasoning Agents take over to form conditional hypotheses about the data and to pare the
data, respectively. The newly formed information is then published to the agent communication backbone of Causal-
View to be displayed. Causal-View provides causal analysis tools to fill the gaps in the causal chain. We present here the
Causal-View concept, the initial research into data mining tools that assist in forming the causal relationships, and our
initial findings.
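A minimal sketch of the agent pipeline described above is given below, assuming an in-memory stand-in for the agent communication backbone; the agent functions, record fields, and feature parameters are illustrative only.

```python
# Minimal pipeline sketch of the agent flow described above. The agent
# functions, feature parameters, and the in-memory "backbone" are assumptions.
from typing import Callable

class Backbone:
    """Stand-in for the agent communication backbone: a list of subscribers."""
    def __init__(self):
        self.subscribers: list = []
    def publish(self, message: dict):
        for callback in self.subscribers:
            callback(message)

def data_extraction_agent(query: str) -> list:
    # Broad, unfiltered gather: everything possibly relevant to the query.
    return [{"entity": query, "field": "last_known_location", "value": "grid 12A"},
            {"entity": query, "field": "contact", "value": "person_42"}]

def data_mining_agent(records: list, feature_params: set) -> list:
    # Keep only the fields the analyst asked about.
    return [r for r in records if r["field"] in feature_params]

def hypothesis_agent(records: list) -> dict:
    # Form a conditional hypothesis from whatever evidence survived mining.
    return {"hypothesis": "possible causal link", "evidence": records}

backbone = Backbone()
backbone.subscribers.append(lambda msg: print("DISPLAY:", msg))
raw = data_extraction_agent("individual_007")
mined = data_mining_agent(raw, feature_params={"contact"})
backbone.publish(hypothesis_agent(mined))
```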
Current Army logistical systems and databases contain massive amounts of data that need an effective method to extract
actionable information. The databases do not contain root cause and case-based analysis needed to diagnose or predict
breakdowns. A system is needed to find data from as many sources as possible, process it in an integrated fashion, and
disseminate information products on the readiness of the fleet vehicles. 21st Century Systems, Inc. introduces the Agent-
Enabled Logistics Enterprise Intelligence System (AELEIS) tool, designed to assist logistics analysts with assessing the
availability and prognostics of assets in the logistics pipeline. AELEIS extracts data from multiple, heterogeneous data
sets. This data is then aggregated and mined for data trends. Finally, data reasoning tools and prognostics tools evaluate
the data for relevance and potential issues. Multiple types of data mining tools may be employed to extract the data, and an information reasoning capability determines which tools are needed and how to apply them to extract information. This can be
visualized as a push-pull system where data trends fire a reasoning engine to search for corroborating evidence and then
integrate the data into actionable information. The architecture decides which reasoning engine to use (e.g., it may start with a rule-based method but, if needed, go to condition-based reasoning, and even to a model-based reasoning engine for certain types of equipment). Initial results show that AELEIS is able to indicate to the user potential fault conditions
and root-cause information mined from a database.
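The escalating reasoning cascade might look like the following sketch, in which a mined data trend is handed to a rule-based engine first and escalates to condition-based and then model-based reasoning only when no conclusion is reached; the engine logic and field names are placeholders, not AELEIS internals.

```python
# Sketch of the escalating reasoning cascade described above (rule-based first,
# then condition-based, then model-based). Engine logic here is placeholder.
def rule_based(trend: dict):
    # e.g., a hard threshold rule on a mined trend
    if trend.get("oil_pressure_drop", 0) > 20:
        return "Rule: likely oil pump degradation"
    return None

def condition_based(trend: dict):
    # combines several condition indicators before deciding
    indicators = [trend.get(k, 0) > 0 for k in ("vibration_rise", "temp_rise")]
    return "Condition: bearing wear suspected" if all(indicators) else None

def model_based(trend: dict):
    # fall back to an equipment-specific model (placeholder)
    return "Model: no fault isolated; flag for inspection"

def assess(trend: dict) -> str:
    """Push-pull loop: a data trend fires the cheapest engine first and
    escalates only when no corroborated conclusion is reached."""
    for engine in (rule_based, condition_based, model_based):
        result = engine(trend)
        if result is not None:
            return result

print(assess({"vibration_rise": 1, "temp_rise": 1}))
```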
In order for First Responder Command and Control personnel to visualize incidents at urban building locations, DHS
sponsored a small business research program to develop a tool to visualize 3D building interiors and movement of First
Responders on site. 21st Century Systems, Inc. (21CSI), has developed a toolkit called Hierarchical Grid Referenced
Normalized Display (HiGRND). HiGRND utilizes three components to provide a full spectrum of visualization tools to
the First Responder. First, HiGRND visualizes the structure in 3D. Utilities in the 3D environment allow the user to
switch between views (2D floor plans, 3D spatial, evacuation routes, etc.) and manually edit fast-changing environments.
HiGRND accepts CAD drawings and 3D digital objects and renders these in the 3D space. Second, HiGRND has a First
Responder tracker that uses the transponder signals from First Responders to locate them in the virtual space. We use the
movements of the First Responder to map the interior of structures. Finally, HiGRND can turn 2D blueprints into 3D
objects. The 3D extruder extracts walls, symbols, and text from scanned blueprints to create the 3D mesh of the building.
HiGRND increases the situational awareness of First Responders and allows them to make better, faster decisions in
critical urban situations.
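As a hedged illustration of the extrusion step, the sketch below turns a single 2D wall segment recovered from a blueprint into a vertical quad of an assumed wall height; the data layout and height value are assumptions, not HiGRND's actual mesh format.

```python
# Illustrative sketch of the blueprint-to-3D "extrusion" step: each 2D wall
# segment becomes a vertical quad (two triangles) of an assumed height.
from typing import List, Tuple

Point2D = Tuple[float, float]
Vertex3D = Tuple[float, float, float]

def extrude_wall(p0: Point2D, p1: Point2D, height: float = 3.0):
    """Return 4 vertices and 2 triangles (vertex indices) for one wall quad."""
    (x0, y0), (x1, y1) = p0, p1
    vertices: List[Vertex3D] = [
        (x0, y0, 0.0), (x1, y1, 0.0),        # floor edge
        (x1, y1, height), (x0, y0, height),  # ceiling edge
    ]
    triangles = [(0, 1, 2), (0, 2, 3)]
    return vertices, triangles

# Walls recovered from the scanned blueprint would be fed in as 2D segments:
verts, tris = extrude_wall((0.0, 0.0), (5.0, 0.0))
```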
KEYWORDS: Neurons, Data fusion, Databases, Logic, Software development, Information fusion, Data modeling, Data integration, Classification systems, Integration
Hybrid Intrinsic Cellular Inference Network (HICIN) is designed for battlespace decision support applications. We
developed an automatic method of generating hypotheses for an entity-attribute classifier. The capability and effectiveness of a domain-specific ontology were exploited to automatically generate categories for data classification.
Heterogeneous data is clustered using an Adaptive Resonance Theory (ART) inference engine on a sample (unclassified)
data set. The data set is the Lahman baseball database. The actual data is immaterial to the architecture; however,
parallels in the data can be easily drawn (i.e., "Team" maps to organization, "Runs scored/allowed" to Measure of
organization performance (positive/negative), "Payroll" to organization resources, etc.). Results show that HICIN
classifiers create known inferences from the heterogeneous data. These inferences are not explicitly stated in the
ontological description of the domain and are strictly data driven. HICIN uses data uncertainty handling to reduce errors
in the classification. The uncertainty handling is based on subjective logic. The belief mass allows evidence from
multiple sources to be mathematically combined to increase or discount an assertion. In military operations, the ability to
reduce uncertainty will be vital in the data fusion operation.
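For concreteness, the belief-mass combination can be illustrated with subjective logic's cumulative (consensus) fusion of two opinions, where each opinion is a (belief, disbelief, uncertainty) triple that sums to one. The example values are illustrative, and the operator shown is the standard cumulative fusion rule, not necessarily HICIN's exact variant.

```python
# Sketch of subjective-logic cumulative (consensus) fusion of two opinions,
# the kind of belief-mass combination the abstract refers to. Opinions are
# (belief, disbelief, uncertainty) triples that sum to 1; values are examples.
def cumulative_fuse(op_a, op_b):
    b_a, d_a, u_a = op_a
    b_b, d_b, u_b = op_b
    k = u_a + u_b - u_a * u_b          # normalization (assumes not both u == 0)
    b = (b_a * u_b + b_b * u_a) / k
    d = (d_a * u_b + d_b * u_a) / k
    u = (u_a * u_b) / k
    return b, d, u

# Two sources both lean toward the same assertion; fusing them raises belief
# and shrinks uncertainty.
print(cumulative_fuse((0.6, 0.1, 0.3), (0.5, 0.2, 0.3)))
```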
Given the vast amount of image intelligence utilized in support of planning and executing military
operations, a passive automated image processing capability for target identification is urgently required.
Furthermore, transmitting large image streams from remote locations would quickly consume the available bandwidth (BW), precipitating the need for processing to occur at the sensor location. This paper addresses the
problem of automatic target recognition for battle damage assessment (BDA). We utilize an Adaptive
Resonance Theory approach to cluster templates of target buildings. The results show that the network
successfully distinguishes targets from non-targets in a virtual test bed environment.
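A minimal ART-1-style clustering sketch of the template approach is shown below; the vigilance parameter, choice-function form, and binary "templates" are illustrative assumptions rather than the paper's configuration.

```python
# Minimal ART-1-style clustering sketch for binary target templates. The
# vigilance value and template vectors are illustrative assumptions.
import numpy as np

def art1_cluster(patterns, vigilance=0.7, beta=0.5):
    """Assign each binary pattern to a category, creating categories as needed."""
    weights = []                      # one binary prototype per category
    labels = []
    for p in patterns:
        p = np.asarray(p, dtype=bool)
        chosen = None
        # score existing categories by choice function, best first
        order = sorted(range(len(weights)),
                       key=lambda j: (p & weights[j]).sum() / (beta + weights[j].sum()),
                       reverse=True)
        for j in order:
            match = (p & weights[j]).sum() / max(p.sum(), 1)
            if match >= vigilance:    # vigilance test passed: resonance
                weights[j] = p & weights[j]
                chosen = j
                break
        if chosen is None:            # no resonance: open a new category
            weights.append(p.copy())
            chosen = len(weights) - 1
        labels.append(chosen)
    return labels

# e.g., crude binary "templates": damaged vs. intact building signatures
print(art1_cluster([[1, 1, 0, 0, 1], [1, 1, 0, 0, 0], [0, 0, 1, 1, 1]]))
```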
Many environments challenge human capabilities (e.g., situational stress, waiting, fatigue from long duty hours, etc.).
The capability to measure and model the individual's human performance is an important first step in determining a
person's or group's effectiveness in a particular situation. Human bias toward particular climates, favorite routines,
capabilities and limitations strongly influence overall performance. However, the mission team and relationships
amongst the team members add a very important dimension to the performance during operations or simulations using
models of humans. This paper presents the Grid-Group Cm-α method for predicting performance considering both
environmental and cultural factors. The prediction method is based on Hooke's law, which relates the mechanical strain on a solid object to the applied physical stresses. Grid-Group Cm-α treats the specific cultural and environmental factors of a mission as applied stress on the collection of individuals, which plays the role of the solid object. The collection of
individuals has a given set of properties based on their culture and physical capacities. The resulting strain is estimated
from these parameters and can be used to optimize group selection for mission objectives.
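A toy version of the Hooke's-law analogy, in which predicted strain is the ratio of summed mission stressors to an effective group "modulus", is sketched below; the factor names, weights, and modulus value are invented for illustration and are not the paper's calibration.

```python
# Illustrative sketch of the Hooke's-law analogy: treat mission stressors as
# applied stress and the group's cultural/physical capacity as an effective
# elastic modulus, so predicted strain = stress / modulus. Values are assumed.
def predicted_strain(stressors: dict, group_modulus: float) -> float:
    """Hooke's law analogue: strain (performance degradation) = stress / E."""
    applied_stress = sum(stressors.values())
    return applied_stress / group_modulus

# Example: a hot climate plus long duty hours acting on a moderately resilient team.
stressors = {"climate": 2.0, "duty_hours": 3.5, "combat_stress": 1.5}
print(predicted_strain(stressors, group_modulus=10.0))   # lower strain = better
```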
This work extends existing agent-based target movement prediction to include key ideas of behavioral inertia, steady
states, and catastrophic change from existing psychological, sociological, and mathematical work. Existing target
prediction work inherently assumes a single steady state for target behavior, and attempts to classify behavior based on a
single emotional state set. The enhanced, emotional-based target prediction maintains up to three distinct steady states,
or typical behaviors, based on a target's operating conditions and observed behaviors. Each steady state has an
associated behavioral inertia, similar to the standard deviation of behaviors within that state. The enhanced prediction
framework also allows steady state transitions through catastrophic change and individual steady states could be used in
an offline analysis with additional modeling efforts to better predict anticipated target reactions.
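The bookkeeping for up to three steady states with per-state behavioral inertia could resemble the sketch below, which assigns each observed behavior to the state it fits within a chosen number of standard deviations and opens a new state (a catastrophic change) otherwise; the scalar behavior feature and thresholds are assumptions.

```python
# Sketch of maintaining up to three behavioral steady states, each with an
# associated inertia (standard deviation of behaviors in that state). The
# single scalar "behavior" feature and the thresholds are assumptions.
import statistics

class SteadyStateTracker:
    MAX_STATES = 3

    def __init__(self, inertia_k: float = 2.0):
        self.states = []              # list of observed behaviors per state
        self.inertia_k = inertia_k    # how many sigmas still "fit" a state

    def observe(self, behavior: float) -> int:
        """Return the index of the steady state this observation falls into,
        opening a new state (catastrophic change) when none fits."""
        for i, obs in enumerate(self.states):
            mean = statistics.fmean(obs)
            inertia = statistics.pstdev(obs) or 1.0
            if abs(behavior - mean) <= self.inertia_k * inertia:
                obs.append(behavior)
                return i
        if len(self.states) < self.MAX_STATES:
            self.states.append([behavior])
            return len(self.states) - 1
        # all three states exist and none fits: keep the nearest one
        nearest = min(range(len(self.states)),
                      key=lambda i: abs(behavior - statistics.fmean(self.states[i])))
        self.states[nearest].append(behavior)
        return nearest
```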
A culturally diverse group of people is now participating in military multinational coalition operations (e.g., combined
air operations center, training exercises such as Red Flag at Nellis AFB, NATO AWACS), as well as in extreme
environments. Human biases and routines, capabilities, and limitations strongly influence overall system performance,
whether during operations or simulations using models of humans. Many missions and environments challenge human
capabilities (e.g., combat stress, waiting, fatigue from long duty hours or tour of duty). This paper presents a team
selection algorithm based on an evolutionary algorithm (EA). The main difference between this algorithm and a standard EA is that a
new form of objective function is used that incorporates the beliefs and uncertainties of the data. Preliminary results
show that this selection algorithm will be very beneficial for very large data sets with multiple constraints and
uncertainties. This algorithm will be utilized in a military unit selection tool.
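A compact sketch of an evolutionary team-selection loop whose objective discounts each candidate's skill by the uncertainty of that assessment is given below; the population size, mutation scheme, and discounting form are assumptions rather than the paper's objective function.

```python
# Sketch of an evolutionary team-selection loop whose objective discounts each
# candidate's skill score by the uncertainty of that score. All parameters and
# the discounting form are illustrative assumptions.
import random

CANDIDATES = [("a", 0.9, 0.4), ("b", 0.7, 0.1), ("c", 0.8, 0.2),
              ("d", 0.6, 0.05), ("e", 0.85, 0.3)]   # (name, skill, uncertainty)
TEAM_SIZE = 3

def fitness(team):
    # expected contribution discounted by how uncertain each assessment is
    return sum(skill * (1.0 - uncertainty) for _, skill, uncertainty in team)

def mutate(team):
    out = list(team)
    out[random.randrange(TEAM_SIZE)] = random.choice(CANDIDATES)
    # reject mutations that duplicate a team member
    return out if len({m[0] for m in out}) == TEAM_SIZE else team

def evolve(generations=200, pop_size=20):
    population = [random.sample(CANDIDATES, TEAM_SIZE) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(population, key=fitness)

print([name for name, _, _ in evolve()])
```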
To facilitate decision making tasks it is necessary to be able to "see" the situation. An enormous array of intelligence
gathering, database, and sensor sources of information are available. Methods for visualizing the information must be
established and information presented in such a way that human attention is captured and maintained on the most critical
aspects of the information. Visualizations need to adapt to the changing circumstances to show the most relevant
information at that time. We are developing a system called Holistic Analysis, Visualization, & Characterization
Assessment Tool (HAVCAT) that uses intelligent agents that interact with the user to provide the correct information at
the right time. This cutting-edge system will enable visualization researchers to investigate techniques for adjusting visualizations based on user performance. HAVCAT will employ domain ontologies to determine relationships within
the data. The HAVCAT evidence reasoning agent distills the data and extracts the most pertinent actions or
consequences. This paper describes the HAVCAT concept, as well as research issues related to the development of
HAVCAT and techniques for directing user attention.