This PDF file contains the front matter associated with SPIE Proceedings Volume 8408, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
We demonstrate insider threat detection for determining when the behavior of a computer user is suspicious or different
from his or her normal behavior. This is accomplished by combining features extracted from text, emails, and blogs that
are associated with the user. These sources can be characterized using QUEST, DANCER, and MenTat to extract
features; however, some of these features are still in text form. We show how to convert these features into numerical
form and characterize them using parametric and non-parametric statistics. These features are then used as input into a
Random Forest classifier that is trained to recognize when the user's behavior is suspicious or different from normal
(off-nominal). Active authentication (user identification) is also demonstrated using the features and classifiers derived
in this work. We also introduce a novel concept for remotely monitoring user behavior indicator patterns displayed as an
infrared overlay on the computer monitor; the overlay is invisible to the user, but a webcam fitted with a narrow
pass-band filter can clearly distinguish it. The results of our analysis are presented.
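As a minimal sketch of the classification stage, and assuming scikit-learn with synthetic stand-ins for the text-derived features (the QUEST/DANCER/MenTat outputs are not modeled here), the off-nominal detector might look like:

```python
# Minimal sketch: a Random Forest flagging off-nominal user behavior.
# The feature matrix is a synthetic stand-in for the numerical features
# derived from the user's text, emails, and blogs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_nominal = rng.normal(0.0, 1.0, size=(500, 8))    # normal behavior
X_off = rng.normal(2.0, 1.5, size=(50, 8))         # suspicious behavior
X = np.vstack([X_nominal, X_off])
y = np.array([0] * 500 + [1] * 50)                 # 1 = off-nominal

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, y)

new_session = rng.normal(2.0, 1.5, size=(1, 8))    # one unseen session
print(clf.predict(new_session))                    # -> [1], flagged
print(clf.predict_proba(new_session))              # class probabilities
```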
The advent of cyber threats has created a need for new network planning, design, architecture, operations, control,
situational awareness, management, and maintenance paradigms. Primary considerations include the ability to assess
the cyber attack resiliency of the network and to rapidly detect, isolate, and operate during deliberate simultaneous attacks
against the network nodes and links. Legacy network planning relied on automatic protection of a network in the event
of a single fault or a very few simultaneous faults in mesh networks, but in the future it must be augmented to include
improved network resiliency and vulnerability awareness to cyber attacks. Designing a resilient network requires
methods to define and quantify network resiliency to attacks, together with new optimization strategies for maintaining
operations in the midst of these newly emerging cyber threats. Ways to quantify
resiliency, and its use in visualizing cyber vulnerability awareness and in identifying node or link criticality, are
presented in the current work, as well as a methodology of differential network hardening based on the criticality profile
of cyber network components.
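The abstract does not commit to a specific criticality metric; as one plausible instantiation (betweenness centrality over a toy mesh, using networkx), node criticality and a crude resiliency measure under simultaneous node attack could be sketched as:

```python
# Sketch: node criticality via betweenness centrality (one possible
# metric; the paper's own definition may differ), plus a toy resiliency
# measure: the surviving giant-component fraction after an attack.
import networkx as nx

G = nx.random_regular_graph(d=4, n=30, seed=1)        # toy mesh network

criticality = nx.betweenness_centrality(G)
ranked = sorted(criticality, key=criticality.get, reverse=True)
print("Most critical nodes:", ranked[:5])

# Differential hardening: protect the top of the criticality profile;
# here we instead *remove* those nodes to simulate a simultaneous attack.
G2 = G.copy()
G2.remove_nodes_from(ranked[:5])
giant = max(nx.connected_components(G2), key=len)
print("Surviving fraction:", len(giant) / G.number_of_nodes())
```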
One of the most vexing challenges of working with graphical structures is that most algorithms scale poorly
as the graph becomes very large. The computation is extremely expensive even for polynomial algorithms,
thus making it desirable to devise fast approximation algorithms. We herein propose a framework using
advanced tools [1-6] from random graph theory and spectral graph theory to address the quantitative analysis of the
structure and dynamics of large-scale networks. This framework enables one to carry out analytic computations of
observable network structures and to capture the most relevant and refined quantities of real-world networks.
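As a small illustration of the spectral-graph side of such a framework (not the authors' specific computations), the Laplacian spectrum of a toy scale-free network already yields structural observables such as algebraic connectivity:

```python
# Sketch: Laplacian spectrum of a toy scale-free network. The second-
# smallest eigenvalue (algebraic connectivity) is one analytically
# tractable structural observable.
import networkx as nx
import numpy as np

G = nx.barabasi_albert_graph(500, 3, seed=0)   # toy large-scale network
L = nx.laplacian_matrix(G).todense()           # dense is fine at toy scale
vals = np.linalg.eigvalsh(L)                   # ascending eigenvalues

print("Algebraic connectivity (lambda_2):", round(float(vals[1]), 4))
# For truly large graphs, replace the dense solve with a sparse
# eigensolver (e.g., scipy.sparse.linalg.eigsh) on a few eigenvalues.
```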
We present a framework for applying Principal Component Analysis (PCA) to automatically obtain meaningful
metrics from intrusion detection measurements. In particular, we report the progress made in applying PCA to analyze
the behavioral measurements of malware and provide some preliminary results in selecting dominant attributes from an
arbitrary number of malware attributes. The results will be useful in formulating an optimal detection threshold in the
principal component space, which can both validate and augment existing malware classifiers.
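A minimal sketch of the attribute-selection step, assuming scikit-learn and random stand-in data for the behavioral measurements:

```python
# Sketch: PCA over malware behavioral attributes; components retaining
# 90% of variance define the principal component space, and first-
# component loadings give one simple notion of "dominant" attributes.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))              # 200 samples x 40 attributes

X_std = StandardScaler().fit_transform(X)   # PCA is scale-sensitive
pca = PCA(n_components=0.90)                # keep 90% of the variance
scores = pca.fit_transform(X_std)

print("Components kept:", pca.n_components_)
loadings = np.abs(pca.components_[0])
print("Dominant attributes:", np.argsort(loadings)[::-1][:5])
```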
Many threats in the real world can be related to the activity of persons on the internet. Internet surveillance aims to predict
and prevent attacks and to assist in finding suspects based on information from the web. However, the amount of data on
the internet is increasing rapidly, and monitoring many websites is time consuming. In this paper, we present a novel
method to automatically monitor trends and find anomalies on the internet. The system was tested on Twitter data. The
results showed that it can successfully recognize abnormal changes in activity or emotion.
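The abstract does not detail the detection method; as a baseline for what "abnormal changes in activity" can mean operationally, a rolling z-score over hourly tweet counts is one simple sketch:

```python
# Sketch: flagging abnormal activity changes with a rolling z-score.
# Counts are synthetic; the threshold (4 sigma) is a free parameter.
import numpy as np

counts = np.random.default_rng(1).poisson(100, size=96).astype(float)
counts[80] = 400                          # injected burst of activity

window = 24                               # look-back window (hours)
for t in range(window, len(counts)):
    hist = counts[t - window:t]
    z = (counts[t] - hist.mean()) / (hist.std() + 1e-9)
    if abs(z) > 4:
        print(f"anomaly at t={t}: count={counts[t]:.0f}, z={z:.1f}")
```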
As attackers get more coordinated and advanced in cyber attacks, cyber assets are required to
have much more resilience, control effectiveness, and collaboration in networks. Such a requirement makes it essential to take a comprehensive and objective approach for measuring the individual and
relative performances of cyber security assets in network nodes. To this end, this paper presents four techniques for measuring the relative importance of cyber assets more comprehensively and objectively by jointly considering the main variables of risk assessment (e.g., threats, vulnerabilities),
multiple attributes (e.g., resilience, control, and influence), network connectivity and controllability among collaborative cyber assets in networks. In the first technique, a Bayesian network is used to
include the random variables for control, recovery, and resilience attributes of nodes, in addition to the random variables of threats, vulnerabilities, and risk. The second technique shows how graph matching and coloring can be utilized to form collaborative pairs of nodes to shield together against threats and vulnerabilities. The third technique ranks the security assets of nodes by incorporating multiple weights
and thresholds of attributes into a decision-making algorithm. In the fourth technique, the hierarchically well-separated tree is enhanced to first identify critical nodes of a network with respect to their attributes
and network connectivity, and then to select some nodes as driver nodes for network controllability.
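As a toy rendering of the third technique only (ranking via weights and thresholds over multiple attributes), with attribute names, weights, and thresholds that are purely illustrative:

```python
# Sketch of the third technique: rank node security assets by a weighted
# attribute sum, discarding nodes that fail any per-attribute threshold.
weights = {"resilience": 0.5, "control": 0.3, "influence": 0.2}
thresholds = {"resilience": 0.2, "control": 0.1, "influence": 0.0}

nodes = {
    "n1": {"resilience": 0.9, "control": 0.4, "influence": 0.7},
    "n2": {"resilience": 0.3, "control": 0.8, "influence": 0.2},
    "n3": {"resilience": 0.1, "control": 0.9, "influence": 0.9},  # fails
}

def score(attrs):
    if any(attrs[a] < thresholds[a] for a in weights):
        return None                     # rejected by threshold test
    return sum(weights[a] * attrs[a] for a in weights)

scored = {n: score(a) for n, a in nodes.items()}
ranking = sorted(((n, s) for n, s in scored.items() if s is not None),
                 key=lambda kv: kv[1], reverse=True)
print(ranking)                          # n1 ranks above n2; n3 excluded
```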
Attacks aim at exploiting vulnerabilities of a program to gain control over its execution. By
analyzing the program semantics, relational integrity, and execution paths, this paper presents a relational-integrity
approach to enhance the effectiveness of intrusion detection and prevention systems against
malicious program traits. The basic idea is to first identify the main relational properties of program
statements with respect to variables and operations like load and store and, then, to decide which relations
could be checked through program statements or the guards inserted at the vulnerable points of the program.
These relational statements are represented by ordered binary decision diagrams that are constructed for
the entire program as well as the overlapping code partitions. When a host-based intrusion detection
system monitors the execution of a program by checking the system calls of a process or the function calls
of a driver, it may generate alerts for potential exploits. This paper also addresses data aggregation of
alerts by considering their attributes and various probability distribution functions, where Dempster's
rule of combination is extended to aggregate data for dependent evidence as well.
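Dempster's rule of combination itself is concrete enough to sketch; the fragment below implements the classical rule for two independent alert sources over a two-element frame (the paper's extension to dependent evidence is not reproduced here, and the mass values are illustrative):

```python
# Sketch: classical Dempster's rule over the frame {benign, exploit}.
# Mass assignments are illustrative alert-confidence values.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions keyed by frozenset focal elements."""
    out, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            out[inter] = out.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb            # mass on empty intersections
    return {k: v / (1.0 - conflict) for k, v in out.items()}

B, E = frozenset({"benign"}), frozenset({"exploit"})
theta = B | E                              # full frame = ignorance
m_hids = {E: 0.6, theta: 0.4}              # host-based IDS alert
m_other = {E: 0.5, B: 0.2, theta: 0.3}     # second, independent sensor
print(combine(m_hids, m_other))            # exploit mass rises to ~0.77
```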
Social media networks make up a large percentage of the content available on the Internet, and most of
the time users spend online today is spent interacting with them. All of the seemingly small pieces of
information added by billions of people result in an enormous, rapidly changing dataset. Searching,
correlating, and understanding billions of individual posts is a significant technical problem; even the
data from a single site such as Twitter can be difficult to manage. In this paper, we present Coalmine, a
social network data-mining system. We describe the overall architecture of Coalmine, including the
capture, storage and search components. We also describe our experience with pulling 150-350 GB of
Twitter data per day through their REST API. Specifically, we discuss our experience with the
evolution of the Twitter data APIs from 2011 to 2012 and present strategies for maximizing the amount
of data collected. Finally, we describe our experiences looking for evidence of botnet command and
control channels and examining patterns of SPAM in the Twitter dataset.
Phishing website analysis is largely still a time-consuming manual process of discovering potential
phishing sites, verifying if suspicious sites truly are malicious spoofs and if so, distributing their URLs
to the appropriate blacklisting services. Attackers increasingly use sophisticated systems for bringing
phishing sites up and down rapidly at new locations, making automated response essential. In this
paper, we present a method for rapid, automated detection and analysis of phishing websites. Our
method relies on near real-time gathering and analysis of URLs posted on social media sites. We fetch
the pages pointed to by each URL and characterize each page with a set of easily computed values
such as number of images and links. We also capture a screen-shot of the rendered page image,
compute a hash of the image and use the Hamming distance between these image hashes as a form of
visual comparison. We provide initial results demonstrating the feasibility of our techniques by
comparing legitimate sites to known fraudulent versions from Phishtank.com, by actively introducing
a series of minor changes to a phishing toolkit captured in a local honeypot and by performing some
initial analysis on a set of over 2.8 million URLs posted to Twitter over a 4-day period in August 2011. We
discuss the issues encountered during our testing, such as the resolvability and legitimacy of URLs posted
on Twitter, the data sets used, the characteristics of the phishing sites we discovered, and our plans for
future work.
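As a sketch of the visual-comparison step, an "average hash" of two rendered-page screenshots compared by Hamming distance (file names are hypothetical, and the paper's exact hash function is not specified in the abstract):

```python
# Sketch: perceptual average-hash of page screenshots plus Hamming
# distance. Small distances suggest near-identical rendered pages,
# a hint that a suspect page visually spoofs a legitimate one.
from PIL import Image

def average_hash(path, size=8):
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    return [1 if p > avg else 0 for p in pixels]     # 64-bit hash

def hamming(h1, h2):
    return sum(a != b for a, b in zip(h1, h2))

h_legit = average_hash("bank_login_legit.png")       # hypothetical files
h_suspect = average_hash("bank_login_suspect.png")
print("Hamming distance:", hamming(h_legit, h_suspect))
```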
Network defense has more technologies available for purchase today than ever before. As the number of threats increases,
organizations are deploying multiple defense technologies to defend their networks. For instance, an enterprise network
boundary often implements multiple network defense appliances, some with overlapping capabilities (e.g., firewalls,
IDS/IPS, DNS Defense). These appliances are applied in a serial fashion to create a chain of network processing
specifically designed to drop bad traffic from the network. In these architectures, once a packet is dropped by an
appliance, subsequent appliances do not process it. This introduces significant limitations: (1) stateful appliances will
maintain an internal state which differs from network reality; (2) the network manager cannot determine, or unit test,
how each appliance would have treated each packet; (3) the appliance "votes" cannot be combined to achieve higher-level
functionality. To address these limitations, we have developed a novel, backwards-compatible Parallel Architecture
for Network Defense Appliances (PANDA). Our approach allows every appliance to process all network traffic and cast
a vote to drop or allow each packet. This "crowd-sourcing" approach allows the network designer to take full advantage
of each appliance, understand how each appliance is behaving, and achieve new collaborative appliance behavior.
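A toy sketch of the voting idea (the appliance logic is invented for illustration, not any vendor's actual behavior): every appliance sees every packet, the votes are recorded for unit testing, and a policy combines them into a verdict:

```python
# Sketch: PANDA-style parallel voting. All appliances process every
# packet; a combiner turns their votes into a verdict, and the recorded
# votes let the operator unit-test each appliance's behavior.
def firewall(pkt):    return "drop" if pkt["port"] == 23 else "allow"
def ids(pkt):         return "drop" if "exploit" in pkt["payload"] else "allow"
def dns_defense(pkt): return "drop" if pkt["domain"].endswith(".bad") else "allow"

APPLIANCES = [firewall, ids, dns_defense]

def decide(pkt, policy="any-drop"):
    votes = [a(pkt) for a in APPLIANCES]
    if policy == "any-drop":          # conservative: one veto drops
        verdict = "drop" if "drop" in votes else "allow"
    else:                             # simple majority
        verdict = "drop" if votes.count("drop") * 2 > len(votes) else "allow"
    return verdict, votes

pkt = {"port": 80, "payload": "GET /exploit.php", "domain": "cdn.example"}
print(decide(pkt))                    # ('drop', ['allow', 'drop', 'allow'])
```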
The Joint Directors of Labs Data Fusion Process Model (JDL Model) provides a framework for how to handle sensor
data to develop higher levels of inference in a complex environment. Beginning from a call to leverage data fusion
techniques in intrusion detection, there have been a number of advances in the use of data fusion algorithms in this subdomain
of cyber security. While it is tempting to jump directly to situation-level or threat-level refinement (levels 2 and
3) for more exciting inferences, a proper fusion process starts with lower levels of fusion in order to provide a basis for
the higher fusion levels. The process begins with first order entity extraction, or the identification of important entities
represented in the sensor data stream. Current cyber security operational tools and their associated data are explored for
potential exploitation, identifying the first order entities that exist in the data and the properties of these entities that are
described by the data. Cyber events that are represented in the data stream are added to the first order entities as their
properties. This work explores typical cyber security data and the inferences that can be made at the lower fusion levels
(0 and 1) with simple metrics. Depending on the types of events that are expected by the analyst, these relatively simple
metrics can provide insight on their own, or could be used in fusion algorithms as a basis for higher levels of inference.
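As a level 0/1 illustration (the log format and regexes are invented for the example), first order entities such as source IPs can be pulled from a firewall stream and given simple event-count metrics:

```python
# Sketch: first order entity extraction from a log stream. Source IPs
# become entities; per-entity event counts are the kind of simple
# low-level metric that can feed higher fusion levels.
import re
from collections import Counter

LOG = [
    "2012-01-01T00:00:01 DENY tcp 203.0.113.7:4431 -> 198.51.100.2:22",
    "2012-01-01T00:00:02 DENY tcp 203.0.113.7:4432 -> 198.51.100.2:22",
    "2012-01-01T00:00:05 ALLOW tcp 192.0.2.9:51000 -> 198.51.100.3:443",
]
IP_PORT = re.compile(r"(\d{1,3}(?:\.\d{1,3}){3}):(\d+)")

events_per_source = Counter()
for line in LOG:
    pairs = IP_PORT.findall(line)     # [(src_ip, port), (dst_ip, port)]
    if pairs:
        events_per_source[pairs[0][0]] += 1

print(events_per_source.most_common())   # repeated SSH denies stand out
```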
A key challenge for human cybersecurity operators is to develop an understanding of what is happening within, and
to, their network. This understanding, or situation awareness, provides the cognitive basis for human operators to take
action within their environments. Yet developing situation awareness of cyberspace (cyber-SA) is understood to be
extremely difficult given the scope of the operating environment, the highly dynamic nature of the environment and the
absence of physical constraints that serve to bound the cognitive task [2,3]. As a result, human cybersecurity operators are
often "flying blind" regarding understanding the source, nature, and likely impact of malicious activity on their
networked assets. In recent years, many scholars have dedicated their attention to finding ways to improve cyber-SA in
human operators. In this paper we present findings from our ongoing research into how cybersecurity analysts develop
and maintain cyber-SA. Drawing from over twenty interviews of analysts working in the military, government,
industrial, and educational domains, we find cyber-SA to be distributed across human operators and technological
artifacts operating in different functional areas.
While cyberspace is emerging as a new battlefield, conventional Electronic Warfare (EW) methods and applications are
likely to change. The Cyber Electronic Warfare (CEW) concept, which merges cyberspace capabilities with traditional EW
methods, is a new and enhanced form of electronic attack.
In this study, the cyberspace domain of the battlefield is emphasized and the feasibility of integrating the Cyber Warfare (CW)
concept into EW measures is investigated. The SWOT (Strengths, Weaknesses, Opportunities and Threats) analysis
method is used to state the operational advantages of using the CEW concept on the battlefield. The operational advantages
of CEW are assessed by means of its effects on adversary air defense systems, communication networks and information
systems. Outstanding technological and operational difficulties are pointed out as well. As a result, a comparison of
CEW concept and conventional EW applications is presented.
It is concluded that utilization of the CEW concept is feasible on the battlefield and that it may yield important operational
advantages. Even though the computers of modern military systems are less complex than ordinary computers, they are
often assumed not to be subject to cyber threats because they are closed systems. This concept intends to show that these
closed systems are also open to cyber threats. As a result of the SWOT analysis, the CEW concept enables air forces to be
employed effectively in cyber operations. Moreover, since its Collateral Damage Criteria (CDC) is low, the use of cyber
electronic attack systems is likely to grow.
SkyServer is an Internet portal to data from the Sloan Digital Sky Survey, the largest online archive of astronomy
data in the world. It provides free access to hundreds of millions of celestial objects for science, education and
outreach purposes. Logs of accesses to SkyServer comprise around 930 million hits, 140 million web services
accesses and 170 million submitted SQL queries, collected over the past 10 years. These logs also contain
indications of compromise attempts on the servers. In this paper, we examine threats detected in ten years of
stored logs and compare them with threats that were publicly known in those years. We also present an analysis of
how those threats evolved over this period.
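A minimal sketch of one way such logs can be mined for compromise attempts; the signatures below are generic illustrations, not the paper's actual detection rules, and the log lines are synthetic:

```python
# Sketch: scanning stored HTTP/SQL logs for classic attack signatures
# (SQL injection, path traversal, XSS probes). Patterns illustrative.
import re
from collections import Counter

SIGNATURES = {
    "sql_injection": re.compile(r"('|%27)\s*(or|union|--)", re.I),
    "path_traversal": re.compile(r"\.\./\.\./"),
    "xss_probe": re.compile(r"<script", re.I),
}

log_lines = [
    "GET /tools/search/sql.asp?cmd=select 1' union select name--",
    "GET /en/help/docs/../../../etc/passwd",
    "GET /tools/chart/navi.asp?ra=180&dec=0",          # benign query
]
hits = Counter()
for line in log_lines:
    for name, pattern in SIGNATURES.items():
        if pattern.search(line):
            hits[name] += 1
print(hits)        # tallied per year, such counts support trend analysis
```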
Network intrusions leverage vulnerable hosts as stepping stones to penetrate deeper into a network and mask malicious actions from detection. Identifying stepping stones presents a significant challenge because network sessions appear as legitimate traffic. This research focuses on a novel active watermark technique using discrete wavelet transformations to mark and detect interactive network sessions. This technique is scalable, resilient to network noise, and difficult for attackers to discern that it is in use. Previously captured timestamps from the CAIDA 2009 dataset are sent using live stepping stones in the Amazon Elastic Compute Cloud service. The client system sends watermarked and unmarked packets from California to Virginia using stepping stones in Tokyo, Ireland, and Oregon. Five trials are conducted in which the system sends simultaneous watermarked and unmarked samples to each target. The live experiment results demonstrate approximately 5% false positive and 5% false negative detection rates. Additionally, watermark extraction rates of approximately 92% are identified for a single stepping stone. The live experiment results demonstrate the effectiveness of discerning watermarked traffic as applied to identifying stepping stones.
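The abstract does not disclose the embedding details; purely to illustrate the transform mechanics, the sketch below perturbs level-2 detail coefficients of a delay sequence with PyWavelets and recovers the bits against a known reference (a real detector would be blind and keyed; step size and coefficient choice are assumptions):

```python
# Sketch: embed watermark bits in inter-packet delays by shifting DWT
# detail coefficients; "periodization" mode keeps the transform exactly
# invertible at this length.
import numpy as np
import pywt

rng = np.random.default_rng(0)
delays = rng.exponential(0.05, size=128)        # inter-packet delays (s)
bits = [1, 0, 1, 1]                             # watermark payload

coeffs = pywt.wavedec(delays, "db2", mode="periodization", level=2)
step = 0.02                                     # embedding strength
for i, b in enumerate(bits):
    coeffs[1][i] += step if b else -step        # perturb level-2 details
marked = pywt.waverec(coeffs, "db2", mode="periodization")

# Non-blind extraction: compare against the unmarked reference.
ref = pywt.wavedec(delays, "db2", mode="periodization", level=2)[1]
got = pywt.wavedec(marked, "db2", mode="periodization", level=2)[1]
print([int(g - r > 0) for g, r in zip(got[:len(bits)], ref)])  # [1, 0, 1, 1]
```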
In this paper we describe an approach for the detection and classification of weak, distributed patterns in sensor
networks. Of course, before one can begin development of a pattern detection algorithm, one must first define the
term "pattern", which by nature is a broad and inclusive term. One of the key aspects of our work is a definition
of pattern that has already proven effective in detecting anomalies in real-world data. While designing detection
algorithms for all classes of patterns in all types of networks sounds appealing, this approach would almost
certainly require heuristic methods and only cursory statements of performance. Rather, we have specifically
studied the problem of intrusion detection in computer networks, in which a pattern is an abnormal or unexpected
spatio-temporal dependence in the data collected across the nodes. We do not attempt to match an a priori
template, but instead have developed algorithms that allow the pattern to reveal itself in the data by way of
dependence or independence of observed time series. Although the problem is complex and challenging, recent
advances in ℓ1 techniques for robust matrix completion, compressed sensing, and correlation detection provide
promising opportunities for progress. Our key contribution to this body of work is the development of methods
that make an accounting of uncertainty in the measurements on which the inferences are based. The performance
of our methods will be demonstrated on real-world data, including measured data from the Abilene Internet2
network.
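As a crude stand-in for that machinery (not the paper's ℓ1/robust-completion methods), the top eigenvalue of the sample correlation matrix already illustrates how an unexpected dependence across nodes can "reveal itself":

```python
# Sketch: detecting an emergent spatio-temporal dependence as an
# inflated top eigenvalue of the node-by-node correlation matrix.
import numpy as np

rng = np.random.default_rng(0)
normal = rng.normal(size=(20, 500))          # 20 nodes, independent series
common = rng.normal(size=500)
attacked = normal.copy()
attacked[:5] += 0.8 * common                 # 5 nodes become correlated

for name, X in [("normal", normal), ("attacked", attacked)]:
    top = np.linalg.eigvalsh(np.corrcoef(X))[-1]
    print(name, "top correlation eigenvalue:", round(float(top), 2))
# The attacked case stands well above the independent baseline,
# flagging a distributed, coordinated pattern across nodes.
```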
Current efforts aimed at detecting and identifying Near Earth Objects (NEOs) that pose potential risks to Earth use
moderately sized telescopes combined with image processing algorithms to detect the motion of these objects. The
search strategies of such systems involve multiple revisits at given intervals between observations to the same area of the
sky so that objects that appear to move between the observations can be identified against the static star field. The Dynamic
Logic algorithm, derived from Modeling Field Theory, has yielded significant improvements in detection, tracking, and
fusion of ground radar images. As an extension of this, the research in this paper examines Dynamic Logic's ability
to detect NEOs with minimal human-in-the-loop intervention. Although the research in this paper uses asteroids for
automated detection, the ultimate extension of this study is the detection of orbital debris. Many asteroid orbits are well
defined, so they will serve as excellent test cases for our new algorithm application.
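To make the revisit strategy concrete (this is the baseline differencing idea, not the Dynamic Logic algorithm), two registered synthetic frames of a static star field reveal the mover in their difference:

```python
# Sketch: moving-object detection by differencing two registered visits.
# The static star cancels; the object that moved appears twice.
import numpy as np

rng = np.random.default_rng(0)
bg = np.full((64, 64), 10.0)
bg[20, 20] += 50.0                                 # static star
frame1 = bg + rng.normal(0, 1, size=bg.shape)      # first visit
frame2 = bg + rng.normal(0, 1, size=bg.shape)      # revisit
frame1[40, 30] += 30.0                             # asteroid at t1 ...
frame2[40, 33] += 30.0                             # ... has moved by t2

diff = np.abs(frame2 - frame1)
print(np.argwhere(diff > 15).tolist())             # -> [[40, 30], [40, 33]]
```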
There is a need to model complementary aspects of various data channels in distributed sensor networks in order to
provide efficient decision-support tools in rapidly changing, dynamic real-life scenarios. Our aim is to develop an
autonomous cyber-sensing system that supports decision making based on the integration of information from diverse
sensory channels. Target scenarios include dismounts performing various peaceful and/or potentially malicious
activities. The studied test bed includes Ku band high bandwidth radar for high resolution range data and K band low
bandwidth radar for high Doppler resolution data. We embed the physical sensor network in the cyber network domain to
achieve robust and resilient operation in adversarial conditions. We demonstrate the operation of the integrated sensor
system using artificial neural networks for the classification of human activities.
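A minimal sketch of the classification stage, with two invented radar features (e.g., Doppler spread, range extent) standing in for the Ku/K-band measurements:

```python
# Sketch: classifying human activities from radar-derived features with
# a small feed-forward network. Data are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
walking = rng.normal([1.0, 0.2], 0.1, size=(100, 2))
digging = rng.normal([0.3, 0.8], 0.1, size=(100, 2))
X = np.vstack([walking, digging])
y = np.array(["walking"] * 100 + ["digging"] * 100)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict([[0.95, 0.25], [0.35, 0.75]]))   # ['walking' 'digging']
```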
Sensor systems such as distributed sensor networks and radar systems are potentially agile: they have parameters that
can be adjusted in real-time to improve the quality of data obtained for state-estimation and decision-making. The
integration of such sensors with cyber systems involving many users or agents permits greater flexibility in choosing
measurement actions. This paper considers the problem of selecting radar waveforms to minimize uncertainty about the
state of a tracked target. Past work gave a tractable method for optimizing the choice of measurements when an accurate
dynamical model is available. However, prior knowledge about a system is often not precise, for example, if the target
under observation is an adversary. A multiple agent system is proposed to solve the problem in the case of uncertain
target dynamics. Each agent has a different target model and the agents compete to explain past data and select the
parameters of future measurements. Collaboration or competition between these agents determines which obtains access
to the limited physical sensing resources. This interaction produces a self-aware sensor that adapts to changing
information requirements.
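One concrete reading of "selecting waveforms to minimize uncertainty" (under an assumed linear-Gaussian tracking model, not the paper's multi-agent scheme) is to score each candidate waveform by the posterior covariance it would leave after a Kalman update; the noise covariances below are invented:

```python
# Sketch: choose the waveform whose measurement minimizes the trace of
# the posterior state covariance after a Kalman update.
import numpy as np

P = np.diag([4.0, 1.0])                 # prior covariance (range, Doppler)
H = np.eye(2)                           # measurement model

waveforms = {
    "narrowband": np.diag([0.2, 2.0]),  # good range, poor Doppler
    "wideband":   np.diag([2.0, 0.2]),  # poor range, good Doppler
}

def posterior_trace(P, H, R):
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    return np.trace((np.eye(len(P)) - K @ H) @ P)

best = min(waveforms, key=lambda w: posterior_trace(P, H, waveforms[w]))
print("Selected waveform:", best)       # the one that shrinks tr(P) most
```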
In this paper, we apply the Dynamic Logic (DL) algorithm directly to analyze AFRL Gotcha raw data, which are sampled in
terms of chirp frequencies for each pulse. This approach has not been attempted in the existing literature because,
previously, there was no visual or conceptual interpretation of how to detect ground moving targets directly from synthetic
aperture radar (SAR) raw data. Using Gotcha raw data taken from a severe radar clutter environment, we demonstrate that
DL not only maintains its robustness in detecting small ground moving targets but also effectively extracts specific
features such as shape, size, location (along the range swath), movement direction, and traveling speed for the desired
target.
The main goal of a cyberspace environment is to support decision makers with relevant, timely information for
operational use. Cyberspace environments depend on geospatial data including terrestrial, aerial/UAV, satellite and
other multi-sensor data obtained in electro-optical and other imaging domains. Despite advances in automated
geospatial image processing, the "human in the loop" is still necessary because current applications depend upon
complex algorithms and adequate classification rules that can only be provided by skilled geospatial professionals.
Signals extracted from humans may become an element of a cyberspace system. This paper describes research experiments on integrating an EEG device within geospatial technology.
Laser communication systems operate in the presence of strong atmospheric turbulence, which affects the communication
platform through broadening of the laser footprint, random jitter of the laser beam, and high spatial frequency
intensity fluctuations referred to as scintillation. The prediction of the effects induced by the atmospheric
turbulence is a crucial task for reliable data transmission. Equipping the lasercom platform with an adaptive optics
system capable of probing the atmospheric turbulence and generating data on wavefront errors in real
time improves performance and extends the range of optical communications systems. Most adaptive optics
systems implement wavefront sensors to measure the errors induced by the atmospheric turbulence. Real-time
analysis of the data received from the wavefront sensor, used for outgoing laser beam compensation, significantly
improves lasercom performance. To obtain reliable data, the wavefront sensor needs to be accurately aligned
and calibrated. To model the performance of a laser communication system operating in the real world, we have
developed an outdoor 3.2 km, partially over water, turbulence measurement and monitoring communication link.
The developed techniques of wavefront sensor alignment and calibration that led to successful data collection and
analysis are discussed in this paper.
Fast, efficient distributed computing enables much more capable sensing systems for defense and commercial
applications. However, distributed systems face distributed threats. These threats can be countered with a distributed
trustworthiness architecture that measures and enforces trust across a network. Currently, there are no designs for
distributed trust architectures suitable for complex systems. We present such an architecture, which measures nodes'
trustworthiness before they join the network and while they are operational. In order to facilitate the computation and
enforcement of trust, a distributed sensing network has to integrate a new type of component: the trust agent. We define
trust agents in terms of the capabilities that support trustworthiness measurements.
In the area of covert network communications, the focus has been on spread spectrum (SS) techniques using correlated
host data, applicable to many data hiding and covert communications applications. Our work relates to the Iterative
Generalized Least Squares (IGLS) blind signature recovery algorithm of Gkizeli et al. [1], and can be summarized as
follows: (1) We have performed extensive Monte Carlo simulations that characterize the convergence properties of the
algorithm as a function of signature length, host distortion, and number of hidden bits; (2) We have developed and
characterized the behavior of a fully blind extension of the IGLS algorithm, called the BC-IGLS (Blind IGLS using
Clustering); (3) We have developed and performed a characterization study of an extension to the IGLS algorithm, called
the MS-IGLS (Multi-Signature IGLS), that performs blind extraction of multiple signatures in multi-user embedding
applications [2].
We examine the hypothesis that the decision boundary between malware and non-malware is fractal. We introduce a
novel encoding method derived from text mining that converts disassembled programs first into opstrings and then
filters these into a reduced opcode alphabet. These opcodes are enumerated and encoded into real floating point number
format and used for characterizing frequency of occurrence and distribution properties of malware functions to compare
with non-malware functions. We use the concept of invariant moments to characterize the highly non-Gaussian structure
of the opcode distributions. We then derive Data Model based classifiers from identified features and interpolate and
extrapolate the parameter sample space for the derived Data Models. This is done to examine the nature of the parameter
space classification boundary between families of malware and the general non-malware category. Preliminary results
strongly support the fractal boundary hypothesis; a summary of our methods and results is presented here.
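A toy sketch of the encoding pipeline described above (the reduced alphabet and opstring are invented, and the paper's invariant-moment construction is approximated here by ordinary distribution moments):

```python
# Sketch: opstring -> reduced opcode alphabet -> float codes -> moment
# features capturing the highly non-Gaussian opcode distribution.
import numpy as np
from scipy import stats

REDUCED = {"mov": 0, "push": 1, "pop": 1, "add": 2, "sub": 2,
           "jmp": 3, "jne": 3, "call": 4, "ret": 4}

opstring = ["push", "mov", "call", "add", "jne", "mov", "ret", "mov"]
codes = np.array([REDUCED[op] for op in opstring], dtype=float)

features = [codes.mean(),              # 1st moment
            codes.var(),               # 2nd central moment
            stats.skew(codes),         # 3rd: asymmetry
            stats.kurtosis(codes)]     # 4th: tail weight
print(np.round(features, 3))           # feature vector for one function
```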