It is becoming more common for search and track algorithms to need to account for observations that arise from both radio frequency (RF) and electro-optical/infrared (EO/IR) measurements in the same scenario. Development of novel algorithms for search and track applications requires measured or synthetically generated data, yet such data frequently covers only one modality or the other. Historically, the synthetic data generation processes for RF and EO/IR developed independently of one another and did not share a common sense of “truth” about the environment or the objects within the simulation. This lack of a common framework with a consistent environment and platform representation between the two sensing modalities can lead to errors in the algorithm development process. For example, if the RF data assumed one set of atmospheric conditions while the EO/IR data assumed a different set, the RF modality could over- or under-perform relative to the EO/IR. To address this issue, the Georgia Tech Research Institute (GTRI) has developed the General High-fidelity Omni-Spectrum Toolbox (GHOST), a plug-and-play simulation architecture for generating high-fidelity EO/IR and RF synthetic data for search and track algorithm development. Additionally, because GHOST is plug-and-play, it can potentially provide synthetic or measured results to developmental algorithms without requiring changes to the algorithm’s interface. This paper presents GTRI’s efforts to extend GHOST into the RF domain and presents sample results from search and track algorithm development. It also looks forward to how GHOST is being adapted to accommodate measured data alongside synthetic data for improved algorithm development.
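A minimal sketch of the source-agnostic interface idea described above, assuming a hypothetical observation-provider contract; the class and method names are illustrative and are not GHOST's actual API.

```python
# Sketch: developmental algorithms consume observations through one
# contract, so a synthetic source and a measured-data replay source
# can be swapped without touching the algorithm. Names are hypothetical.
from abc import ABC, abstractmethod

class ObservationSource(ABC):
    """Single interface through which algorithms receive observations."""
    @abstractmethod
    def next_observation(self) -> dict:
        ...

class SyntheticSource(ObservationSource):
    def next_observation(self):
        # A high-fidelity simulation would generate this measurement.
        return {"sensor": "RF", "range_m": 1234.0, "synthetic": True}

class MeasuredSource(ObservationSource):
    def __init__(self, records):
        self._records = iter(records)  # replay recorded field data
    def next_observation(self):
        return next(self._records)

def track_update(source: ObservationSource):
    # The algorithm never learns which kind of source it was handed.
    obs = source.next_observation()
    return obs["range_m"]

print(track_update(SyntheticSource()))
print(track_update(MeasuredSource([{"sensor": "RF", "range_m": 1201.5}])))
```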
KEYWORDS: Interpolation, Atmospheric corrections, Computer simulations, Computation time, Systems modeling, Modeling, Knowledge management, Design and modelling, Data modeling, Algorithm development
A common constraint in synthetic data generation is the need to evaluate time- and resource-intensive equations to model physical systems of interest. In fact, one often needs to evaluate many such models to build up to the real system of interest. In some cases, it is possible to identify a key set of independent variables that govern the equations of interest, and one can build a lookup table for interpolation. The downside to this strategy, however, is that substantial computational resources are spent computing values that may never be used during a simulation. In this paper, we present a new strategy to lazily evaluate complex calculations, building these multi-dimensional lookup tables as needed. The technique relies on the fact that some models can reuse partial calculations to generate multiple results in a single invocation. This allows generating a base table in the neighborhood of the initial point of interest; the table then grows as the parameter space expands. This reduces the initial computational cost, and the resultant table can be saved for reuse if desired. In a multiprocessing environment, it would also be possible to generate additional table entries in parallel if those points of interest are known in advance. As a specific example, we apply this technique to computing atmospheric corrections for synthetic image generation.
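A minimal sketch of the lazy table-building idea, assuming a hypothetical expensive model that returns transmittance over a whole grid of ranges in one invocation (the reusable partial calculation mentioned above); the grids and the stand-in model are illustrative only.

```python
# Sketch: columns of the lookup table are computed on first use and
# cached, so the table grows only where the simulation actually queries.
import numpy as np

class LazyAtmosTable:
    def __init__(self, model, range_grid, angle_step=1.0):
        self.model = model              # expensive per-angle solver
        self.range_grid = range_grid    # ranges returned per invocation
        self.angle_step = angle_step    # table resolution in degrees
        self.columns = {}               # angle bin -> transmittance column

    def _column(self, angle_bin):
        # Evaluate the model only when a column is first requested,
        # then cache it so the table grows with the parameter space.
        if angle_bin not in self.columns:
            self.columns[angle_bin] = self.model(angle_bin * self.angle_step)
        return self.columns[angle_bin]

    def transmittance(self, zenith_angle, slant_range):
        # Linear interpolation between the two bracketing angle columns,
        # then along range within the blended column.
        a = zenith_angle / self.angle_step
        lo = int(np.floor(a))
        w = a - lo
        col = (1.0 - w) * self._column(lo) + w * self._column(lo + 1)
        return np.interp(slant_range, self.range_grid, col)

# Illustrative stand-in for the real radiative-transfer code:
ranges = np.linspace(0.0, 20.0, 64)                    # km
fake_model = lambda ang: np.exp(-0.05 * ranges / np.cos(np.radians(ang)))
table = LazyAtmosTable(fake_model, ranges)
print(table.transmittance(32.4, 5.0))                  # computes two columns
```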
In operational environments, pilots of rotorcraft such as the Apache, Blackhawk, and other variants have been involved in catastrophic accidents due to the pilots’ inability to rely on visual indicators for landing. In 2019, the Army reported that over the preceding 10 years there had been 87 rotorcraft accidents due to Degraded Visual Environments (DVE)—resulting in 122 fatalities and over $1.18B in material losses [1]. This phenomenon poses a formidable hazard to advanced tilt rotor platforms. Dust clouds, rain, and other meteorological effects can obscure or degrade instrument readings and make it difficult for pilots to navigate safely—especially during takeoff and landing. In these types of degraded visibility environments, pilots must depend on instruments for situational awareness, making accurate sensing and reporting crucial to a real-time understanding of the environment. This research is intended to address gaps in the rotorcraft Hardware-In-The-Loop (HWIL) DVE simulation systems currently in use. Specifically, the research is intended to produce a physics-based, realistic representation of DVE conditions for a HWIL simulator to demonstrate the impact of DVE on sensor emulator performance. DVE testing in a simulated environment requires a representation of rotor-induced aerosol concentration around the aircraft. Additionally, the simulation requires the ability to visualize the degradation of sensor performance by rotor-induced aerosols [2]. Having end-to-end control over the physical model, it is possible to extend the effects of DVE on sensors beyond textures and statistical models to physics-based models.
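A minimal sketch of one way rotor-induced aerosols can degrade a sensor return, using Beer-Lambert attenuation along the line of sight; the extinction cross section and aerosol profile below are illustrative assumptions, not the paper's validated physics model.

```python
# Sketch: integrate aerosol extinction along the sight line to get the
# fraction of signal that survives the dust cloud. Numbers are hypothetical.
import numpy as np

def transmittance(concentration, extinction_cross_section, path_step):
    # concentration: aerosol number density sampled along the ray (1/m^3)
    # extinction_cross_section: per-particle extinction (m^2)
    # path_step: sample spacing along the line of sight (m)
    optical_depth = np.sum(concentration * extinction_cross_section) * path_step
    return np.exp(-optical_depth)

# Dust densest near the rotor wash, thinning along a descending sight
# line (hypothetical exponential profile):
samples = np.linspace(0.0, 50.0, 200)            # 50 m path, 200 samples
density = 1.0e9 * np.exp(-samples / 10.0)        # particles per m^3
tau = transmittance(density, 5.0e-12, samples[1] - samples[0])
print(f"apparent signal fraction: {tau:.3f}")
```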
One major struggle for modeling and simulation (M&S) over the past decades has been the development of individual models in isolation. Typically, models are developed for a single application area, where they tend to become domain specific as the complexity of a single model grows. When a future application requires interaction of multiple M&S approaches that have developed independently, it is difficult, if not impossible, for the models to integrate into a common environment. Compounding this difficulty, the models have likely developed disparate concepts of the world in which they operate. A prime example of this effect is the development of infrared (IR) and radio frequency (RF) models, which exhibit different large-scale phenomenology and have, therefore, developed as separate M&S domains. Attempting to combine the two modalities through integration of existing M&S tools specific to each application domain has historically proven nearly impossible. These factors led to the development of the Dynamic Model Integration and Simulation Engine (DMISE), which provides a flexible and extensible framework for integrating different models into a common simulation by defining the interfaces for the simulation components. For multi-spectral IR and RF simulations, the General High-Fidelity Omni-Spectral Toolbox (GHOST) has been built on the DMISE framework to allow for integration of models across the electromagnetic spectrum. This paper presents GHOST and the status of the current effort to provide a true multi-spectral, multi-sensor, and multi-actor M&S environment through simulation of scenarios with combined IR and RF sensors operating in a common environment.
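A minimal sketch of the interface-driven integration idea, assuming a hypothetical component contract; the actual DMISE interfaces are not published here, so the class and method names are illustrative only.

```python
# Sketch: the engine defines the contract; IR and RF models plug in
# behind it and observe one shared scene state, giving both modalities
# the same "truth" about the environment.
from abc import ABC, abstractmethod

class SensorModel(ABC):
    """Contract every plug-in sensor model must satisfy."""
    @abstractmethod
    def observe(self, scene_state: dict, time_s: float) -> dict:
        ...

class IrSensor(SensorModel):
    def observe(self, scene_state, time_s):
        # Domain-specific IR physics would live here.
        return {"modality": "IR", "t": time_s}

class RfSensor(SensorModel):
    def observe(self, scene_state, time_s):
        # Domain-specific RF physics would live here.
        return {"modality": "RF", "t": time_s}

class SimulationEngine:
    """Holds one shared scene state so all models see the same world."""
    def __init__(self, scene_state):
        self.scene_state = scene_state
        self.sensors = []

    def register(self, sensor: SensorModel):
        self.sensors.append(sensor)

    def step(self, time_s):
        # Every registered model observes the identical environment.
        return [s.observe(self.scene_state, time_s) for s in self.sensors]

engine = SimulationEngine({"atmosphere": "mid-latitude summer"})
engine.register(IrSensor())
engine.register(RfSensor())
print(engine.step(0.0))
```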
A key component of a night scene background on a clear moonless night is the stellar background. Celestial objects affected by atmospheric distortions and optical system noise become the primary source of clutter for detection and tracking algorithms while at the same time providing a solid geolocation or time reference due to their highly predictable motion. Any detection algorithm that needs to operate on a clear night must take the stellar background into account and remove it via background subtraction methods. As with any scenario, the ability to develop detection algorithms depends on the availability of representative data to evaluate the difficulty of the task. Further, the acquisition of measured field data under arbitrary atmospheric conditions is difficult if not impossible. For this reason, a radiometrically accurate simulation of the stellar background is a boon to algorithm developers. To aid in simulating the night sky, we have incorporated a star-field rendering model into the Georgia Tech Simulations Integrated Modeling System (GTSIMS). Rendering a radiometrically accurate star-field requires three major components: positioning the stars as a function of time and observer location, determining the in-band radiance of each star, and simulating the apparent size of each star. We present the models we have incorporated into GTSIMS and provide a representative sample of the images generated with the new model. We then demonstrate how the clutter in the neighborhood of a pixel changes when a radiometrically accurate star-field rendering is included.
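A minimal sketch of two of the three components named above: star positioning (RA/Dec to altitude/azimuth for a given observer and time) and in-band irradiance from magnitude. The zero-magnitude band irradiance E0 below is an illustrative placeholder, not the calibrated value used in GTSIMS.

```python
# Sketch: standard spherical-triangle conversion to horizon coordinates,
# plus Pogson's relation for magnitude-to-irradiance.
import numpy as np

def altaz(ra_deg, dec_deg, lat_deg, lst_hours):
    # Hour angle from local sidereal time, then altitude/azimuth
    # (azimuth measured from north through east).
    H = np.radians(lst_hours * 15.0 - ra_deg)
    dec, lat = np.radians(dec_deg), np.radians(lat_deg)
    alt = np.arcsin(np.sin(dec) * np.sin(lat)
                    + np.cos(dec) * np.cos(lat) * np.cos(H))
    az = np.arctan2(-np.cos(dec) * np.sin(H),
                    np.sin(dec) * np.cos(lat)
                    - np.cos(dec) * np.sin(lat) * np.cos(H))
    return np.degrees(alt), np.degrees(az) % 360.0

def band_irradiance(magnitude, e0=3.0e-9):
    # Each magnitude step is a factor of 10**0.4; e0 (W/m^2 at
    # magnitude 0 in the band of interest) is an assumed constant.
    return e0 * 10.0 ** (-0.4 * magnitude)

# Vega (RA ~18.616 h, Dec +38.78 deg) from latitude 34 N at LST = 18 h:
alt, az = altaz(18.616 * 15.0, 38.78, 34.0, 18.0)
print(f"alt={alt:.1f} deg, az={az:.1f} deg, "
      f"E={band_irradiance(0.03):.2e} W/m^2")
```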
A core component of modeling visible and infrared sensor responses is the ability to faithfully recreate background noise and clutter in a synthetic image. Most tracking and detection algorithms use a combination of signal-to-noise and clutter-to-noise ratios to determine if a signature is of interest. A primary source of clutter is the background that defines the environment in which a target is placed. Over the past few years, the Electro-Optical Systems Laboratory (EOSL) at the Georgia Tech Research Institute has made significant improvements to its in-house simulation framework GTSIMS. First, we have expanded our terrain models to include the effects of terrain orientation on emission and reflection. Second, we have included the ability to model dynamic reflections with full BRDF support. Third, we have added the ability to render physically accurate cirrus clouds. And finally, we have updated the overall rendering procedure to reduce the time necessary to generate a single frame by taking advantage of hardware acceleration. Here, we present the updates to GTSIMS to better predict clutter and noise due to non-uniform backgrounds. Specifically, we show how the addition of clouds, terrain, and improved non-uniform sky rendering improves our ability to represent clutter during scene generation.
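A minimal sketch of the signal-to-noise and clutter-to-noise metrics that such detection algorithms apply to a rendered frame; the window size and noise model below are illustrative assumptions, not GTSIMS internals.

```python
# Sketch: estimate SNR from the target excess over the local background
# mean, and CNR from the spatial standard deviation of that background.
import numpy as np

def snr_cnr(frame, target_row, target_col, bg_halfwidth=8, noise_sigma=2.0):
    r, c, w = target_row, target_col, bg_halfwidth
    background = frame[r - w:r + w + 1, c - w:c + w + 1].copy()
    background[w, w] = np.nan                # exclude the target pixel
    mean_bg = np.nanmean(background)
    clutter_sigma = np.nanstd(background)    # spatial clutter estimate
    signal = frame[r, c] - mean_bg
    return signal / noise_sigma, clutter_sigma / noise_sigma

# Synthetic frame: smooth background gradient, sensor noise, point target.
rng = np.random.default_rng(0)
frame = np.linspace(100.0, 120.0, 64)[None, :] * np.ones((64, 1))
frame += rng.normal(0.0, 2.0, frame.shape)
frame[32, 32] += 25.0                        # inject a point target
snr, cnr = snr_cnr(frame, 32, 32)
print(f"SNR={snr:.1f}, CNR={cnr:.1f}")
```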