Presentation + Paper
6 June 2024 Bridging the AI/ML gap with explainable symbolic causal models using information theory
Abstract
We report favorable preliminary findings of work in progress bridging the Artificial Intelligence (AI) gap between bottom-up, data-driven Machine Learning (ML) and top-down, conceptually driven symbolic reasoning. Our overall goal is the automatic generation, maintenance and utilization of explainable, parsimonious, plausibly causal, probably approximately correct, hybrid symbolic/numeric models of the world, the self and other agents, for prediction, what-if (counterfactual) analysis and control. Our earlier Evolutionary Learning with Information Theoretic Evaluation of Ensembles (ELITE2) techniques quantify the strengths of arbitrary multivariate nonlinear statistical dependencies before discovering the forms by which observed variables may drive others. We extend these techniques to apply Granger causality, expressed in terms of conditional Mutual Information (MI), to distinguish causal relationships and determine their directions. Because MI can reflect one observable driving a second directly or via a mediator, two observables being driven by a common cause, etc., we will apply Pearl causality, with its back-door and front-door adjustments and criteria, to untangle the causal graph. Initial efforts verified that our information theoretic indices detect causality in noise-corrupted data despite complex relationships among hidden variables with chaotic dynamics disturbed by process noise. The next step is to apply these information theoretic filters in Genetic Programming (GP) to reduce the population of discovered statistical dependencies to plausibly causal relationships, represented symbolically for use by a reasoning engine in a cognitive architecture. Success could bring broader generalization, using not just learned patterns but learned general principles, enabling AI/ML based systems to autonomously navigate complex unknown environments and handle “black swans”.
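The Granger-style test sketched in the abstract (does the past of one observable reduce uncertainty about the future of another, beyond what that second observable's own past explains?) can be illustrated with a simple histogram-binned conditional MI estimator. This is a minimal sketch under stated assumptions, not the authors' ELITE2 implementation: the plug-in entropy estimator, the bin count, the single lag, and the synthetic driven/driver pair are all illustrative choices.

```python
import numpy as np

def cond_mutual_info(x, y, z, bins=6):
    """Plug-in estimate of I(X; Y | Z) from binned samples,
    using I(X;Y|Z) = H(X,Z) + H(Y,Z) - H(Z) - H(X,Y,Z)."""
    def entropy(*cols):
        hist, _ = np.histogramdd(np.column_stack(cols), bins=bins)
        p = hist.ravel() / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log(p))  # nats
    return entropy(x, z) + entropy(y, z) - entropy(z) - entropy(x, y, z)

def granger_mi(x, y, lag=1, bins=6):
    """Information-theoretic Granger index for 'Y drives X':
    I(X_t; Y_{t-lag} | X_{t-lag}), i.e. transfer entropy at one lag."""
    return cond_mutual_info(x[lag:], y[:-lag], x[:-lag], bins=bins)

# Synthetic example: y drives x through noise, not the reverse.
rng = np.random.default_rng(0)
n = 20000
y = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.8 * y[t - 1] + 0.2 * rng.normal()

print(granger_mi(x, y))  # substantial: y's past informs x's future
print(granger_mi(y, x))  # near zero: x's past adds nothing about y
```

The asymmetry of the two indices is what supplies the causal direction; as the abstract notes, MI alone cannot separate direct drive from mediation or a common cause, which is where the Pearl-style adjustments come in.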
Conference Presentation
(2024) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Stuart W. Card "Bridging the AI/ML gap with explainable symbolic causal models using information theory", Proc. SPIE 13058, Disruptive Technologies in Information Sciences VIII, 1305802 (6 June 2024); https://doi.org/10.1117/12.3014447
KEYWORDS
Information theory
Data modeling
Dynamical systems
Artificial intelligence
Machine learning
Navigation systems
Neodymium