Artificial Intelligence applications increasingly use vector embedding techniques to achieve impressive results across many application domains. Semantic Vector Spaces (SVSs) are constructed using semantic vector embedding techniques that learn vector representations of data across multiple domains. An important capability enabled by such techniques is the representation of software services and service workflows as semantic hypervectors. Previous work has shown that these hypervector representations offer significant advantages over alternative schemes for decentralized service workflow construction, particularly in the bandwidth- and energy-constrained environments typical of multi-domain operations. SVS construction usually assumes that all the data required to build the semantic vector space is available centrally. In multi-domain operations, however, different partners may be unwilling to share the training data needed to construct a common multi-domain SVS. As a result, semantic hypervectors that represent similar services or workflows but are constructed by different partners from different training data cannot be discovered and used. In this paper we focus on how semantic hypervectors can be mapped between partner SVSs, so that complementary services and workflows developed and owned by different partners can be discovered and used to achieve mission goals. The paper describes techniques for generating the required mapping with minimal exchange of information between partners; demonstrates how this can be done for semantic hypervectors that use different types of encoding (e.g., real-valued, binary, sparse slot-encoding); and illustrates how the mapping can be implemented in various multi-domain operational settings.
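The idea of mapping between partner SVSs can be sketched in miniature. The example below is an illustrative assumption, not the paper's method: it supposes two partners whose real-valued hypervector spaces differ by an unknown rotation, and that they agree to exchange only a small set of shared anchor vectors. An orthogonal Procrustes fit on those anchors then yields a mapping that transfers unseen hypervectors between the spaces.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: partner B's SVS is an (unknown) rotation of
# partner A's, plus small noise. Neither partner shares training data.
dim = 16
true_rotation, _ = np.linalg.qr(rng.normal(size=(dim, dim)))

# A small set of anchor hypervectors both partners agree to exchange.
anchors_a = rng.normal(size=(20, dim))
anchors_b = anchors_a @ true_rotation + 0.01 * rng.normal(size=(20, dim))

# Orthogonal Procrustes: find the rotation W minimising
# ||anchors_a @ W - anchors_b|| over orthogonal matrices W.
u, _, vt = np.linalg.svd(anchors_a.T @ anchors_b)
w = u @ vt

# Map a previously unseen hypervector from A's space into B's space.
query_a = rng.normal(size=dim)
query_b = query_a @ w

# The mapped vector should lie close to its ground-truth image in B's space.
error = np.linalg.norm(query_b - query_a @ true_rotation)
print(error < 0.5)
```

Only the anchor vectors cross the partner boundary here, which is the spirit of the minimal-exchange requirement; handling binary or sparse slot-encoded hypervectors would need different alignment machinery than this real-valued sketch.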