Hyperdimensional computing (HDC) is a machine learning paradigm that is not based on the ubiquitous artificial neural network (ANN). Instead of neurons and synapses, HDC implements online learning by manipulating very large vectors, with the correlations among vectors measured by a similarity metric. This approach readily affords one-shot learning, transfer learning, and native error correction, all of which remain standing challenges for traditional ANNs. Furthermore, implementations using binary {0,1} vectors are particularly attractive for size, weight, and power (SWaP) constrained systems, especially disposable robotics. This paper is the first to identify and formalize a method for completely cloning trained hyperdimensional behavior vectors: using shift maps, d-1 unique clones can be generated from a parent vector of length d. In addition, expeditionary robots with extraneous sensors were trained via HDC to solve a maze even when up to 75% of the sensors fed irrelevant data to the robot. Lastly, we demonstrate the resiliency of this encoding to random bit flips and show how different network topologies contribute to dynamic reprogramming of HDC robots. HDC is presented here not to replace ANNs but to encourage integration of these complementary ML paradigms.
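The shift-map cloning described above can be sketched with cyclic shifts of a binary hypervector. This is a minimal illustration based only on the abstract, assuming cyclic shifts as the shift map and Hamming agreement as the similarity metric; the paper's exact construction may differ. It shows that the d-1 nonzero shifts of a random parent vector are mutually distinct yet each is near-orthogonal (similarity near the 0.5 chance level) to the parent.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1024  # hypervector dimension (illustrative choice)

# Parent behavior vector: a random binary {0,1} hypervector.
parent = rng.integers(0, 2, size=d, dtype=np.int8)

# Shift-map cloning (assumed here to be cyclic shifts): each of the
# d-1 nonzero shifts yields a distinct clone of the parent vector.
clones = [np.roll(parent, k) for k in range(1, d)]

def hamming_similarity(a, b):
    """Fraction of matching bits: 1.0 = identical, ~0.5 = unrelated."""
    return float(np.mean(a == b))

# A random aperiodic parent makes all d-1 clones unique, and each
# clone sits near chance-level similarity to the parent.
unique_clones = {c.tobytes() for c in clones}
sims = [hamming_similarity(parent, c) for c in clones]
```

Near-chance similarity is what makes the clones useful: each can be distinguished from the parent (and from one another) by the same similarity test used throughout HDC.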