The future Internet of Things (IoT) will feature ubiquitous and pervasive vision sensors that generate enormous amounts of streaming video. The ability to analyze this big video data in a timely manner is essential to delay-sensitive applications, such as autonomous vehicles and body-worn cameras for police forces. Because of the limited computing power and storage capacity of local devices, the fog computing paradigm has emerged in recent years to process big sensor data closer to the end users, thereby avoiding the transmission delay and large uplink bandwidth requirements of cloud-based data analysis. In this work, we propose an edge-to-fog computing framework for object detection from surveillance videos. Videos are captured locally at an edge device and sent to fog nodes for color-assisted L1-subspace background modeling. The results are then sent back to the edge device for data fusion and final object detection. Experimental studies demonstrate that the proposed color-assisted background modeling offers more diversity than pure luminance-based background modeling and hence achieves higher object detection accuracy. Meanwhile, the proposed edge-to-fog paradigm leverages the computing resources of multiple platforms.
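The abstract does not specify the color-assisted L1-subspace algorithm itself, so as an illustration only, the following is a minimal single-channel sketch of L1-subspace background subtraction: the leading L1 principal component of the frame matrix is estimated with the well-known fixed-point iteration (Kwak, 2008), the background is the projection onto that component, and large residuals are flagged as foreground. All function names, the threshold, and the synthetic test data are assumptions, not the authors' method.

```python
import numpy as np

def l1_principal_component(X, n_iter=100, seed=0):
    """Approximate the leading L1 principal component of X (pixels x frames)
    via the fixed-point iteration w <- X s / ||X s||, s = sign(X^T w).
    This maximizes sum_k |x_k . w|, which is robust to outlier frames."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        s = np.sign(X.T @ w)
        s[s == 0] = 1.0          # avoid zero entries in the sign vector
        w_new = X @ s
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):
            break
        w = w_new
    return w

def detect_foreground(frames, threshold=30.0):
    """Model the background as the mean plus the leading L1 subspace
    component; pixels with large reconstruction residual are foreground.
    frames: array of shape (T, H, W); returns boolean masks of same shape."""
    X = frames.reshape(frames.shape[0], -1).T.astype(float)  # pixels x frames
    mu = X.mean(axis=1, keepdims=True)
    w = l1_principal_component(X - mu)
    background = mu + np.outer(w, w @ (X - mu))   # rank-1 reconstruction
    residual = np.abs(X - background)
    return (residual > threshold).T.reshape(frames.shape)
```

In a full color-assisted variant, one such subspace model per color channel could be maintained and the per-channel masks fused at the edge device, but that fusion rule is likewise not detailed in the abstract.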