Camera jitter can degrade object tracking accuracy, making object tracking and trajectory analysis considerably harder. To achieve accurate video stabilization, the camera's motion can be analyzed and predicted from the preceding camera jitter sequence. In the area of sequence prediction, the long short-term memory (LSTM) network has shown strong forecasting ability, so we apply an LSTM network to camera jitter prediction and video stabilization. In this paper, we propose a video stabilization algorithm based on a multi-region grey projection method and an LSTM encoder-decoder network. The algorithm estimates camera motion from the grey projections of four regions in each frame and then separates the camera's main movement direction from its jitter. The LSTM encoder-decoder network receives the camera jitter sequence, predicts the upcoming jitter, and stabilizes the video. To verify the performance of the proposed method, we tested it on jittery videos built from VisDrone dataset videos modified with our recorded camera jitter. Experimental results demonstrate that the proposed method achieves real-time video stabilization and improves the accuracy of object tracking and trajectory analysis.
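The full paper is not reproduced here, but the general shape of the two components described in the abstract can be sketched. The first sketch illustrates grey-projection motion estimation over four frame regions: each region is collapsed into row and column intensity profiles, and the inter-frame shift is found by searching for the offset that best aligns the profiles. The quadrant layout, the search range `max_shift`, and all function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def grey_projection(region):
    """Column and row grey-projection profiles of a greyscale region."""
    col_proj = region.sum(axis=0).astype(np.float64)  # horizontal profile
    row_proj = region.sum(axis=1).astype(np.float64)  # vertical profile
    # Remove the mean so the match responds to structure, not overall brightness.
    return col_proj - col_proj.mean(), row_proj - row_proj.mean()

def estimate_shift(proj_prev, proj_curr, max_shift=30):
    """Integer shift (pixels) that best aligns the current profile with the previous one."""
    n = len(proj_prev)
    best_shift, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        a = proj_prev[max(0, s): n + min(0, s)]
        b = proj_curr[max(0, -s): n + min(0, -s)]
        err = np.mean((a - b) ** 2)
        if err < best_err:
            best_err, best_shift = err, s
    return best_shift

def region_motion(prev_frame, curr_frame):
    """Per-region (dx, dy) estimates from the four quadrants of a greyscale frame."""
    h, w = prev_frame.shape
    regions = [(slice(0, h // 2), slice(0, w // 2)),
               (slice(0, h // 2), slice(w // 2, w)),
               (slice(h // 2, h), slice(0, w // 2)),
               (slice(h // 2, h), slice(w // 2, w))]
    motions = []
    for rs, cs in regions:
        cp_prev, rp_prev = grey_projection(prev_frame[rs, cs])
        cp_curr, rp_curr = grey_projection(curr_frame[rs, cs])
        dx = estimate_shift(cp_prev, cp_curr)
        dy = estimate_shift(rp_prev, rp_curr)
        motions.append((dx, dy))
    return motions
```

The per-region estimates would then be combined and the intentional camera movement separated from the jitter, as the abstract describes; how that filtering is done is a detail of the paper not shown here. The second sketch is a minimal LSTM encoder-decoder for jitter prediction: the encoder summarizes the observed per-frame jitter sequence, and the decoder autoregressively predicts the next few jitter values. The hidden size, prediction horizon, and one-step decoding scheme are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class JitterSeq2Seq(nn.Module):
    """LSTM encoder-decoder mapping a past jitter sequence to a predicted one.

    Inputs and outputs are (batch, time, 2) tensors of per-frame (dx, dy) jitter.
    """
    def __init__(self, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.decoder = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, past, horizon=10):
        _, state = self.encoder(past)       # summarize the observed jitter sequence
        step = past[:, -1:, :]               # seed decoding with the last observation
        preds = []
        for _ in range(horizon):             # autoregressive one-step decoding
            out, state = self.decoder(step, state)
            step = self.head(out)            # predicted (dx, dy) for the next frame
            preds.append(step)
        return torch.cat(preds, dim=1)       # (batch, horizon, 2)
```

The predicted jitter could then be used to compute a compensating warp for each incoming frame, which is the stabilization step the abstract refers to.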