The human visual system cannot directly discern subtle but meaningful variations, such as facial micro-expressions and structural vibrations. Motion magnification techniques make these variations perceptible to the naked eye. Current approaches to motion magnification generally follow either the Lagrangian or the Eulerian perspective; however, existing methods either require heavy computation or fail to separate tiny but valuable variations from noise. This paper employs multi-task learning to unite the Lagrangian and Eulerian perspectives in a novel motion magnification method. First, the method develops a multi-task network in which optical flow estimation supports motion magnification with accurate motion extraction. Homoscedastic uncertainty is then applied to balance the tasks in the loss function. To support multi-task learning, a simulated dataset is synthesized from real images in public datasets. Finally, the experimental results demonstrate that the proposed method outperforms previous ones and that optical flow can effectively support motion magnification.
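The abstract mentions balancing the optical-flow and magnification tasks via homoscedastic uncertainty. A minimal sketch of that weighting scheme (following the standard formulation of Kendall et al., 2018, not the paper's exact loss, which is not given here) combines per-task losses with learnable log-variance scalars; the task names and example loss values below are hypothetical:

```python
import math

def homoscedastic_loss(task_losses, log_vars):
    """Combine per-task losses using homoscedastic uncertainty weighting:
        total = sum_i( exp(-s_i) * L_i + s_i ),
    where s_i = log(sigma_i^2) is a learnable scalar for task i.
    A large learned s_i down-weights a noisy task; the additive s_i
    term penalizes inflating the uncertainty without bound."""
    assert len(task_losses) == len(log_vars)
    total = 0.0
    for loss, s in zip(task_losses, log_vars):
        total += math.exp(-s) * loss + s
    return total

# Hypothetical per-task losses: optical-flow estimation and magnification.
flow_loss, mag_loss = 0.8, 1.5
combined = homoscedastic_loss([flow_loss, mag_loss], [0.0, 0.0])
print(combined)  # 2.3
```

In a training loop the `log_vars` would be trainable parameters optimized jointly with the network weights, so the task balance is learned rather than hand-tuned.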