Face inpainting is a challenging task in computer vision. Although deep learning-based methods that apply attention mechanisms or exploit prior knowledge can reconstruct facial components, they may produce visual artifacts or lack detailed texture. To address these two problems, we propose a multicolumn gated convolutional network (MGCN). MGCN is composed of three parallel branches with gated convolution that dynamically extract multispatial features, which helps improve global semantic coherence and achieves more robust performance on irregular masks. Specifically, to generate more plausible texture, we develop a diversified perceptual Markov random field that searches for matching feature patches globally rather than only in local image regions. Experiments on the CelebA-HQ and Flickr-Faces-HQ face datasets demonstrate that MGCN achieves more competitive performance than state-of-the-art methods.
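The gating idea underlying MGCN can be illustrated in isolation. The sketch below is not the paper's implementation; it is a minimal NumPy illustration of a single gated convolution, in which a feature branch is modulated elementwise by a learned sigmoid gate so the network can softly suppress responses from invalid (masked) pixels. The kernel shapes and the tanh feature activation are assumptions for the example.

```python
import numpy as np

def conv2d(x, w):
    """Naive 'valid' 2D cross-correlation of a single-channel image x with kernel w."""
    kh, kw = w.shape
    out_h, out_w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_conv2d(x, w_feat, w_gate):
    """Gated convolution: a candidate-feature branch multiplied elementwise by a
    per-pixel soft gate in (0, 1), letting the layer down-weight masked regions."""
    feature = np.tanh(conv2d(x, w_feat))   # candidate features (assumed tanh activation)
    gate = sigmoid(conv2d(x, w_gate))      # learned soft validity mask
    return feature * gate
```

In MGCN this operation replaces vanilla convolution in each of the three parallel branches, which differ in receptive field so that multispatial features are extracted in parallel.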