It is known that the performance of most previous face hallucination approaches drops dramatically when a very low-resolution tiny face is provided. Inspired by recent progress in deep unsupervised learning, this paper works on tiny faces of size 16×16 pixels and magnifies them into their 8× upsampled counterparts by exploiting boundary equilibrium generative adversarial networks (BEGAN). Besides imposing a pixel-wise L2 regularization term on the generative model, we find that our targeted auto-encoding generator with residual blocks and skip connections is a key component enabling BEGAN to achieve state-of-the-art hallucination performance. Experiments are conducted on the cropped CelebA face dataset. The results demonstrate that the proposed approach not only converges quickly and stably, but is also robust to variations in pose, expression, illumination, and occlusion.
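To make the objective concrete, the following is a minimal sketch of the standard BEGAN equilibrium losses with an added pixel-wise L2 term on the generator, as described above. The loss inputs are treated as precomputed scalars, and the function name and the weight `alpha` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def began_losses(L_real, L_fake, k, mse_pixel, gamma=0.5, lam=0.001, alpha=1.0):
    """BEGAN objective with a pixel-wise L2 (MSE) regularizer on the generator.

    L_real:    autoencoder reconstruction loss on real HR faces
    L_fake:    reconstruction loss on generated (hallucinated) faces
    k:         equilibrium variable k_t, kept in [0, 1]
    mse_pixel: pixel-wise L2 between the hallucinated and ground-truth HR face
    gamma:     diversity ratio; lam: learning rate for k
    alpha:     weight of the L2 term (an illustrative choice, not from the paper)
    """
    d_loss = L_real - k * L_fake                      # discriminator objective
    g_loss = L_fake + alpha * mse_pixel               # generator objective + L2 term
    k_next = float(np.clip(k + lam * (gamma * L_real - L_fake), 0.0, 1.0))
    m_global = L_real + abs(gamma * L_real - L_fake)  # BEGAN convergence measure
    return d_loss, g_loss, k_next, m_global
```

For example, `began_losses(0.8, 0.6, 0.0, 0.05)` yields a discriminator loss of 0.8, a generator loss of 0.65, leaves k clamped at 0.0 (its update would go negative), and gives a convergence measure of 1.0. The convergence measure `m_global` is what makes BEGAN training stable and easy to monitor, matching the fast and stable convergence claimed above.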
W. Shao, Jing-jing Xu, Long Chen