SphereHead: Stable 3D Full-head Synthesis with
Spherical Tri-plane Representation

1The Chinese University of Hong Kong, Shenzhen   2University of Wisconsin-Madison

Abstract

While recent advances in 3D-aware Generative Adversarial Networks (GANs) have aided the development of near-frontal view human face synthesis, the challenge of comprehensively synthesizing a full 3D head viewable from all angles still persists. Although PanoHead demonstrates the possibility of using a large-scale dataset with images of both frontal and back views for full-head synthesis, it often produces artifacts in back views. Our in-depth analysis shows that the reasons are mainly twofold. First, from the network architecture perspective, each plane in the adopted tri-plane/tri-grid representation tends to confuse features from both sides of the head, causing "mirroring" artifacts (e.g., glasses appearing on the back of the head). Second, from the data supervision perspective, existing discriminator training in 3D GANs mainly focuses on the quality of the rendered image itself and pays little attention to whether the image is plausible for the viewpoint from which it was rendered. This makes it possible to generate a "face" in non-frontal views, since such images easily fool the discriminator. In response, we propose SphereHead, a novel tri-plane representation in the spherical coordinate system that fits the human head's geometric characteristics and effectively mitigates many of the generated artifacts. We further introduce a view-image consistency loss for the discriminator to emphasize the correspondence between camera parameters and images. The combination of these efforts produces visually superior results with significantly fewer artifacts.
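To make the representation concrete, the sketch below illustrates the general idea of querying a tri-plane parameterized in spherical rather than Cartesian coordinates: a 3D point is converted to (theta, phi, r) and features are bilinearly sampled from three 2D maps indexed by pairs of these coordinates. The plane layout, normalization ranges, and function names are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch (not the authors' code) of spherical tri-plane feature sampling.
# Assumed layout: three feature maps over (theta, phi), (theta, r), (phi, r).
import torch
import torch.nn.functional as F

def cartesian_to_spherical(xyz, r_max=1.0):
    """Map (x, y, z) to (theta, phi, r), each normalized to [-1, 1] for grid_sample."""
    x, y, z = xyz.unbind(-1)
    r = xyz.norm(dim=-1).clamp(min=1e-8)
    theta = torch.acos((z / r).clamp(-1.0, 1.0))   # polar angle in [0, pi]
    phi = torch.atan2(y, x)                        # azimuth in [-pi, pi]
    theta_n = theta / torch.pi * 2.0 - 1.0
    phi_n = phi / torch.pi
    r_n = (r / r_max).clamp(0.0, 1.0) * 2.0 - 1.0
    return theta_n, phi_n, r_n

def sample_spherical_triplane(planes, xyz):
    """planes: (3, C, H, W) feature maps for the (theta,phi), (theta,r), (phi,r) planes.
    xyz: (N, 3) query points. Returns (N, 3*C) concatenated features."""
    theta_n, phi_n, r_n = cartesian_to_spherical(xyz)
    coords = [torch.stack(p, dim=-1) for p in
              [(phi_n, theta_n), (r_n, theta_n), (r_n, phi_n)]]  # (x, y) order for grid_sample
    feats = []
    for plane, uv in zip(planes, coords):
        grid = uv.view(1, -1, 1, 2)                        # (1, N, 1, 2)
        f = F.grid_sample(plane.unsqueeze(0), grid,        # (1, C, N, 1)
                          mode='bilinear', align_corners=False)
        feats.append(f.squeeze(0).squeeze(-1).t())         # (N, C)
    return torch.cat(feats, dim=-1)
```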

Artifacts Addressed

Two main types of face artifacts addressed in this work. All cases are sampled from PanoHead's latent space. (a-b) We call the first type mirroring-face artifacts, because the back face precisely mirrors the identity, expression, and accessories of the front face. (c-d) We call the second type multiple-face artifacts, because in this scenario there may be more than one fake face, and their identities, expressions, and accessories differ from those of the front face.

Dual Spherical Tri-plane Representation

(a) Tri-plane representation. (b) Spherical tri-plane representation. Reconstructed head geometry from a single sphere, (c) sphere A and (d) sphere B, each showing (i) seam artifacts and (ii) polar artifacts. (e) The combination of the two spheres in the dual spherical tri-plane representation, with (i) the seam of sphere A and (ii) the seam of sphere B. (f) Fusion weight map. (g-h) For each sphere, the weight approaches zero as locations near the seam and the poles.
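The following sketch shows one plausible way to realize the fusion described above: each sphere's weight decays to zero near its own seam and poles, and the two spheres' features are blended with normalized weights. The specific weight functions and variable names here are assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative sketch of dual-sphere feature fusion (assumed weighting, not the paper's exact one).
import torch

def sphere_weight(theta, phi):
    """Weight in [0, 1] that decays to zero at the poles and at the azimuthal seam."""
    pole_term = torch.sin(theta)                # 0 at the poles, 1 at the equator
    seam_term = 0.5 * (1.0 + torch.cos(phi))    # 0 at phi = +/-pi (seam), 1 at phi = 0
    return pole_term * seam_term

def fuse_dual_sphere(f_A, f_B, theta_A, phi_A, theta_B, phi_B, eps=1e-6):
    """f_A, f_B: (N, C) features sampled from spheres A and B.
    theta_*, phi_*: (N,) spherical coordinates of the query points in each sphere's frame
    (sphere B is assumed to be oriented so that its seam and poles lie elsewhere)."""
    w_A = sphere_weight(theta_A, phi_A).unsqueeze(-1)
    w_B = sphere_weight(theta_B, phi_B).unsqueeze(-1)
    return (w_A * f_A + w_B * f_B) / (w_A + w_B + eps)
```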

Framework

The framework of our proposed SphereHead. Given a sampled latent code \(z\) and camera parameters \(c\), SphereHead synthesizes spherical tri-plane features \(f_F\) by fusing two sub-feature groups \(f_A\) and \(f_B\). By volumetric rendering with features sampled from \(f_F\), SphereHead generates high-quality, view-consistent full-head images \(I^{+}\). Guided by our view-image consistency loss, the discriminator learns to focus on the alignment between images and their viewpoints through additional negative data pairs consisting of real images and mismatched camera labels \(c_s\).
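The view-image consistency idea can be sketched as follows: on top of the usual real/fake terms of a camera-conditioned discriminator, real images paired with shuffled camera labels are treated as negatives. The loss form, weighting, and the discriminator signature D(img, c) are illustrative assumptions rather than the paper's exact objective.

```python
# Hedged sketch of a discriminator loss with view-image consistency negatives.
import torch
import torch.nn.functional as F

def discriminator_loss(D, real_img, fake_img, c, lambda_consist=1.0):
    """D(img, c) -> logits. c: (B, cam_dim) camera parameters of the images."""
    logits_real = D(real_img, c)
    logits_fake = D(fake_img.detach(), c)

    # Mismatched pairs: real images with camera labels permuted across the batch.
    c_s = c[torch.randperm(c.shape[0], device=c.device)]
    logits_mismatch = D(real_img, c_s)

    loss = (F.softplus(-logits_real).mean()                         # real image, correct view -> real
            + F.softplus(logits_fake).mean()                        # fake image -> fake
            + lambda_consist * F.softplus(logits_mismatch).mean())  # real image, wrong view -> fake
    return loss
```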

Baseline Comparison

Qualitative comparison with state-of-the-art methods. (a) GIRAFFE HD, (b) StyleSDF, and (c) EG3D fail to capture the complete head geometry and appearance. (d-f) PanoHead generates complete heads, but the results suffer from mirroring artifacts ((d) left-right identical mirroring artifacts and (f) mirroring-face artifacts) and (e) multiple-face artifacts. (g-l) Our SphereHead synthesizes full-head images of high visual quality, free of the artifacts exhibited by other methods.

Citation


The website template was borrowed from Michaël Gharbi and Ref-NeRF.