HumanGen: Generating Human Radiance Fields with Explicit Priors

A 3D human generation scheme with detailed geometry and 360° realistic free-view rendering.


Suyi Jiang, Haoran Jiang, Ziyu Wang, Haimin Luo, Wenzheng Chen, Lan Xu

Paper

Recent years have witnessed tremendous progress in 3D GANs for generating view-consistent, photo-realistic radiance fields. Yet, high-quality generation of human radiance fields remains challenging, partially due to the limited human-related priors adopted in existing methods. We present HumanGen, a novel 3D human generation scheme with detailed geometry and 360° realistic free-view rendering. It explicitly marries 3D human generation with various priors from a 2D human generator and a 3D human reconstructor through the design of an “anchor image”. We introduce a hybrid feature representation that uses the anchor image to bridge the latent space of HumanGen with the existing 2D generator. We then adopt a two-pronged design to disentangle the generation of geometry and appearance. With the aid of the anchor image, we adapt a 3D reconstructor for fine-grained detail synthesis and propose a two-stage blending scheme to boost appearance generation. Extensive experiments demonstrate the effectiveness of our approach for state-of-the-art 3D human generation in terms of geometry detail, texture quality, and free-view performance. Notably, HumanGen can also incorporate various off-the-shelf 2D latent editing methods, seamlessly lifting them into 3D.

Pipeline


Our key idea in HumanGen is to organically leverage a 2D human generator and a 3D human reconstructor as explicit priors within a 3D GAN-like framework. HumanGen consists of three modules. The hybrid feature module includes the anchor image and tri-plane feature generation. The geometry module includes the reconstruction prior and SDF adaptation. The texture module includes sphere-tracing-based volume rendering, texture and blending-weight fields, and two-stage blending.
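Below is a minimal PyTorch-style sketch of how these three modules might fit together in a single forward pass. The class name, sub-network interfaces, and call signatures (anchor_generator, sphere_trace, blend, etc.) are illustrative assumptions for exposition, not the released implementation.

# A minimal sketch of the three-module pipeline described above.
# All module names, sub-networks, and call signatures are assumptions.
import torch.nn as nn

class HumanGenSketch(nn.Module):
    def __init__(self, anchor_generator, triplane_generator,
                 sdf_network, texture_field, blend_field, renderer):
        super().__init__()
        self.anchor_generator = anchor_generator      # pretrained 2D human generator (prior)
        self.triplane_generator = triplane_generator  # hybrid feature module
        self.sdf_network = sdf_network                # geometry module (reconstruction prior + SDF adaptation)
        self.texture_field = texture_field            # per-point appearance
        self.blend_field = blend_field                # per-point blending weights
        self.renderer = renderer                      # sphere-tracing-based volume renderer

    def forward(self, z, camera):
        # 1. Hybrid feature module: an anchor image from the 2D prior plus
        #    tri-plane features conditioned on the same latent code.
        anchor_image, w = self.anchor_generator(z)
        triplanes = self.triplane_generator(w, anchor_image)

        # 2. Geometry module: an SDF adapted from the 3D reconstruction prior,
        #    queried with the tri-plane features and the anchor image.
        def sdf_fn(points):
            return self.sdf_network(points, triplanes, anchor_image)

        # 3. Texture module: sphere tracing locates surface points, texture and
        #    blending-weight fields are sampled there, and the rendering is
        #    blended with the anchor image in two stages.
        surface_points = self.renderer.sphere_trace(sdf_fn, camera)
        colors = self.texture_field(surface_points, triplanes)
        weights = self.blend_field(surface_points, triplanes)
        return self.renderer.blend(colors, weights, anchor_image, camera)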

Results


Our HumanGen enables photo-realistic human generation with detailed geometry and 360° free-view rendering, learned from 2D images.

Comparison


Application


HumanGen remains compatible with existing off-the-shelf 2D editing toolboxes based on latent disentanglement. Thus, using the anchor image, we can seamlessly upgrade off-the-shelf 2D latent editing methods to our 3D setting; a minimal code sketch follows the examples below.

1. Pose editing with style mixing. Style mixing on 2D images carries over to the generated 3D results.

2. Real image inversion. Real images can be inverted into the latent space to produce corresponding 3D results.

3. Editing in latent space. 2D clothing edits are seamlessly lifted to 3D.

4. Text-guided generation. 3D generation driven by text-guided 2D generation.
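As a rough illustration of how such edits transfer, the following hypothetical sketch applies a latent direction discovered in the 2D generator's space and renders the edited human from several views. The names human_gen, edit_direction, and the call signatures are assumptions for exposition; the key point is that an edit found in the shared latent space also drives the anchor image and hence the 3D radiance field.

# Hypothetical sketch of lifting an off-the-shelf 2D latent edit into 3D.
def edit_and_render(human_gen, z, edit_direction, strength, cameras):
    """Apply a 2D-derived latent edit, then render the result from several views."""
    z_edited = z + strength * edit_direction        # e.g. a clothing or pose direction
    # The same edited latent feeds both the anchor image and the 3D generator,
    # so every free-view rendering stays consistent with the 2D edit.
    return [human_gen(z_edited, camera) for camera in cameras]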

Related Links


We compare our method with three NeRF-based generation methods: StyleSDF, EG3D, and GNARF. The related 2D GAN method is StyleGAN-Human, and the related reconstruction methods are PIFu and PIFuHD.

Bibtex


@inproceedings{jiang2023humangen,
  title     = {{HumanGen}: Generating Human Radiance Fields with Explicit Priors},
  author    = {Jiang, Suyi and Jiang, Haoran and Wang, Ziyu and Luo, Haimin and Chen, Wenzheng and Xu, Lan},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages     = {12543--12554},
  year      = {2023}
}