Abstract

Although 3D face generation is extensively studied in computer vision, most existing methods prioritize reconstructing 3D geometry from available 2D or 3D inputs rather than generating novel faces directly from latent representations. To bridge this gap, we present Adversarial Volumetric Convolutional Neural Networks (AVCNN), a tailored adaptation of the vanilla 3D Generative Adversarial Network (3D-GAN), and apply it to 3D face generation from latent-space Gaussian embeddings. We first assemble a custom 3D facial dataset that provides the requisite facial characteristics and ensures sufficient coverage of geometric variation across identities. The generator, implemented as a decoder, maps latent-space Gaussian embeddings to volumetric 3D faces, enabling the direct synthesis of complete facial shapes without intermediate reconstruction cues. In parallel, a discriminator is trained adversarially to refine realism and structural detail, improving overall fidelity and reducing artifacts. Quantitative evaluation yields an average Chamfer distance of 2.78, indicating strong correspondence between generated and real 3D faces across the dataset. Taken together, these results validate the capacity of the proposed AVCNN framework, a domain-specialized adaptation of the vanilla 3D-GAN for unconditional 3D face synthesis from Gaussian latent embeddings, to produce high-fidelity, randomly generated 3D faces with consistent quality in a feasibility-oriented proof of concept. The approach offers a generalizable generative solution readily applicable to downstream domains, including virtual reality, gaming, biometric authentication, and personalized avatar creation, where reliable synthesis of diverse 3D facial geometry is essential.
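The abstract reports an average Chamfer distance of 2.78 between generated and real faces. The paper's exact convention (squared vs. root distances, normalization) is not stated here, so the following is only a minimal sketch of one common symmetric formulation, using NumPy and hypothetical point-cloud inputs:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3).

    One common convention: for each point in one set, take the squared
    distance to its nearest neighbour in the other set, average within
    each direction, and sum the two directional terms. The paper may use
    a different convention (e.g. root distances or a different reduction).
    """
    # Pairwise squared distances, shape (N, M), via broadcasting.
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    # a -> b term plus b -> a term.
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# Single points one unit apart: each directional term is 1.0.
a = np.array([[0.0, 0.0, 0.0]])
b = np.array([[1.0, 0.0, 0.0]])
print(chamfer_distance(a, b))  # → 2.0

# Identical clouds score exactly zero.
pts = np.random.rand(128, 3)
print(chamfer_distance(pts, pts))  # → 0.0
```

In practice the metric would be computed between voxel grids converted to surface point clouds (or their extracted meshes); brute-force pairwise distances as above are fine for small clouds, while larger ones typically use a KD-tree nearest-neighbour query.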
