Hüseyin Temiz


ML/CV Engineer @Yapı Kredi Teknoloji
PhD Cand. @CmpE Boğaziçi Uni.

3D Gaussian Splat Avatars Utilizing UV Mapping

In this blog post, we examine how four recent methods utilize UV mapping within Gaussian Splat avatar pipelines. Many modern approaches convert the 3D mesh representation into a 2D UV-mapped domain to address the limitations of working directly with 3D meshes. By leveraging UV mapping, these pipelines achieve more uniform sampling, intuitive 2D editing, and efficient data compression—advantages not easily attainable with raw 3D geometry. Each method in the literature integrates UV mapping differently to maximize these benefits. Here, we explore and compare their strategies.

UV Mapping in Each Method

| Method | How UV Is Used |
| --- | --- |
| FlashAvatar [1] | Canonical field initialization: sampling Gaussians uniformly over the UV map yields a far more even distribution across face, hair, and neck regions. Mesh-driven motion uses UV→barycentric mapping to attach Gaussians to the mesh and apply dynamic offsets. |
| FATE [2] | Sampling-based densification ensures even Gaussian placement by sampling uniformly in UV space. Neural baking converts discrete Gaussians into continuous attribute maps (position, color, opacity) in UV, enabling intuitive texture-style editing of the splats. |
| GEM [3] | Gaussian eigenbasis construction: a UNet predicts multi-channel Gaussian parameter maps in the FLAME UV domain, where each pixel encodes one splat's full parameters. Linear distillation builds compact eigenbases over these UV maps for real-time synthesis. |
| MeGA [4] | A UV displacement map driven by FLAME parameters captures per-vertex geometric detail. Neural textures (diffuse, view-dependent, dynamic) live in UV space for high-fidelity appearance. Deferred rendering rasterizes the refined UV mesh + textures before blending with Gaussian hair. |
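The UV-to-3D attachment shared by FlashAvatar and FATE can be sketched in a few lines. This is a minimal, hypothetical example (NumPy, with a toy two-triangle "mesh" standing in for FLAME): points are sampled uniformly in UV space, located in a UV triangle, and lifted to the 3D surface via barycentric coordinates.

```python
import numpy as np

def barycentric(p, tri):
    """Barycentric coordinates of 2D point p w.r.t. triangle tri (3x2)."""
    a, b, c = tri
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - v - w, v, w])

# Toy "mesh": two triangles whose UV coordinates cover the unit square.
uv_verts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
xyz_verts = np.array([[0, 0, 0], [2, 0, 0], [2, 2, 1], [0, 2, 1]], float)
faces = np.array([[0, 1, 2], [0, 2, 3]])

rng = np.random.default_rng(0)
samples = rng.uniform(0, 1, size=(256, 2))       # uniform in UV space

points_3d = []
for p in samples:
    for f in faces:
        bc = barycentric(p, uv_verts[f])
        if (bc >= -1e-9).all():                  # p lies inside this face
            points_3d.append(bc @ xyz_verts[f])  # lift to the 3D surface
            break
points_3d = np.asarray(points_3d)
print(points_3d.shape)
```

Because the sampling happens in the flat UV plane, the resulting splat density on the surface is independent of how irregular the mesh's vertex spacing is, which is exactly the property FlashAvatar exploits for even face/hair/neck coverage.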

Similarities & Differences

Use of UV Space

All four methods decouple 3D Gaussian placement and attribute editing by projecting onto a shared 2D UV domain, avoiding the irregular sampling and distortion issues of raw mesh faces.

Mesh Dependence

All four UV-based pipelines still rely on a parametric head mesh (usually FLAME) to establish UV→3D correspondences and to drive animation via blendshapes or skinning; only pure mesh-driven strategies do without the UV indirection.

Editing Workflow

UV domains unlock powerful 2D editing: painting, diffusion-model edits, or neural filters—far more intuitive than manipulating unstructured 3D point clouds.
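To make the editing claim concrete, here is a minimal sketch assuming a hypothetical per-splat color attribute map stored as a UV image: a region edit is just 2D array manipulation, and each edited texel corresponds to one splat in 3D.

```python
import numpy as np

# Hypothetical per-splat color attribute map in UV space (H x W x 3, RGB in [0, 1]).
H, W = 64, 64
color_map = np.full((H, W, 3), 0.5)

# "Paint" a rectangular UV region redder, exactly as one would edit a texture.
u0, u1, v0, v1 = 10, 30, 20, 50
color_map[v0:v1, u0:u1, 0] = np.clip(color_map[v0:v1, u0:u1, 0] + 0.4, 0, 1)

# Since each UV texel maps to one splat, the 2D edit propagates to 3D directly.
print(color_map[25, 15], color_map[0, 0])
```

The same slicing-based workflow extends to any 2D tool, which is why FATE's neural baking step (discrete Gaussians → continuous UV attribute maps) is what makes its texture-style editing possible.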

Advantages of UV Mapping over Direct Mesh Sampling

  1. Uniform Density Control: UV sampling lets you enforce even coverage of splats (e.g., FlashAvatar), sidestepping the uneven vertex spacing and face-size distortions of direct 3D sampling.

  2. 2D Editing Toolchain: Attribute maps in UV space can be edited with mature 2D tools (painting, diffusion models, or neural filters), making appearance tweaks as straightforward as texture work on polygonal models.

  3. Compact Representation & Acceleration: UV maps lend themselves to image-based compression and CNN/UNet processing (as in GEM), enabling low-dimensional bases and fast inference on commodity hardware.

  4. Decoupling Geometry & Appearance: By separating UV displacement for shape from UV textures for color and dynamics (MeGA), pipelines reduce rendering and editing complexity.

  5. Stable Barycentric Mapping: Sampling in UV and then projecting via barycentric coordinates (FATE) avoids mesh discretization artifacts and ensures smooth, differentiable optimization of Gaussian parameters.
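The compact-representation point (3) can be illustrated with plain PCA over flattened UV parameter maps, in the spirit of GEM's linear distillation. The data below is synthetic and the map size arbitrary, so this only sketches the structure, not the actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in dataset: 200 frames of a tiny 16x16 UV parameter map with 8
# channels per texel (positions, rotations, ...), flattened per frame.
# Constructed as low-rank signal + small noise to mimic correlated motion.
n_frames, h, w, c = 200, 16, 16, 8
signal = rng.normal(size=(n_frames, 5)) @ rng.normal(size=(5, h * w * c))
frames = signal + 0.01 * rng.normal(size=(n_frames, h * w * c))

# "Distillation": mean map + top-k eigenbasis via SVD of the centered frames.
mean = frames.mean(axis=0)
_, _, vt = np.linalg.svd(frames - mean, full_matrices=False)
k = 5
basis = vt[:k]                        # k eigen "maps", each of size h*w*c

# Real-time synthesis: any frame reduces to mean + a k-dim coefficient vector.
coeffs = (frames[0] - mean) @ basis.T
recon = mean + coeffs @ basis
err = np.abs(recon - frames[0]).max()
print(err)  # small: a handful of coefficients captures the variation
```

The payoff is the same as in GEM: instead of storing or predicting full per-splat parameter maps every frame, an animation only needs a short coefficient vector per frame plus the fixed eigenbasis.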

Conclusion

In sum, UV parameterizations transform complex 3D Gaussian splat operations into well-behaved 2D workflows, yielding more uniform sampling, intuitive editing, and powerful opportunities for compression and acceleration that mesh-only approaches struggle to match.

References

[1] FlashAvatar: Xiang, Jun, et al. “FlashAvatar: High-Fidelity Head Avatar with Efficient Gaussian Embedding.” CVPR 2024.

[2] FATE: Zhang, Jiawei, et al. “FATE: Full-Head Gaussian Avatar with Textural Editing from Monocular Video.” CVPR 2025.

[3] GEM: Zielonka, Wojciech, et al. “Gaussian Eigen Models for Human Heads.” CVPR 2025.

[4] MeGA: Wang, Cong, et al. “MeGA: Hybrid Mesh-Gaussian Head Avatar for High-Fidelity Rendering and Head Editing.” CVPR 2025.