LTM: Lightweight Textured Mesh Extraction and Refinement of Large Unbounded Scenes for Efficient Storage and Real-time Rendering

Jaehoon Choi1, Rajvi Shah2, Qinbo Li2, Yipeng Wang2, Ayush Saraf2, Changil Kim2, Jia-Bin Huang1,2, Dinesh Manocha1, Suhib Alsisan2, Johannes Kopf2
1University of Maryland 2Meta

CVPR 2024

Abstract

Advancements in neural signed distance fields (SDFs) have enabled modeling 3D surface geometry from a set of 2D images of real-world scenes. Baking neural SDFs can extract an explicit mesh with appearance baked into texture maps as neural features. The baked meshes still have a large memory footprint and require a powerful GPU for real-time rendering. Neural optimization of such large meshes with differentiable rendering poses significant challenges. We propose a method to produce optimized meshes for large unbounded scenes with a low triangle budget and high fidelity of geometry and appearance. We achieve this by combining advancements in baking neural SDFs with classical mesh simplification techniques and proposing a joint appearance-geometry refinement step. The visual quality is comparable to or better than state-of-the-art neural meshing and baking methods with high geometric accuracy despite a significant reduction in triangle count, making the produced meshes efficient for storage, transmission, and rendering on mobile hardware. We validate the effectiveness of the proposed method on large unbounded scenes from the mip-NeRF 360, Tanks & Temples, and Deep Blending datasets, achieving at-par rendering quality with 73× fewer triangles and an 11× reduction in memory footprint.

Video

Method Overview

We start by training both the appearance representation f_color and the geometry representation f_sdf for a large-scale scene using volumetric rendering (shown in orange). Then, our method extracts a mesh from f_sdf and significantly simplifies its structure, resulting in the lightweight mesh M (shown in blue). Finally, we jointly optimize the vertex deformation Δx and the appearance model f_color.
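The joint refinement step above can be sketched as a loop that optimizes a per-vertex deformation Δx and the appearance parameters against the same loss. This is a minimal, self-contained illustration only: the actual method renders the mesh differentiably and compares to the input photos, whereas here we substitute a synthetic quadratic loss with hand-written gradients, and all names (`target_verts`, `target_colors`, `lr`, `n_steps`) are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simplified mesh M: vertex positions and per-vertex appearance
# (a stand-in for the texture features produced by f_color).
verts = rng.normal(size=(100, 3))
colors = rng.uniform(size=(100, 3))

# Synthetic "ground truth" standing in for the photometric supervision.
target_verts = verts + 0.1 * rng.normal(size=verts.shape)
target_colors = np.clip(colors + 0.05, 0.0, 1.0)

# Vertex deformation Δx, initialized to zero.
delta_x = np.zeros_like(verts)

lr, n_steps = 0.1, 200
for _ in range(n_steps):
    # L2 residuals on geometry and appearance, optimized jointly.
    geo_residual = (verts + delta_x) - target_verts
    app_residual = colors - target_colors
    # Gradient of 0.5 * ||residual||^2 w.r.t. the parameters is the residual.
    delta_x -= lr * geo_residual
    colors -= lr * app_residual

geo_err = np.abs((verts + delta_x) - target_verts).max()
app_err = np.abs(colors - target_colors).max()
```

After 200 steps both residuals decay geometrically (each step shrinks them by a factor of 1 − lr), so the deformed vertices and refined colors match the targets to high precision. In the real pipeline the geometry and appearance gradients come from a differentiable rasterizer rather than a closed-form loss.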

Interactive Rendering Demo in VR

Related Links

The renderer is largely based on Jiaxiang Tang's great work NeRF2Mesh. We also borrow from Nerfies for their excellent website template.

BibTeX

@inproceedings{choi2024ltm,
    title = {LTM: Lightweight Textured Mesh Extraction and Refinement of Large Unbounded Scenes for Efficient Storage and Real-time Rendering},
    author = {Jaehoon Choi and Rajvi Shah and Qinbo Li and Yipeng Wang and Ayush Saraf and Changil Kim and Jia-Bin Huang and Dinesh Manocha and Suhib Alsisan and Johannes Kopf},
    booktitle = {CVPR},
    year = {2024}
}