Feb. 9, 2024, 5:46 a.m. | Yi-Ting Pan, Chai-Rong Lee, Shu-Ho Fan, Jheng-Wei Su, Jia-Bin Huang, Yung-Yu Chuang, Hung-Kuo Chu

cs.CV updates on arXiv.org

The entertainment industry relies on 3D visual content to create immersive experiences, but traditional methods for creating textured 3D models can be time-consuming and subjective. Generative networks such as StyleGAN have advanced image synthesis, but generating 3D objects with high-fidelity textures is still not well explored, and existing methods have limitations. We propose the Semantic-guided Conditional Texture Generator (CTGAN), producing high-quality textures for 3D shapes that are consistent with the viewing angle while respecting shape semantics. CTGAN utilizes the disentangled …
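As a rough illustration only (not the authors' implementation, whose details are elided above), a semantic-guided conditional texture generator of this kind can be pictured as a StyleGAN-style mapping-plus-synthesis pair whose style code is conditioned on a viewing-angle embedding and a shape-semantics embedding, so textures stay consistent across views of the same shape. The class names, dimensions, and toy decoder below are hypothetical assumptions for the sketch.

# Hypothetical sketch of a view- and semantics-conditioned texture generator.
# This is NOT the CTGAN implementation; names, dimensions, and the backbone
# (a StyleGAN-like mapping + synthesis pair) are illustrative assumptions.
import torch
import torch.nn as nn

class ConditionalTextureGenerator(nn.Module):
    def __init__(self, z_dim=512, w_dim=512, view_dim=3, sem_dim=64):
        super().__init__()
        # Mapping network: latent z plus conditioning signals -> style code w.
        self.mapping = nn.Sequential(
            nn.Linear(z_dim + view_dim + sem_dim, w_dim), nn.LeakyReLU(0.2),
            nn.Linear(w_dim, w_dim), nn.LeakyReLU(0.2),
        )
        # Toy "synthesis" head standing in for a StyleGAN-style decoder that
        # renders a small RGB texture map from the style code.
        self.synthesis = nn.Sequential(
            nn.Linear(w_dim, 3 * 64 * 64), nn.Tanh(),
        )

    def forward(self, z, view_angle, shape_semantics):
        # Conditioning the style code on viewing angle and shape semantics is
        # what ties the generated texture to the view and the shape.
        w = self.mapping(torch.cat([z, view_angle, shape_semantics], dim=-1))
        return self.synthesis(w).view(-1, 3, 64, 64)

# Usage: one 64x64 RGB texture for a single latent / view / semantics triple.
g = ConditionalTextureGenerator()
tex = g(torch.randn(1, 512), torch.randn(1, 3), torch.randn(1, 64))
print(tex.shape)  # torch.Size([1, 3, 64, 64])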
