Categories: 3D
Pricing: Free
Features: No Signup Required, Open Source

GET3D (Nvidia)

GET3D: A Generative Model of High Quality 3D Textured Shapes Learned from Images. Train performant 3D generative models.

As virtual and augmented reality become increasingly prevalent, the demand for high-quality 3D content is on the rise. Creating such content, however, is a time-consuming and resource-intensive process, often requiring a team of skilled artists and engineers. Generative models for 3D content creation have shown promising results in recent years, but the quality and diversity of the generated content remain limited. This article introduces GET3D, a generative model that produces explicit textured 3D meshes with complex topology, rich geometric details, and high-fidelity textures, and explores its key features, including its ability to generate diverse shapes, disentangle geometry and texture, and synthesize novel shapes.

Training GET3D: A Brief Overview

GET3D generates a 3D signed distance field (SDF) and a texture field from two latent codes. The model then uses DMTet to extract a 3D surface mesh from the SDF and queries the texture field at surface points to obtain colors. Training is driven by adversarial losses defined on 2D images: a rasterization-based differentiable renderer produces RGB images and silhouettes, and two 2D discriminators, one for RGB images and one for silhouettes, classify whether their inputs are real or fake. The entire model is end-to-end trainable, enabling the synthesis of high-quality 3D textured meshes.
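To make this structure concrete, here is a minimal, runnable PyTorch sketch of that training loop; it is not NVIDIA's implementation. The real model extracts a mesh with DMTet and renders it with a differentiable rasterizer, so the toy Generator below merely stands in for that whole pipeline to let the GAN skeleton run end to end. Every module name, shape, and hyperparameter here is an assumption.

import torch
import torch.nn as nn
import torch.nn.functional as F

Z_DIM, IMG = 64, 32  # toy latent size and render resolution (assumptions)

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        # One branch per latent code: geometry (SDF field) and texture field.
        self.geo = nn.Sequential(nn.Linear(Z_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, IMG * IMG))
        self.tex = nn.Sequential(nn.Linear(Z_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, 3 * IMG * IMG))
    def forward(self, z_geo, z_tex):
        # Stand-in for: SDF -> DMTet surface extraction -> differentiable
        # rasterization into an RGB image plus a silhouette mask.
        sil = torch.sigmoid(self.geo(z_geo)).view(-1, 1, IMG, IMG)
        rgb = torch.tanh(self.tex(z_tex)).view(-1, 3, IMG, IMG) * sil
        return rgb, sil

def make_disc(in_ch):
    # One 2D discriminator for RGB images, one for silhouettes.
    return nn.Sequential(nn.Conv2d(in_ch, 32, 4, 2, 1), nn.LeakyReLU(0.2),
                         nn.Flatten(), nn.Linear(32 * (IMG // 2) ** 2, 1))

G, D_rgb, D_sil = Generator(), make_disc(3), make_disc(1)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(list(D_rgb.parameters()) + list(D_sil.parameters()), lr=2e-4)

real_rgb = torch.rand(8, 3, IMG, IMG)  # placeholder for a batch of real 2D images
real_sil = (real_rgb.mean(1, keepdim=True) > 0.5).float()

for step in range(2):  # toy loop
    z_geo, z_tex = torch.randn(8, Z_DIM), torch.randn(8, Z_DIM)
    fake_rgb, fake_sil = G(z_geo, z_tex)

    # Discriminator step: classify real vs. rendered (fake) images.
    d_loss = (F.softplus(-D_rgb(real_rgb)).mean() + F.softplus(D_rgb(fake_rgb.detach())).mean()
            + F.softplus(-D_sil(real_sil)).mean() + F.softplus(D_sil(fake_sil.detach())).mean())
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: adversarial losses on the 2D renders flow back
    # through the (differentiable) renderer to both latent branches.
    g_loss = F.softplus(-D_rgb(fake_rgb)).mean() + F.softplus(-D_sil(fake_sil)).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

The structural points match the description above: two latent codes feed separate geometry and texture branches, and two discriminators supply adversarial losses on the RGB renders and on the silhouettes.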

Generating 3D Assets

One of the primary strengths of GET3D is its ability to generate diverse shapes with arbitrary topology and high-quality geometry and texture. The model can generate a wide range of shapes, including cars, chairs, animals, motorbikes, human characters, and buildings. The generated assets demonstrate significant improvements over previous generative models for 3D content creation. These assets can be directly consumed by 3D rendering engines, making them immediately usable in downstream applications.
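As a hypothetical illustration of that last point, once the generator has produced vertices, faces, and surface colors, the asset can be written to a standard interchange format such as OBJ that rendering engines import. The single triangle below is a placeholder for real generator output, and appending per-vertex RGB to vertex positions is a common, though not universal, OBJ extension.

import numpy as np

# Placeholder data standing in for generator output.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=np.float32)
faces = np.array([[0, 1, 2]], dtype=np.int64)          # 0-indexed triangle
colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=np.float32)

with open("asset.obj", "w") as f:
    for v, c in zip(verts, colors):
        # Many tools accept per-vertex RGB appended after the position.
        f.write(f"v {v[0]} {v[1]} {v[2]} {c[0]} {c[1]} {c[2]}\n")
    for tri in faces:
        f.write(f"f {tri[0] + 1} {tri[1] + 1} {tri[2] + 1}\n")  # OBJ is 1-indexed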

Disentanglement between Geometry and Texture

GET3D achieves a good disentanglement between geometry and texture, as demonstrated by the ability to generate shapes with the same geometry latent code but different texture latent codes, and vice versa. This disentanglement allows for more fine-grained control over the generated content, making it easier to modify specific aspects of the generated shapes.
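Reusing the toy Generator and Z_DIM from the training sketch above (a sketch of the idea, not GET3D's actual API), the swap amounts to holding the geometry code fixed while resampling the texture code; the reverse swap re-shapes the same texture.

import torch

G = Generator()
z_geo = torch.randn(1, Z_DIM)                  # one fixed geometry code
z_texs = torch.randn(4, Z_DIM)                 # four different texture codes

# In the toy model the silhouette depends only on z_geo, so all four
# renders share the same shape while the RGB appearance varies.
renders = [G(z_geo, z_tex.unsqueeze(0)) for z_tex in z_texs]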

Latent Code Interpolation and Novel Shape Generation

GET3D also allows for meaningful interpolation between different latent codes. Applying a random walk in the latent space yields smooth transitions between shapes in every category, and local perturbations of a latent code produce similar-looking shapes with slight differences. Furthermore, GET3D can generate novel shapes that are not present in the training data, making it a useful tool for generating new and unique content.
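Again using the toy Generator from the training sketch (an assumption, not the real model), interpolation and local perturbation reduce to simple arithmetic on latent codes:

import torch

G = Generator()
z_a, z_b = torch.randn(1, Z_DIM), torch.randn(1, Z_DIM)

# Interpolation: render along the straight line from z_a to z_b for a
# smooth transition between two shapes.
frames = []
for t in torch.linspace(0.0, 1.0, steps=8):
    z = (1 - t) * z_a + t * z_b
    frames.append(G(z, z))       # same code for both branches, for brevity

# Local perturbation: a nearby code gives a similar-looking variant.
z_near = z_a + 0.05 * torch.randn_like(z_a)
variant = G(z_near, z_near)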

Unsupervised Material Generation and Text-guided Shape Generation

GET3D can also generate materials and produce view-dependent lighting effects in a completely unsupervised manner when combined with DIB-R++. Additionally, the model can be fine-tuned with user-provided text prompts using a directional CLIP loss to generate a large number of meaningful shapes. This text-guided shape generation allows specific shapes to be created from user input, making the model a useful tool for content creation.
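The directional CLIP loss itself (popularized by StyleGAN-NADA) is compact: the shift between the source and generated images in CLIP embedding space should align with the shift between the source and target text prompts. The sketch below is a toy under stated assumptions: a linear projection stands in for a real CLIP image encoder, and random vectors stand in for text embeddings; in practice both would come from an actual CLIP model.

import torch
import torch.nn as nn
import torch.nn.functional as F

D_CLIP = 512                      # typical CLIP embedding width
proj = nn.Linear(3 * 32 * 32, D_CLIP)

def encode_image(img):            # stand-in for a CLIP image encoder
    return proj(img.flatten(1))

def directional_clip_loss(img_src, img_gen, txt_src, txt_tgt):
    # Align the image-space shift with the text-space shift in CLIP space.
    d_img = F.normalize(encode_image(img_gen) - encode_image(img_src), dim=-1)
    d_txt = F.normalize(txt_tgt - txt_src, dim=-1)
    return (1.0 - (d_img * d_txt).sum(dim=-1)).mean()

# Toy usage: in practice the text embeddings would come from a real
# CLIP text encoder and the images from the differentiable renderer.
txt_src, txt_tgt = torch.randn(1, D_CLIP), torch.randn(1, D_CLIP)
img_src = torch.rand(4, 3, 32, 32)
img_gen = torch.rand(4, 3, 32, 32, requires_grad=True)
loss = directional_clip_loss(img_src, img_gen, txt_src, txt_tgt)
loss.backward()                   # gradients flow back to the rendered images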

Conclusion

Industries that rely on 3D content need creation tools that scale in quantity, quality, and diversity, making generative models that produce high-quality textured meshes with complex topology and rich geometric detail increasingly important. GET3D, introduced in a recent paper by researchers from NVIDIA, the University of Toronto, and the Vector Institute, synthesizes explicit textured 3D meshes directly from 2D image collections by combining recent advances in differentiable surface modeling, differentiable rendering, and 2D generative adversarial networks.

GET3D generates diverse shapes with arbitrary topology, high-quality geometry, and texture, and its disentanglement of geometry and texture lets users manipulate each independently. The model can also interpolate smoothly between shapes, produce similar-looking shapes with slight local differences, and generate materials and view-dependent lighting effects in a completely unsupervised manner. Additionally, it can generate meaningful shapes from user-provided text prompts.

GET3D builds on several prior works, including DefTet (learning deformable tetrahedral meshes for 3D reconstruction), nvdiffrec (extracting triangular 3D models, materials, and lighting from images), and Deep Marching Tetrahedra (DMTet), among others. The model achieves significant improvements over previous methods and has potential applications in gaming, virtual reality, film production, and architecture, among other industries. For business inquiries, NVIDIA offers a research licensing program on its website.
