NeRF model

model generation

HyperDiffusion: Generating Implicit Neural Fields with Weight-Space Diffusion

paper

Overfit an MLP to represent each 3D mesh, then use a diffusion model to learn the distribution over the flattened weight-and-bias vectors.
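The flattening step can be sketched as follows — the layer shapes below are illustrative stand-ins, not HyperDiffusion's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-layer, 128-dim occupancy MLP (shapes are illustrative)
layers = [
    (rng.standard_normal((3, 128)),   rng.standard_normal(128)),  # input -> hidden
    (rng.standard_normal((128, 128)), rng.standard_normal(128)),  # hidden -> hidden
    (rng.standard_normal((128, 1)),   rng.standard_normal(1)),    # hidden -> occupancy
]

# Flatten every weight matrix and bias into one 1-D vector: this vector is
# the "sample" that a weight-space diffusion model is trained on.
flat = np.concatenate([np.concatenate([w.ravel(), b.ravel()]) for w, b in layers])
print(flat.shape)  # (17153,) here; the paper's MLPs total roughly 36k parameters
```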

training time

The 3-layer, 128-dim MLPs contain ≈ 36k parameters, which are flattened and tokenized for diffusion. Training uses an Adam optimizer with batch size 32 and an initial learning rate of , which is reduced by 20% every 200 epochs. Training runs for ≈ 4000 epochs until convergence, taking ≈ 4 days on a single A6000 GPU.
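The decay schedule (reduce by 20% every 200 epochs) can be sketched in a few lines; the base learning rate below is a hypothetical placeholder, since the note leaves the actual value blank:

```python
def lr_at_epoch(epoch: int, base_lr: float) -> float:
    """Learning rate after cutting it by 20% every 200 epochs."""
    return base_lr * (0.8 ** (epoch // 200))

base_lr = 1e-4  # hypothetical value; the note does not state the real one
print(lr_at_epoch(0, base_lr))    # base rate, unchanged
print(lr_at_epoch(400, base_lr))  # two decays applied: base * 0.8 * 0.8
```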

GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis

link

Uses a GAN together with NeRF to generate 3D-aware models.

Generator: takes a ray position, view direction, and a latent style code; returns color and density.
Discriminator: takes the image volume-rendered from the generator's outputs and a real image, and predicts the adversarial loss.
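The volume-rendering step that turns the generator's (color, density) samples into the discriminator's input image can be sketched as standard NeRF-style alpha compositing — toy values, not GRAF's actual implementation:

```python
import numpy as np

def volume_render(colors, densities, deltas):
    """Composite per-sample (color, density) along a ray into one pixel color
    using standard NeRF alpha compositing. Inputs here are toy values."""
    alphas = 1.0 - np.exp(-densities * deltas)                       # opacity per sample
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]   # transmittance
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# toy ray: 4 samples, constant red color, increasing density toward the far end
colors = np.tile([1.0, 0.0, 0.0], (4, 1))
densities = np.array([0.1, 0.5, 1.0, 2.0])
deltas = np.full(4, 0.25)
pixel = volume_render(colors, densities, deltas)
print(pixel)  # mostly red; brightness < 1 since the ray is not fully opaque
```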

nvdiffrec

link

Uses an Instant-NGP-style NeRF as the base and DMTet to reconstruct the 3D mesh. The image rendered from the 3D mesh with the neurally generated PBR texture is compared against the original sample.
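The "compare to the original sample" step is a photometric loss between the render and the reference image. A minimal sketch — nvdiffrec actually combines several loss terms, so plain L2 here is a simplified stand-in:

```python
import numpy as np

def photometric_loss(rendered, reference):
    """Mean-squared photometric loss between the mesh+PBR render and the
    reference image (simplified stand-in for nvdiffrec's full loss)."""
    return float(np.mean((rendered - reference) ** 2))

ref = np.full((4, 4, 3), 0.5)
print(photometric_loss(ref, ref))        # 0.0 for identical images
print(photometric_loss(ref + 0.1, ref))  # small positive loss for a brighter render
```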

NeRFReN: Neural Radiance Fields with Reflections

link

Uses two MLPs: one for the reflection component and one for the surface color.

misc

LERF: Language Embedded Radiance Fields

link

CLIP

Encodes text and images into the same latent space.
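Because text and images share one embedding space, matching pairs score higher under cosine similarity than mismatched ones. A toy sketch with hypothetical 3-dim embeddings (real CLIP features are much higher-dimensional):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# hypothetical embeddings: a matching text/image pair vs. an unrelated image
text_emb  = np.array([0.9, 0.1, 0.0])  # e.g. a text prompt
image_emb = np.array([0.8, 0.2, 0.1])  # image matching the prompt
other_emb = np.array([0.0, 0.1, 0.9])  # unrelated image
print(cosine_similarity(text_emb, image_emb) > cosine_similarity(text_emb, other_emb))  # True
```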

Multiscale CLIP Preprocessing

Crop the training images into patches of different sizes and use their CLIP embeddings as ground truth.
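A minimal sketch of the multiscale cropping — the scales and center-crop choice are illustrative assumptions, not LERF's exact pipeline; each crop would then be fed to the CLIP image encoder:

```python
import numpy as np

def multiscale_crops(image, scales=(32, 64, 128)):
    """Extract center crops at several scales from one training image.
    (Scales and centering are illustrative, not LERF's actual settings.)"""
    h, w = image.shape[:2]
    cy, cx = h // 2, w // 2
    crops = []
    for s in scales:
        half = s // 2
        y0, x0 = max(cy - half, 0), max(cx - half, 0)
        crops.append(image[y0:y0 + s, x0:x0 + s])
    return crops

img = np.zeros((256, 256, 3))
for c in multiscale_crops(img):
    print(c.shape)  # one crop per scale: 32, 64, then 128 pixels square
```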

LERF

input: image
output: RGB, density, DINO feature, CLIP feature

Finding the cosine similarity between the input text's CLIP feature and the CLIP feature predicted by LERF reveals the connection between the text and the NeRF scene.
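The text-query step above can be sketched as a per-point relevancy score — toy 2-dim features below; real LERF uses full CLIP-dimensional features:

```python
import numpy as np

def relevancy_map(point_clip_feats, text_feat):
    """Cosine similarity between each point's predicted CLIP feature and the
    query text's CLIP feature, giving per-point relevancy scores.
    (Feature values here are hypothetical toy data.)"""
    p = point_clip_feats / np.linalg.norm(point_clip_feats, axis=-1, keepdims=True)
    t = text_feat / np.linalg.norm(text_feat)
    return p @ t

feats = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])  # 3 scene points
query = np.array([1.0, 0.0])                            # toy text embedding
print(relevancy_map(feats, query))  # first point aligns best with the query
```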