By vrishbhanu28

Learning 3D Representations in Function Space from 2D Images

In this project, we learned 3D representations in a function space from 2D renders. We implemented and experimented with the methods proposed in the academic paper [Occupancy Networks: Learning 3D Reconstruction in Function Space], along with some ideas from the blog post [Implicit Decoder Part 1: 3D Reconstruction].

We used a preprocessed dataset from [Implicit Decoder], which contains the following data:

  • pixels: 2D renders of the 3D models, with 24 renders per model.

  • points: 3D points randomly sampled from the space around each model.

  • values: Ground-truth occupancy labels (1 if the point lies inside the object, 0 otherwise) for the voxel containing each sampled point.

  • voxels: The 3D models in voxel representation.
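To make this layout concrete, here is a minimal sketch of how the four arrays might be sampled into a training batch. All array shapes and the `sample_batch` helper are illustrative assumptions, not the actual dataset dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins for the dataset fields (shapes are assumptions):
n_models, n_views, img_hw = 4, 24, 64
n_points, vox_res = 2048, 32

pixels = rng.random((n_models, n_views, img_hw, img_hw))        # 2D renders
points = rng.uniform(-0.5, 0.5, (n_models, n_points, 3))        # sampled 3D points
values = rng.integers(0, 2, (n_models, n_points, 1))            # occupancy labels
voxels = rng.integers(0, 2, (n_models, vox_res, vox_res, vox_res))

def sample_batch(batch_points=256):
    """Pick one model, one of its 24 renders, and a subset of its points."""
    m = rng.integers(n_models)
    v = rng.integers(n_views)
    idx = rng.choice(n_points, size=batch_points, replace=False)
    return pixels[m, v], points[m, idx], values[m, idx]

img, pts, occ = sample_batch()
print(img.shape, pts.shape, occ.shape)  # (64, 64) (256, 3) (256, 1)
```

Pairing a single render with point/occupancy samples like this is what lets the network learn a mapping from image to occupancy function.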

Using the occupancy network, we were able to reconstruct a 3D model of an object from just a single 2D image.
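The core idea can be sketched as a small decoder that conditions each queried 3D point on a latent code from the image encoder and outputs an occupancy probability. This is a NumPy toy with random weights, purely illustrative of the shape of the computation, not the trained network from the notebook:

```python
import numpy as np

rng = np.random.default_rng(0)

def occupancy_decoder(points, z, weights):
    """Tiny MLP: concatenate each 3D point with the image latent z,
    then predict an occupancy probability in (0, 1)."""
    w1, b1, w2, b2 = weights
    z_tiled = np.broadcast_to(z, (points.shape[0], z.shape[0]))
    x = np.concatenate([points, z_tiled], axis=1)   # condition points on z
    h = np.maximum(x @ w1 + b1, 0.0)                # ReLU hidden layer
    logits = h @ w2 + b2
    return 1.0 / (1.0 + np.exp(-logits))            # sigmoid -> occupancy

latent_dim, hidden = 8, 16
weights = (
    rng.normal(size=(3 + latent_dim, hidden)), np.zeros(hidden),
    rng.normal(size=(hidden, 1)), np.zeros(1),
)
points = rng.uniform(-0.5, 0.5, size=(5, 3))  # query points in space
z = rng.normal(size=latent_dim)               # latent from an image encoder (assumed)
occ = occupancy_decoder(points, z, weights)
print(occ.shape)  # (5, 1)
```

Because the decoder can be queried at any continuous 3D coordinate, a mesh can then be extracted from the learned occupancy function at arbitrary resolution (e.g. with marching cubes), which is the key advantage over fixed-resolution voxel outputs.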



Please see the Colab notebook [Here] for the code and a more in-depth explanation.



