It all started when my good friend Gordon told me about depth-player, a web tool coded by Jaume Sánchez (Clicktorelease) that displays Google Lens Blur images in 3d. As you probably know, a Google Lens Blur image is a container for a reference image and a depth map (and some other parameters like focal length, near plane, far plane, and maybe more). It's cool and all, but it would be really nice if you could simply input a reference image and a depth map instead of having to combine them into a Google Lens Blur image first (using depthy.me).
Here comes Depth Player, where the input is a reference image and its associated depth map. The depth map is assumed to be white for near objects and black for far objects, the exact opposite of a Google Lens Blur depth map. Not a big change from Jaume's version, but I think it makes the depth map display tool more general and more useful.
The 3d model is displayed using central/perspective projection. The projection center is the camera center, and the distance from the image plane to the camera center is equal to the focal length. The 3d points are obtained by considering, for each 2d point of the reference image in the image plane, a ray emanating from the camera center and passing through that 2d point. The position of the 3d point along the ray is determined by its depth as recorded in the depth map. To get the proper location (up to scale), you need to know where the near and far planes are along the axis that goes through the camera center and the center of the reference image in the image plane (aka the principal axis). You also need to know the focal length. The math is very similar to what's happening in Point Cloud Maker 11 (PCM11) since I am basically doing the same thing. The 3d scene is basically encased in a pyramid whose four side edges each pass through the camera center (the center of projection) and one corner of the reference image.
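The unprojection described above can be sketched in a few lines. This is just an illustration with hypothetical parameter names (the actual Depth Player code may differ); it assumes a pinhole model with the principal point at the image center and a normalized depth value where white (1) lands on the near plane and black (0) on the far plane:

```python
def unproject(u, v, d, width, height, focal, near, far):
    """Map pixel (u, v) with normalized depth d (1 = white = near,
    0 = black = far) to a 3d point along the ray through the camera center."""
    # Place the point between the near and far planes along the principal axis.
    z = near + (1.0 - d) * (far - near)
    # Ray through the pixel, measured from the principal point (image center).
    cx, cy = width / 2.0, height / 2.0
    x = (u - cx) * z / focal
    y = (v - cy) * z / focal
    return (x, y, z)
```

A pixel at the image center with a pure white depth value sits on the near plane at (0, 0, near); everything else fans out along its ray, which is why the whole scene fits inside the viewing pyramid mentioned above.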
In Depth Player, you can choose between the following display modes:
- Solid,
- Point cloud, and
- Wireframe.
Personally, I find the "Point cloud" display to be the most useful. The problem with the "Solid" and "Wireframe" render modes is that objects on either side of a depth discontinuity end up connected in the triangular mesh used to display the 3d scene. As a result, you get bad color interpolation at depth discontinuities.
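One common workaround for this (not something Depth Player does, as far as I know) is to simply drop any triangle whose vertices span too large a depth range, so nothing gets stretched across a discontinuity:

```python
def keep_triangle(z0, z1, z2, threshold=0.5):
    """Keep a triangle only if its three vertex depths are close together.

    threshold is a hypothetical cutoff; in practice it would be tuned to
    the scene's depth range.
    """
    return (max(z0, z1, z2) - min(z0, z1, z2)) < threshold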
In Depth Player, you can adjust:
- Focal distance/length,
- Near plane,
- Far plane,
- Smooth mesh,
- Quad size,
- Point size, and
- Downsampling.
Any time you change a parameter, you need to click on "Create model" to see the changes. If you want to go back to the default settings, you have to reload the Depth Player page.
The focal distance, near plane, and far plane need to be adjusted so that the 3d scene that is displayed corresponds to reality (as best as possible).
The parameter "Smooth mesh" controls how smooth the triangular mesh is: the larger "Smooth mesh" is, the more smoothing is applied to the depth map, and therefore the smoother the triangular mesh. As with all things, smoothing should be done in moderation. By default, no smoothing is applied.
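The kind of depth map smoothing described here could look something like the 3x3 box blur below. This is only an illustration of the idea; the actual filter (and how "Smooth mesh" maps to it) may well differ in Depth Player:

```python
def box_blur(depth):
    """One pass of a 3x3 box blur over a 2d list of depth values.

    A larger "Smooth mesh" setting would correspond to more smoothing,
    e.g. repeated passes of a filter like this one.
    """
    h, w = len(depth), len(depth[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += depth[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out
```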
The parameter "Quad size" controls how fine the quadrangular mesh is. Note that the triangular mesh is obtained from the quadrangular mesh by splitting each quad into 2 triangles. The smaller "Quad size" is, the more refined the mesh. You can clearly see that when you switch the display to "Wireframe". Bear in mind that the lower "Quad size" is set, the longer it takes to create the 3d model. Note that "Quad size" has no effect in "Point cloud" display mode. Personally, I would leave that parameter alone (set to 1.0) and use the "Downsampling" parameter instead to control the refinement of the 3d scene.
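The quad-to-triangle split is the standard trick for meshing a pixel grid. A sketch (hypothetical indexing, assuming the vertices are stored row by row) looks like this:

```python
def grid_triangles(cols, rows):
    """Triangle index triplets for a (cols x rows) vertex grid,
    splitting each quad of the grid into two triangles."""
    tris = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c                          # top-left vertex of the quad
            tris.append((i, i + 1, i + cols))         # upper-left triangle
            tris.append((i + 1, i + cols + 1, i + cols))  # lower-right triangle
    return tris
```

A grid of w x h vertices yields (w - 1) x (h - 1) quads and therefore twice that many triangles, which is why a smaller "Quad size" (a denser grid) makes model creation slower.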
The "Point size" parameter controls the size of the points when the display is switched to "Point cloud".
The "Downsampling" parameter controls by how much the original reference image is downsampled (reduced in size) prior to generating the 3d scene. The larger "Downsampling" is set, the coarser the mesh (and therefore the rendered scene) is going to be. If "Downsampling" is set to 1, no downsampling is performed on the reference image. This parameter has an effect in all render modes. If you are in "Point cloud" mode, it is the only way to control the number of points in the 3d scene. Bear in mind that the lower "Downsampling" is set, the longer it takes to create the 3d model.
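To get a feel for the effect on point count, here's the arithmetic under the assumption (my guess at the behavior) that downsampling by n keeps every n-th pixel in each direction:

```python
import math

def point_count(width, height, downsampling):
    """Points left after keeping every n-th pixel in each direction."""
    n = downsampling
    return math.ceil(width / n) * math.ceil(height / n)
```

For a 640x480 reference image, a "Downsampling" of 1 gives 307,200 points while a value of 4 gives only 19,200, so even a modest increase pays off quickly in model creation time.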
The original depth-player (the one Jaume wrote) had an option to upload the 3d scene to Sketchfab and an option to download an .obj file. I removed the direct upload to Sketchfab (I don't like the idea of direct uploads from one site to another). I left the option to save the 3d model as an .obj file, which you can certainly upload to Sketchfab yourself if you want.
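The .obj format is plain text, which is part of why it's such a convenient export target. A minimal writer (a sketch, not Depth Player's actual export code) only needs vertex and face records:

```python
def write_obj(path, vertices, triangles):
    """Write a minimal Wavefront .obj file.

    vertices: list of (x, y, z); triangles: list of 0-based index triplets
    (.obj face indices are 1-based, hence the + 1 below).
    """
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for a, b, c in triangles:
            f.write(f"f {a + 1} {b + 1} {c + 1}\n")
```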
Here's a quick video showing Depth Player in action:
Here's another, more in-depth video showing Depth Player in action:
All credit goes to Jaume Sánchez for his original depth-player. All I did was allow the loading of a reference image + depth map instead of a Google Lens Blur image. I also changed a few other things but nothing too involved.