Automatic depth map generation, stereo matching, multi-view stereo, Structure from Motion (SfM), photogrammetry, 2D to 3D conversion, etc. Check the "3D Software" tab for my free 3D software. Turn photos into paintings like impasto oil paintings, cel-shaded cartoons, or watercolors. Check the "Painting Software" tab for my image-based painting software. Problems running my software? Send me your input data and I will do it for you.
Friday, April 3, 2015
3D Photos - Friendly bunch of geese
Left image.
Right image.
Thanks to Peter Simcoe for the original stereo pair. These are 400 pixels wide. The minimum and maximum disparities are -17 and 7. The disparity range is 24, which is about 1/16 of the width (400 pixels). That's a bit much (the disparity range is best kept at about 1/30 of the width). Here, I am gonna use Depth Map Automatic Generator 2 (DMAG2) to get the depth map because I recently overhauled it (it's significantly faster now, although still slow compared to DMAG5).
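Just to make that arithmetic explicit, here is a tiny Python check using the numbers above (the 1/30 figure is only the guideline mentioned in the text, not a hard rule):

# Disparity range sanity check using the numbers quoted above.
width = 400
min_disparity, max_disparity = -17, 7

disparity_range = max_disparity - min_disparity                  # 24 pixels
print(f"range/width = 1/{width / disparity_range:.1f}")          # about 1/16
print(f"1/30 of the width would be about {round(width / 30)} pixels")  # 13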
Radius = 9, gamma proximity = 17, gamma color = 12, alpha = 0.9, truncation (color) = 30, truncation (gradient) = 2, disparity tolerance = 0. Not sure why I get those artifacts on both sides. Probably because the alignment is far from perfect, even though the images were auto-aligned in StereoPhoto Maker.
Radius = 9, gamma proximity = 10000, gamma color = 10000, alpha = 0.9, truncation (color) = 30, truncation (gradient) = 2, disparity tolerance = 0. By setting gamma proximity and gamma color to infinity, DMAG2 behaves like a classic weight-less window-based local stereo matching algorithm, which always fattens object boundaries. I gotta say though that it's not bad at all, especially considering that the depth map is gonna have to be tweaked anyway in depthy.me or whatever else. In the interface, you can't go past 99 for gamma proximity or gamma color, but I thought it was interesting to see what would happen if they were pushed to infinity. I may have to implement a (hopefully fast) weight-less classic window-based stereo matching algorithm in the near future (probably under the DMAG6 moniker).
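For the curious, here is a minimal Python sketch of the adaptive-support-weight idea behind the two gammas. It is not DMAG2's actual code (the raw cost here is a plain absolute color difference rather than DMAG2's blended color/gradient cost), but it shows why sending both gammas to infinity turns the method into a plain weight-less window average:

import numpy as np

def window_cost(left, right, x, y, d, radius, gamma_color, gamma_prox):
    """Aggregated matching cost for left pixel (x, y) at disparity d.

    left, right: float RGB images of shape (H, W, 3).
    With finite gammas this is adaptive-support-weight aggregation;
    with gamma_color = gamma_prox = np.inf every weight becomes 1 and
    the whole thing degenerates into a weight-less window average.
    """
    h, w, _ = left.shape
    center = left[y, x]
    num = den = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if not (0 <= yy < h and 0 <= xx < w and 0 <= xx + d < w):
                continue
            # Support weight: pixels close in color and in space to the
            # window center count more.
            dcolor = np.abs(left[yy, xx] - center).sum()
            dspace = np.hypot(dx, dy)
            weight = np.exp(-dcolor / gamma_color - dspace / gamma_prox)
            # Raw cost: absolute color difference between the matched pixels.
            cost = np.abs(left[yy, xx] - right[yy, xx + d]).sum()
            num += weight * cost
            den += weight
    return num / den

# For one pixel, the retained disparity is the one with the lowest cost:
# best_d = min(range(d_min, d_max + 1),
#              key=lambda d: window_cost(left, right, x, y, d, 9, 12.0, 17.0))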
Radius = 9, gamma proximity = 17, gamma color = 12, alpha = 0.9, truncation (color) = 256, truncation (gradient) = 100, disparity tolerance = 0. By setting the truncation (color or gradient) to something large, the raw matching cost (color or gradient) is effectively not truncated. Truncation is usually put in place to avoid artificially large matching costs when pixels are occluded.
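Here is what that truncation amounts to, in a few lines of Python. How alpha splits the color and gradient terms is my assumption here (borrowed from the usual cost-volume-filtering formulation); DMAG2's exact convention may differ:

import numpy as np

def blended_cost(color_diff, grad_diff, alpha=0.9,
                 trunc_color=30.0, trunc_grad=2.0):
    """Per-pixel matching cost: each term is capped at its truncation value,
    then the two are blended with alpha. Capping keeps occluded or badly
    mismatched pixels from dominating the window aggregate; setting the caps
    very high (256 and 100 above) effectively disables truncation."""
    c = np.minimum(color_diff, trunc_color)
    g = np.minimum(grad_diff, trunc_grad)
    return (1.0 - alpha) * c + alpha * g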
Radius = 12, gamma proximity = 17, gamma color = 24, alpha = 0.9, truncation (color) = 256, truncation (gradient) = 100, disparity tolerance = 0. Here, I have increased gamma color to get a behavior a bit closer to the classic weight-less window-based stereo matching algorithm. I have also sneaked in an increase in the window radius.
Radius = 12, gamma proximity = 17, gamma color = 36, alpha = 0.9, truncation (color) = 256, truncation (gradient) = 100, disparity tolerance = 0. You don't want to increase gamma color too much, though, otherwise you'll start to get significant fattening of object boundaries.
That's the depth map I uploaded to depthy.me (after having doubled its width to 800 pixels and inverted the colors).
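The doubling and the color inversion can be done in a couple of lines with Pillow if you'd rather not fire up an image editor (the file names below are made up):

from PIL import Image, ImageOps

depth = Image.open("depthmap.png").convert("L")   # hypothetical file name

# Double the width to 800 pixels, keeping the aspect ratio.
new_height = round(depth.height * 800 / depth.width)
resized = depth.resize((800, new_height), Image.LANCZOS)

# Invert the grayscale values (255 - value) to flip the near/far convention,
# which is what depthy.me expects.
ImageOps.invert(resized).save("depthmap_800_inverted.png")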
That's the depth map I got after doing a little bit of painting in depthy.me.
That's the animated gif created by depthy.me.
Wednesday, April 1, 2015
3D Photos - Las Vegas Stratosphere observation deck
Here's a cool stereo pair taken by Peter Simcoe. This is a scaled-down version of the original stereo pair (width = 1200 pixels). Getting a good depth map from that stereo pair is not gonna be easy because, in the foreground, you have windows through which you see the ground way below (basically at infinity) and window supports with no texture.
Left image.
Right image.
Depth map obtained with Depth Map Automatic Generator 5 (DMAG5) using min disparity = -20, max disparity = 20, window radius = 48, alpha = 0.5, truncation (color) = 20, truncation (gradient) = 10, epsilon = 3, smoothing iterations = 1, and disparity tolerance = 4. It's not the greatest but it's a good starting point.
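Side note on epsilon: it looks like the regularization term of a guided-filter style aggregation (that is my reading, not something spelled out in the post). Assuming that is what it is, a bare-bones gray-scale guided filter looks like the sketch below; note that the scale of eps depends on the image value range, so the 1 to 4 values used with DMAG5 don't necessarily transfer directly:

import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius, eps):
    """Gray-scale guided filter (He et al.): smooth src (float array) while
    following the edges of guide (float array of the same shape). A larger
    radius smooths more; a smaller eps hugs the guide's edges more tightly."""
    def box(a):
        return uniform_filter(a, size=2 * radius + 1)
    mean_g, mean_s = box(guide), box(src)
    cov_gs = box(guide * src) - mean_g * mean_s
    var_g = box(guide * guide) - mean_g * mean_g
    a = cov_gs / (var_g + eps)
    b = mean_s - a * mean_g
    return box(a) * guide + box(b)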
I loaded the left image and the inverted depth map into depthy.me and did a little bit of painting on the depth map.
Inverted depth map after painting in depthy.me. I wish there were a way to paint along a straight line, but there is none (that would have helped tremendously to paint in the windows). Note that the depth map depthy.me saved is not 1200 pixels wide but 1000, which is a bit weird.
That's the animated gif depthy.me produced. As you can see, the inpainting is not the greatest even though the depth map is quite good (after painting). There's not much that can be done about that, since the areas that become visible have to be made up by interpolation from the single left image. Of course, the more you move, the worse it gets.
3D Photos - At the music shop
The following is an MPO stereo picture that was given to me by Peter Simcoe. Originally, it was 3567x2016, which is way too big for generating a depth map, so the width got reduced to 1200 pixels. The minimum and maximum disparities were estimated at -32 and 18, respectively.
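The post doesn't say how those two numbers were estimated. One quick way to get a ballpark figure, sketched below with OpenCV (not necessarily how it was done here), is to match a handful of features between the two views and look at the horizontal offsets:

import cv2
import numpy as np

# Ballpark estimate of the disparity range from sparse ORB matches.
# File names are made up; the sign convention of the offsets has to match
# whatever the stereo matcher expects, so flip it if needed.
left = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp_l, des_l = orb.detectAndCompute(left, None)
kp_r, des_r = orb.detectAndCompute(right, None)

matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_l, des_r)

# Keep near-horizontal matches only (the pair is assumed to be aligned).
offsets = []
for m in matches:
    (xl, yl), (xr, yr) = kp_l[m.queryIdx].pt, kp_r[m.trainIdx].pt
    if abs(yl - yr) <= 1.0:
        offsets.append(xl - xr)

offsets = np.array(offsets)
print("estimated disparity range:",
      np.percentile(offsets, 2), "to", np.percentile(offsets, 98))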
Left image.
Right image.
What I did is run Depth Map Automatic Generator 5 (DMAG5), changing parameters here and there, and then pick (what I thought was) the best depth map. The parameter sets I tried are listed below; a scripted version of that kind of sweep is sketched right after the list.
Radius = 12, alpha = 0.9, truncation (color) = 7, truncation (gradient) = 2, epsilon = 4, smoothing iterations = 1, disparity tolerance = 0.
Radius = 12, alpha = 0.9, truncation (color) = 7, truncation (gradient) = 2, epsilon = 4, smoothing iterations = 1, disparity tolerance = 1.
Radius = 12, alpha = 0.9, truncation (color) = 7, truncation (gradient) = 2, epsilon = 4, smoothing iterations = 1, disparity tolerance = 2.
Radius = 12, alpha = 0.9, truncation (color) = 7, truncation (gradient) = 2, epsilon = 4, smoothing iterations = 1, disparity tolerance = 3.
Radius = 24, alpha = 0.9, truncation (color) = 7, truncation (gradient) = 2, epsilon = 4, smoothing iterations = 1, disparity tolerance = 3.
Radius = 24, alpha = 0.9, truncation (color) = 20, truncation (gradient) = 10, epsilon = 4, smoothing iterations = 1, disparity tolerance = 3.
Radius = 24, alpha = 0.9, truncation (color) = 20, truncation (gradient) = 10, epsilon = 3, smoothing iterations = 1, disparity tolerance = 3.
Radius = 24, alpha = 0.9, truncation (color) = 20, truncation (gradient) = 10, epsilon = 2, smoothing iterations = 1, disparity tolerance = 3.
Radius = 24, alpha = 0.9, truncation (color) = 20, truncation (gradient) = 10, epsilon = 1, smoothing iterations = 1, disparity tolerance = 3.
Radius = 36, alpha = 0.9, truncation (color) = 20, truncation (gradient) = 10, epsilon = 1, smoothing iterations = 1, disparity tolerance = 3.
Radius = 12, alpha = 0.9, truncation (color) = 20, truncation (gradient) = 10, epsilon = 1, smoothing iterations = 1, disparity tolerance = 3.
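A sweep like the one above could be scripted instead of done by hand. Here is a rough Python sketch where run_dmag5 is only a placeholder that stands in for however DMAG5 is actually invoked on your machine (that part is deliberately left out); also note the grid covers every combination, whereas the list above changes one knob at a time:

import itertools

def run_dmag5(params, output_name):
    # Placeholder: this is where you would write out DMAG5's settings and
    # launch the program. For now, just show what would be run.
    print(output_name, params)

base = dict(min_disparity=-32, max_disparity=18, alpha=0.9,
            smoothing_iterations=1)

grid = itertools.product(
    (12, 24, 36),          # window radius
    ((7, 2), (20, 10)),    # (truncation color, truncation gradient)
    (4, 3, 2, 1),          # epsilon
    (0, 1, 2, 3),          # disparity tolerance
)

for radius, (trunc_color, trunc_grad), eps, tol in grid:
    params = dict(base, radius=radius, truncation_color=trunc_color,
                  truncation_gradient=trunc_grad, epsilon=eps,
                  disparity_tolerance=tol)
    name = f"depth_r{radius}_t{trunc_color}-{trunc_grad}_e{eps}_d{tol}.png"
    run_dmag5(params, name)   # then eyeball the results and keep the best one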
This last depth map is the one I am gonna use to create an animated 3D gif in depthy.me. First, it needs to be inverted so that black is foreground and white is background (depthy.me wants it that way for some reason).
Depth map after color inversion.
What I do in depthy.me is load up the left image and the depth map, and then touch up the depth map in areas that are clearly at the wrong depth (you don't have to touch up all of them, only the ones that produce weird artifacts when the gif is animated). In paint mode, I click on "Level" so that I can easily change the depths (it eye-drops the color where you first click, in other words, it kinda lets you paint at the right depth without having to eye-drop first). By clicking on "Preview", you can see the effect of any change you made. If you make a mistake, it's easy to go back by clicking on "Undo". It's pretty primitive but it kinda works. Note that even if the depth map is 100% correct, the animation may still look weird because of the way depthy.me inpaints the areas that become visible. There's not much you can do about that, since you would need the right image and its depth map to inpaint more accurately.
Touched up (in depthy.me) depth map.
Animation produced by depthy.me. I chose "Large", "2s", "Horizontal", "Dramatic", "Near" for the gif creation options.