Saturday, February 6, 2021

3d photo inpainting using Artificial Intelligence (AI)

As you probably know, when you have an image and its associated depth map, whenever the point of view changes, areas in the background get disoccluded, that is, they become visible. If you are a fan of Facebook 3d photos, you may have observed that these disoccluded areas get blurred. Some people (not I) are not too keen on this effect and would prefer to see the background magically appear out of thin air. Well, apparently, AI (Artificial Intelligence) can take care of that. So, not only can AI generate depth maps from single images, it can also fill the disoccluded areas. Pretty neat, I must say, if the results are up to the hype.
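To see where those disoccluded areas come from, here's a toy Python sketch (not the paper's actual rendering code, just an illustration): each pixel gets shifted horizontally by a disparity proportional to inverse depth, and any target pixel that never receives a source pixel is a hole that has to be filled, whether by blurring or by inpainting.

import numpy as np

def forward_warp(image, depth, baseline=10.0):
    """Warp `image` to a shifted viewpoint using `depth` (larger = farther).

    Returns the warped image and a boolean mask of disoccluded pixels,
    i.e. the holes an inpainting method would need to fill.
    """
    h, w = depth.shape
    warped = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    # Disparity is inversely proportional to depth: near pixels move more.
    disparity = (baseline / np.maximum(depth, 1e-6)).astype(int)
    # Process pixels far-to-near so nearer pixels overwrite farther ones.
    order = np.argsort(-depth, axis=None)
    ys, xs = np.unravel_index(order, depth.shape)
    for y, x in zip(ys, xs):
        nx = x + disparity[y, x]
        if 0 <= nx < w:
            warped[y, nx] = image[y, x]
            filled[y, nx] = True
    holes = ~filled  # these are the disoccluded areas
    return warped, holes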

This paper: "3D Photography using Context-aware Layered Depth Inpainting" by Meng-Li Shih et al. promises that inpainting can be done realistically with AI. There's a Google Colab for it, which means we can check it out right there in the browser thanks to Google without installing anything and without the need for a GPU card. In the Google Colab implementation, they use MiDaS to get a depth map from a given reference image and then do extreme inpainting using AI. The output of 3d photo inpainting is the MiDaS depth map, a point cloud of the 3d scene, and four videos that kinda show off the inpainting (two of the zoom type a la Ken Burns and two of the wiggle/wobble type). To visualize the point cloud, which is in the ply format, you can use MeshLab or CloudCompare (preferred). Note that the depth map doesn't have to come from MiDaS: you can certainly use your own depth map (although you may have to blur it).
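If you'd rather poke at the ply file in Python instead of MeshLab or CloudCompare, a few lines of the open3d package will do it (the file name below is just an example; use whatever the notebook wrote to its output folder):

import open3d as o3d  # pip install open3d

pcd = o3d.io.read_point_cloud("mesh/moon.ply")  # example output path
print(pcd)  # reports how many points were loaded
o3d.visualization.draw_geometries([pcd])  # opens an interactive 3d viewer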

Here's a video that explains how to run the Google Colab Python notebook. First, I let the software use MiDaS to create the depth map. Then, I bypass MiDaS and use my own depth map, which I created with SPM:



If you use your own depth map, make sure that it is grayscale and that it is smooth enough. If your depth map is not smooth, it's going to take forever and Google Colab might disconnect you before the videos are created. I explain all that in the video.
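Here's a minimal sketch of that preparation step with OpenCV, assuming a depth map saved as my_depthmap.png (the file name and blur radius are just examples): load it as single-channel grayscale and give it a Gaussian blur so the depth edges aren't too harsh.

import cv2

# Force a single-channel grayscale read, even if the file is RGB.
depth = cv2.imread("my_depthmap.png", cv2.IMREAD_GRAYSCALE)
# Smooth out noisy depth edges; the kernel size must be odd.
depth = cv2.GaussianBlur(depth, (9, 9), 0)
cv2.imwrite("my_depthmap_smooth.png", depth)

If the result is still too noisy, increase the kernel size (keeping it odd) until the depth map looks smooth enough.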

We all know that MiDaS can create great depth maps from single images. Check this post if you are not yet convinced: Getting depth maps from single images using Artificial Intelligence (AI). It's the inpainting we were not too sure about... until now. I've gotta say that the filling of the disoccluded areas looks quite realistic even when the point of view changes drastically. That AI is really doing wonders and it will only get better as the data sets used to train the neural networks get bigger.

6 comments:

  1. I like your post, it looks very interesting, so keep posting in the future.

    ReplyDelete
  2. Hi, we love your work. I was suddenly wondering: what if we extract a stereo pair from a video? Micheal Brown from YouTube made me realise how cheap AI depth estimation was compared to a real stereo pair. Meanwhile, most of what we have in pictures now we also have in video, so wouldn't it be possible to extract the stereo pair from a video somehow?

    ReplyDelete
    Replies
    1. if the subject is static, you can extract pretty much any 2 images from a video as long as it is pointed at the subject (see the sketch below for one way to grab the frames). You align/rectify the 2 images using something like ER9b and that gives you a stereo pair. It's a bit like photogrammetry but restricted to just 2 images.
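       For what it's worth, here's a rough OpenCV sketch of grabbing 2 frames from a video (the file name and frame indices are just examples; pick 2 frames with a small horizontal shift between them):

       import cv2

       cap = cv2.VideoCapture("clip.mp4")
       for name, index in [("left.png", 100), ("right.png", 110)]:
           cap.set(cv2.CAP_PROP_POS_FRAMES, index)  # jump to the frame
           ok, frame = cap.read()
           if ok:
               cv2.imwrite(name, frame)
       cap.release()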

      Delete
  3. This is really an awesome blog. Thank you for your time and effort. Keep posting.

    ReplyDelete

  4. As a blog reader, I'm fascinated by the concept of 3D photo inpainting using AI. It's intriguing to see how AI can handle the disoccluded areas in images, offering an alternative to the commonly observed blurring effect. The paper you mentioned, "3D Photography using Context-aware Layered Depth Inpainting," sounds promising.

    What's particularly convenient is the availability of a Google Colab for experimenting with this AI-based inpainting technique directly in a browser without GPU card requirements. It's great that you can use your own depth map or MiDaS for the process. However, to enhance the blog's visual appeal and help readers grasp the concept better, including a 3d rendering or before-and-after images showcasing the inpainting results would be beneficial. Visual demonstrations can make complex AI concepts more accessible to a broader audience.

    ReplyDelete
  5. Hello Ugo, unfortunately I cannot run your detailed tutorial in Colab.
    Just seconds after clicking on 'Runtime' and 'Run all', this error message appears:
    'Looking in links: https://download.pytorch.org/whl/torch_stable.html
    ERROR: Could not find a version that satisfies the requirement torch==1.4.0+cu100 (from versions: 1.11.0, 1.11.0+cpu, 1.11.0+cu102, 1.11.0+cu113, 1.11.0+cu115, 1.11.0+rocm4.3.1, 1.11.0+rocm4.5.2, 1.12.0, 1.12.0+cpu, 1.12.0+cu102, 1.12.0+cu113, 1.12.0+cu116, 1.12.0+rocm5.0, 1.12.0+rocm5.1.1, 1.12.1, 1.12.1+cpu, 1.12.1+cu102, 1.12.1+cu113, 1.12.1+cu116, 1.12.1+rocm5.0, 1.12.1+rocm5.1.1, 1.13.0, 1.13.0+cpu, 1.13.0+cu116, 1.13.0+cu117, 1.13.0+cu117.with.pypi.cudnn, 1.13.0+rocm5.1.1, 1.13.0+rocm5.2, 1.13.1, 1.13.1+cpu, 1.13.1+cu116, 1.13.1+cu117, 1.13.1+cu117.with.pypi.cudnn, 1.13.1+rocm5.1.1, 1.13.1+rocm5.2, 2.0.0, 2.0.0+cpu, 2.0.0+cpu.cxx11.abi, 2.0.0+cu117, 2.0.0+cu117.with.pypi.cudnn, 2.0.0+cu118, 2.0.0+rocm5.3, 2.0.0+rocm5.4.2, 2.0.1, 2.0.1+cpu, 2.0.1+cpu.cxx11.abi, 2.0.1+cu117, 2.0.1+cu117.with.pypi.cudnn, 2.0.1+cu118, 2.0.1+rocm5.3, 2.0.1+rocm5.4.2, 2.1.0, 2.1.0+cpu, 2.1.0+cpu.cxx11.abi, 2.1.0+cu118, 2.1.0+cu121, 2.1.0+cu121.with.pypi.cudnn, 2.1.0+rocm5.5, 2.1.0+rocm5.6, 2.1.1, 2.1.1+cpu, 2.1.1+cpu.cxx11.abi, 2.1.1+cu118, 2.1.1+cu121, 2.1.1+cu121.with.pypi.cudnn, 2.1.1+rocm5.5, 2.1.1+rocm5.6, 2.1.2, 2.1.2+cpu, 2.1.2+cpu.cxx11.abi, 2.1.2+cu118, 2.1.2+cu121, 2.1.2+cu121.with.pypi.cudnn, 2.1.2+rocm5.5, 2.1.2+rocm5.6, 2.2.0, 2.2.0+cpu, 2.2.0+cpu.cxx11.abi, 2.2.0+cu118, 2.2.0+cu121, 2.2.0+rocm5.6, 2.2.0+rocm5.7)
    ERROR: No matching distribution found for torch==1.4.0+cu100
    ERROR: Could not find a version that satisfies the requirement opencv-python==4.2.0.32 (from versions: 3.4.0.14, 3.4.10.37, 3.4.11.39, 3.4.11.41, 3.4.11.43, 3.4.11.45, 3.4.13.47, 3.4.15.55, 3.4.16.57, 3.4.16.59, 3.4.17.61, 3.4.17.63, 3.4.18.65, 4.3.0.38, 4.4.0.40, 4.4.0.42, 4.4.0.44, 4.4.0.46, 4.5.1.48, 4.5.3.56, 4.5.4.58, 4.5.4.60, 4.5.5.62, 4.5.5.64, 4.6.0.66, 4.7.0.68, 4.7.0.72, 4.8.0.74, 4.8.0.76, 4.8.1.78, 4.9.0.80)
    ERROR: No matching distribution found for opencv-python==4.2.0.32
    Collecting vispy==0.6.4
    Using cached vispy-0.6.4.tar.gz (13.3 MB)
    error: subprocess-exited-with-error'
    At the end when executing the script you get the message:
    Traceback (most recent call last):
    File '/content/3d-photo-inpainting/main.py', line 6, in <module>
    import vispy
    ModuleNotFoundError: No module named 'vispy'

    What could I do in this case?

    ReplyDelete