Further development of the depth camera simulation #187

Open
Plavit opened this issue Jul 26, 2020 · 0 comments
Labels
enhancement New feature or request

Comments

Plavit commented Jul 26, 2020

Following up on issue #153, which added depth cameras in pull request #181. General information on how the depth cameras work can be found in #153.

It might not be too computationally expensive to simulate the infrared "projected points" and then interpolate between them. That would reproduce the real accuracy gradient: high precision close up, lower precision far away. The Intel RealSense camera runs such an algorithm on-board in real time; basic info here:
https://www.intelrealsense.com/stereo-depth-vision-basics/

Sample:

import numpy
import cv2

from matplotlib import pyplot as plt

left  = cv2.imread("l_active.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("r_active.png", cv2.IMREAD_GRAYSCALE)

fx = 942.8        # lens focal length
baseline = 54.8   # distance in mm between the two cameras
disparities = 128 # num of disparities to consider
block = 31        # block size to match
units = 0.512     # depth units, adjusted for the output to fit in one byte

sbm = cv2.StereoBM_create(numDisparities=disparities,
                          blockSize=block)

# calculate disparities (note: StereoBM returns fixed-point disparities scaled by 16)
disparity = sbm.compute(left, right)
valid_pixels = disparity > 0

# calculate depth data
depth = numpy.zeros(shape=left.shape).astype("uint8")
depth[valid_pixels] = (fx * baseline) / (units * disparity[valid_pixels])

# visualize depth data
depth = cv2.equalizeHist(depth)
colorized_depth = numpy.zeros((left.shape[0], left.shape[1], 3), dtype="uint8")
temp = cv2.applyColorMap(depth, cv2.COLORMAP_JET)
colorized_depth[valid_pixels] = temp[valid_pixels]
plt.imshow(colorized_depth)
plt.show()

Input: left lens image (l_active) and right lens image (r_active)

Output: colorized depth map image

Source:
source.zip
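On the accuracy gradient mentioned above: stereo depth is z = fx * baseline / disparity, so one disparity quantization step dd turns into a depth error of roughly dz = z^2 * dd / (fx * baseline), i.e. error grows quadratically with distance. A quick sketch using the sample's camera constants; the 1/16 px step is OpenCV's StereoBM fixed-point resolution, and the quadratic model is the standard stereo approximation rather than something measured here.

```python
import numpy as np

fx = 942.8              # focal length in pixels (same as the sample above)
baseline = 54.8         # baseline in mm
disp_step = 1.0 / 16.0  # StereoBM disparity resolution (1/16 px fixed point)

# One disparity step translates into a depth error that grows
# quadratically with depth: dz = z^2 * disp_step / (fx * baseline)
z = np.array([500.0, 1000.0, 2000.0, 4000.0])  # depths in mm
dz = z ** 2 * disp_step / (fx * baseline)

for zi, dzi in zip(z, dz):
    print(f"depth {zi:6.0f} mm -> quantization error ~ {dzi:6.2f} mm")
```

This is exactly the gradient a good simulation should reproduce: sub-millimetre error at half a metre, but centimetres of error at four metres.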

@SijmenHuizenga SijmenHuizenga added the enhancement New feature or request label Jul 26, 2020