
How to get started with new images? #19

Open
Codeguyross opened this issue Jan 9, 2020 · 13 comments


@Codeguyross

Hello,

If I wanted to start with a fresh set of images and use this code to generate a mesh.ply file, what steps do I need to take to get the new images ready? I'm new to this area of study, so any help getting going is appreciated!!

Thanks!

@plutoyuxie

Hi, the pairs of color images and depth images should be aligned.
You need to calculate the camera pose for each frame. ICP may be helpful.
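The two inputs named above (aligned depth images plus a per-frame camera pose) feed TSDF fusion by back-projecting each depth pixel into world coordinates. A minimal numpy sketch, assuming a pinhole camera with made-up intrinsics (`fx`, `fy`, `cx`, `cy` are hypothetical values, not from this repo):

```python
import numpy as np

def depth_to_world(depth, K, pose):
    """Back-project an aligned depth image into world coordinates.

    depth : (H, W) depth map in metres (aligned to the color image)
    K     : (3, 3) pinhole intrinsic matrix
    pose  : (4, 4) camera-to-world transform for this frame (e.g. from ICP)
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))  # pixel coordinates
    z = depth
    x = (u - K[0, 2]) * z / K[0, 0]                 # (u - cx) * z / fx
    y = (v - K[1, 2]) * z / K[1, 1]                 # (v - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)
    return (pose @ pts_cam.T).T[:, :3]              # homogeneous -> world xyz

# Example: identity pose, hypothetical Kinect-like intrinsics
K = np.array([[525.0,   0.0, 320.0],
              [  0.0, 525.0, 240.0],
              [  0.0,   0.0,   1.0]])
pose = np.eye(4)
depth = np.full((480, 640), 2.0)   # a flat wall 2 m in front of the camera
pts = depth_to_world(depth, K, pose)
```

If the poses are in a different scale or coordinate convention than the depth maps, the fused points end up stretched, which matches the elongated meshes described later in this thread.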

@YJonmo

YJonmo commented Apr 16, 2020

> Hi, the pairs of color images and depth images should be aligned.
> You need to calculate the camera pose for each frame. ICP may be helpful.

I used Kinect Fusion to align the depth images and got the camera poses from there. But when I use TSDF with those poses and depths, it just doesn't work.

@Viky397

Viky397 commented Jan 25, 2021

Hi! I have my .pcd point clouds saved, and I use ICP to calculate the poses. However, running tsdf.py, the output mesh looks quite odd and elongated. Any tips would be welcome, thank you!

@YJonmo

YJonmo commented Jan 25, 2021

Maybe try scaling your pose info, i.e., divide it by 2 or 3.

@Viky397

Viky397 commented Feb 1, 2021

Hello. I tried dividing by 2 or 3, but this just makes the output mesh look more sparse, and doesn't help with the oddly elongated output shape.

@YJonmo

YJonmo commented Feb 1, 2021

Did you divide the translation?
Only the translation should be divided or multiplied by a scaling coefficient.
Keep in mind, the x axis is rightward, y is downward, and z is outward.

@Viky397

Viky397 commented Feb 4, 2021

I used the following to get the transformation:

```python
reg_p2p = o3d.registration.registration_icp(
    source, target, threshold, trans_init,
    o3d.registration.TransformationEstimationPointToPoint())
```

And then I save the following to a .txt file, dividing by a scaling factor:

```python
reg_p2p.transformation / 3
```

@YJonmo

YJonmo commented Feb 4, 2021

I guess `reg_p2p.transformation` is a 4×4 transformation matrix?

You only need to divide the translation by the coefficient, i.e., the top three entries of the last column:

```python
reg_p2p.transformation[0:3, -1] = reg_p2p.transformation[0:3, -1] / Coeff
```

In my case, I looked at frames where the motion had no rotation (pure translation), then focused on finding the correct dividing coefficient.
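One practical caveat with the snippet above: in recent Open3D versions the `transformation` attribute of the registration result is a read-only numpy array, so assigning into it in place can raise an error. A minimal sketch with a hypothetical 4×4 pose (the matrix values here are made up for illustration) that copies first and scales only the translation:

```python
import numpy as np

# Hypothetical camera pose: 90-degree yaw plus a translation,
# standing in for reg_p2p.transformation from Open3D's ICP.
T = np.array([[0.0, -1.0, 0.0,  1.5],
              [1.0,  0.0, 0.0, -0.6],
              [0.0,  0.0, 1.0,  3.0],
              [0.0,  0.0, 0.0,  1.0]])

coeff = 3.0
T_scaled = T.copy()          # copy: Open3D's result matrix may be read-only
T_scaled[0:3, 3] /= coeff    # scale ONLY the translation entries,
                             # leaving the rotation block untouched
```

Scaling the whole matrix (as in `reg_p2p.transformation / 3`) would also shrink the rotation block, which is why dividing everything by a constant degrades the mesh instead of fixing the scale.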

@Viky397

Viky397 commented Feb 5, 2021

Hm, still seems very odd. This is the resulting mesh.ply:

[image: screenshot of the resulting mesh]

@YJonmo

YJonmo commented Feb 8, 2021

It is hard to say what is going on just by looking at the image.
Also try giving your depth values some offset, i.e., add a constant to every depth value. For example, if your maximum depth is 255, add 50 to each pixel and see how it looks. Play with the numbers.
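The offset experiment above can be sketched in a few lines of numpy. This is a minimal, hypothetical version (the repo itself has no such helper) that treats zero-valued pixels as missing depth and clips to the 8-bit range so the offset cannot overflow:

```python
import numpy as np

def offset_depth(depth, offset, max_val=255):
    """Shift all valid depth values by a constant, clipping to the sensor range.

    Pixels with depth 0 are treated as invalid/missing and left untouched.
    """
    out = depth.astype(np.int32)          # widen so offset cannot wrap around
    valid = out > 0
    out[valid] = np.clip(out[valid] + offset, 1, max_val)
    return out.astype(depth.dtype)

depth = np.array([[  0, 100, 200],
                  [250,  50,   0]], dtype=np.uint8)
shifted = offset_depth(depth, 50)
```

Widening to `int32` before adding matters here: adding 50 directly to a `uint8` array would wrap 250 around to 44 instead of saturating at 255.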

@Ademord

Ademord commented Jun 18, 2021

@Viky397 did you find a solution to this?

@Viky397

Viky397 commented Jun 18, 2021

@Ademord nope. I switched to Kimera-Semantics instead. Very easy repo to work with in ROS :)

@Ademord

Ademord commented Jun 18, 2021

@Viky397 ahhh, I'm working in Unity, so technically I need to do this by hand :(
