Apply to RGB monocular movie #56
To run our code with your custom data, you'll need to implement a dataset class.
If you have ground-truth poses, you can set the dataset's poses to them.
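A minimal sketch of what such a custom dataset class could look like for RGB-only input. The class and attribute names here are illustrative, not MonoGS's actual API; the key idea from the comment above is that, without ground truth, the poses can be placeholders (identity), while intrinsics and image paths must still be supplied:

```python
import glob
import os

import numpy as np


class CustomMonocularDataset:
    """Illustrative RGB-only dataset sketch (names are assumptions,
    not MonoGS's real dataset interface)."""

    def __init__(self, image_dir, fx, fy, cx, cy):
        # Frames extracted from the movie, sorted by filename.
        self.color_paths = sorted(glob.glob(os.path.join(image_dir, "*.png")))
        self.num_imgs = len(self.color_paths)
        # Pinhole intrinsics of the camera that recorded the movie.
        self.K = np.array([[fx, 0.0, cx],
                           [0.0, fy, cy],
                           [0.0, 0.0, 1.0]], dtype=np.float64)
        # No ground-truth trajectory: use identity poses as placeholders.
        self.poses = [np.eye(4) for _ in range(self.num_imgs)]

    def __len__(self):
        return self.num_imgs

    def __getitem__(self, idx):
        # A real implementation would load and undistort the image here,
        # e.g. with cv2.imread(self.color_paths[idx]).
        return self.color_paths[idx], self.poses[idx]
```

With placeholder poses, any trajectory-evaluation step that compares against ground truth will of course be meaningless and should be skipped, which is what the rest of this thread discusses.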
Thank you very much for your prompt reply and sorry for my late reply.
Thank you for your reply! I set eval_rendering = True and ran with --eval, but I got an error: umeyama_alignment fails, so no .ply is produced in save_folder. The cause seems to be the ground-truth setting. Could you tell me how to handle the ground-truth values (as well as accelerometer.txt) when I don't have them, e.g. for a movie from a monocular camera such as an iPhone? I hope this terminal output helps you understand my problem.
FYI: I printed the error-related variable values to check the cause. It looks like pose_gt is the problem: y and mean_y become 0, which makes umeyama_alignment fail.
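For context, this is roughly why an all-zero ground-truth trajectory breaks the alignment: the Umeyama method estimates a similarity transform from the covariance of the two point sets, and a point set with zero variance is degenerate. A self-contained sketch (not MonoGS's actual implementation; the degeneracy check is added here for illustration):

```python
import numpy as np


def umeyama_alignment(x, y, with_scale=True):
    """Least-squares similarity transform y ~ c * R @ x + t (Umeyama, 1991).
    x, y: 3xN arrays of corresponding 3D points."""
    if x.shape != y.shape:
        raise ValueError("point sets must have the same shape")
    n = x.shape[1]
    mean_x = x.mean(axis=1)
    mean_y = y.mean(axis=1)
    # Mean squared deviations; zero means the point set collapses to a point.
    sigma_x = ((x - mean_x[:, None]) ** 2).sum() / n
    sigma_y = ((y - mean_y[:, None]) ** 2).sum() / n
    if sigma_x == 0.0 or sigma_y == 0.0:
        # e.g. all ground-truth poses are zero: no transform is recoverable.
        raise ValueError("degenerate point set: zero variance")
    cov = (y - mean_y[:, None]) @ (x - mean_x[:, None]).T / n
    u, d, vt = np.linalg.svd(cov)
    s = np.eye(3)
    if np.linalg.det(u) * np.linalg.det(vt) < 0:
        s[2, 2] = -1.0  # reflection correction
    r = u @ s @ vt
    c = np.trace(np.diag(d) @ s) / sigma_x if with_scale else 1.0
    t = mean_y - c * r @ mean_x
    return r, t, c
```

With pose_gt all zeros, y and mean_y are zero exactly as reported above, so the estimate is meaningless and the evaluation step should be skipped instead.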
The easiest fix should be just to comment out the eval_ate call (lines 349 to 356 in 37cebb4).
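As an alternative to deleting the call outright, one could guard it behind a flag so the same code runs with and without ground truth. A hypothetical sketch (eval_ate is the MonoGS function mentioned above; the wrapper and the has_gt_poses flag are illustrative names, not part of the codebase):

```python
def maybe_eval_ate(eval_ate_fn, frames, has_gt_poses):
    """Run trajectory evaluation only when ground-truth poses exist.

    eval_ate_fn: the evaluation callable (e.g. MonoGS's eval_ate).
    frames: whatever frame/trajectory data the callable expects.
    has_gt_poses: flag the dataset would set, e.g. False for a phone movie.
    """
    if not has_gt_poses:
        # Without ground truth, ATE is undefined; skip instead of crashing
        # inside umeyama_alignment on a degenerate all-zero trajectory.
        print("No ground-truth poses available: skipping ATE evaluation.")
        return None
    return eval_ate_fn(frames)
```

This keeps the rendering and .ply export path intact while cleanly bypassing the alignment that fails without ground truth.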
Thanks for the reply.
Of course, here as well (lines 468 to 474 in 37cebb4).
@muskie82
OK, then where is the umeyama alignment called?
It's here.
I mean, where in the MonoGS code is it called?
@RickMaruyama Hi, can I ask how you succeeded in viewing the rendering GUI and implementing dataset.py? Thanks :)
@langsungn |
@RickMaruyama Thank you for replying. I'm okay with a few issues, but would you share your modified 'dataset.py' file with me? Thanks a lot!
@RickMaruyama Can you share your modified 'dataset.py' file with me? Thanks!
@langsungn @leichaoyang
@langsungn |
@muskie82
Thank you very much for your great contribution to visual SLAM!!
I have a question.
How can I apply your code to RGB monocular movie data?
I took an RGB monocular movie and want to render it in .ply format.
[Details]
I have reproduced the code on AWS EC2 (Ubuntu 20.04 / A10 GPU) and can successfully get the GUI with the demo dataset, using the following command:
python slam.py --config configs/mono/tum/fr3_office.yaml
Then I tried the same with my own movie data, which is monocular and RGB only. I provided the RGB images and color info as rgb.txt and ran the same command.
The terminal then asked me for depth.txt.
Even after I confirmed that depth.txt was added to the right folder and ran slam.py again, it still needed "pose_list", probably related to accelerometer.txt and groundtruth.txt.
How can I get these with only an RGB camera?
So my question, again, is: what is the procedure to test on real data captured by a monocular RGB camera?
Thank you for your advice in advance.
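For reference, the TUM RGB-D format that the fr3_office config above follows lists one "timestamp filename" pair per line in rgb.txt, with "#" comment lines. A hedged sketch of generating such a file from a folder of frames extracted from a movie; the timestamps are synthesized from the frame index, since a plain RGB movie carries no real capture times, and the directory layout and fps value are assumptions:

```python
import os


def write_rgb_txt(image_dir, out_path, fps=30.0):
    """Write a TUM-style rgb.txt for a folder of extracted movie frames.

    Each line is "timestamp filename"; timestamps are frame_index / fps,
    which is a stand-in for real capture times.
    """
    frames = sorted(f for f in os.listdir(image_dir) if f.endswith(".png"))
    with open(out_path, "w") as fh:
        fh.write("# timestamp filename\n")
        for i, name in enumerate(frames):
            fh.write(f"{i / fps:.6f} rgb/{name}\n")
```

The same pattern could produce a placeholder depth.txt or groundtruth.txt, but as discussed above, fabricated ground truth only makes the trajectory evaluation fail or report meaningless numbers, so skipping the evaluation is the cleaner route.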