
data #23

Open
herochen7372 opened this issue Jan 21, 2024 · 13 comments

Comments

@herochen7372
Can you tell me how to match the 3D coordinates in the JSON file with the images in Human3.6M?

@Zhuyue0324
Collaborator

We don't use this image path by default; instead, our data_preparation.py script compiles images from the videos.

However, you can check the closed issue #12 for how to convert from our paths to your format.

@Zhuyue0324
Collaborator

[Screenshot, 2024-01-24: File Browser]

@herochen7372
Author

Sorry, I meant this: IDEA-Research/OSX#108

@Zhuyue0324
Collaborator

This is the original site where I got the image data. I don't know what act 16 is (or act 17, if there is one in your list).

@herochen7372
Author

OK, thank you!

@Zhuyue0324
Collaborator

Zhuyue0324 commented Jan 24, 2024

In the worst case, you can just ignore act 16 and try matching the annotations of the 15 actions we provide. (Though I am not sure your acts 1–15 are exactly the actions we provide; you need to check that yourself.)

@herochen7372
Author

OK, thank you. I will try matching 02–16 to the 15 actions you provide.

@herochen7372
Author

I compared the annotations of OSX's Human3.6M and your H3WB, and found that they have different frames. Your video frame annotations are not continuous; e.g., there is no S1/Images/Directions.60457274/frame_0001.jpg.

@Zhuyue0324
Collaborator

No, they are not continuous. First, I compile only one out of every five frames from each video. Then, during the quality-assessment step, we keep only the 100k sample triplets with the smallest errors according to our 4-corner error metric, out of all the samples compiled.
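The one-in-five subsampling described above can be sketched as a simple frame-index filter (a minimal illustration assuming 0-based frame indices; this is not the actual data_preparation.py logic):

```python
def subsampled_indices(n_frames, step=5):
    # Keep one frame out of every `step` frames, starting from frame 0.
    return list(range(0, n_frames, step))

# A 12-frame clip keeps frames 0, 5, and 10; the remaining frames are
# dropped before the quality-assessment filtering described above.
```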

@herochen7372
Author

OK, thank you!

@Z-mingyu

> No, they are not continuous. First, I compile only one out of every five frames from each video. Then, during the quality-assessment step, we keep only the 100k sample triplets with the smallest errors according to our 4-corner error metric, out of all the samples compiled.

Hello, and thanks for your dataset. How can I get the images you used in it?

@Zhuyue0324
Collaborator

I first downloaded the video data from the original site: http://vision.imar.ro/human3.6m/description.php
Then I used the code in data_preparation.py to convert the mp4 files into images.
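As a rough illustration of that conversion step (a hypothetical sketch; the actual data_preparation.py may use different tooling or flags), an ffmpeg invocation that dumps numbered jpg frames could be built like this:

```python
def ffmpeg_frames_cmd(video_path, out_dir):
    # -qscale:v 2 requests high-quality jpgs; frame_%04d.jpg matches the
    # zero-padded frame_0001.jpg naming seen in the dataset paths.
    return ["ffmpeg", "-i", video_path, "-qscale:v", "2",
            f"{out_dir}/frame_%04d.jpg"]
```

The returned list could be passed to subprocess.run; building the argument list separately keeps the command easy to inspect before running it.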

@Z-mingyu

> I first downloaded the video data from the original site: http://vision.imar.ro/human3.6m/description.php Then I used the code in data_preparation.py to convert the mp4 files into images.

I found the frame_id in the dataset, so I can get the images you selected.
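For anyone else matching annotations to images: given a frame_id, the path layout seen earlier in this thread (e.g. S1/Images/Directions.60457274/frame_0001.jpg) suggests a mapping like the following (a hypothetical helper; the subject, action, and camera values shown are illustrative):

```python
def frame_path(subject, action, camera, frame_id):
    # Zero-pad frame_id to four digits, as in frame_0001.jpg.
    return f"{subject}/Images/{action}.{camera}/frame_{frame_id:04d}.jpg"
```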
