Replies: 1 comment · 4 replies
-
Hi @Servo97, have you considered doing an
-
Hi, very sorry for the late reply.
I believe there are posts doing similar things, e.g. this one, but IIUC most of them use the rendering pipeline to generate a video of the scene. I want to use the rendered images, preprocess them, and use them as observations in my RL pipeline. Do you have any suggestions for this? Thanks!
-
Hi @Servo97, see #1202
-
Ahhh I see! Thanks for the prompt response! I will check out ray casting!
-
You can always save all mjx_data states across all time steps of the simulation in an array and return that from your jitted function. Then, in a second loop at the end of your script, "replay" these states: set the data for each time step, call renderer.update_scene(mj_data) followed by frames.append(renderer.render()), and finally write the result with media.write_video('simulation_mjx.mp4', frames, fps=framerate).
This discussion was converted from issue #1420 on March 11, 2024 17:33.
-
Hi,
I'm a PhD student trying to use MuJoCo for a multi-agent environment with a camera.
I can easily get a camera image using the traditional MuJoCo pipeline (src).
I've been following the tutorial here, but I'm unsure how to get similar "rgb_pixels" in the MJX pipeline.
Can someone please help me with this?