Video flickers severely on my own data #41
Hi there, thanks for your detailed description. It's a great ComfyUI wrapper for Champ. But for the …
Thank you very much for your response, but I couldn't find the code related to SMPL & Rendering in this project. However, I found this link in another issue: the Champ GitHub repository. Following that project's instructions, I completed the Fit SMPL and Transfer SMPL steps, but when running the Rendering step with Blender, I encountered an error.
Additionally, I found a potential error in `transfer_smpl.py` at line 60, which reads: `smooth_smpl_path = os.path.join(driving_folder, "smpl_results", "smpls_group.npz")`. The actual path should be: `smooth_smpl_path = os.path.join(driving_folder, "smpls_group.npz")`. After the Fit SMPL step, the …
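For reference, here is the one-line fix described above as a runnable sketch (the `driving_folder` value is a hypothetical example directory, not a path from the project):

```python
import os

driving_folder = "driving/example"  # hypothetical driving-video output directory

# transfer_smpl.py line 60 originally looked for a "smpl_results" subfolder:
# smooth_smpl_path = os.path.join(driving_folder, "smpl_results", "smpls_group.npz")

# The Fit SMPL step writes smpls_group.npz directly into driving_folder,
# so the join should skip the intermediate subfolder:
smooth_smpl_path = os.path.join(driving_folder, "smpls_group.npz")
print(smooth_smpl_path)  # on Linux: driving/example/smpls_group.npz
```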
Thank you for your prompt response. The third step, "Rendering", has now been executed successfully: I downgraded Blender to version 3.5 and installed additional dependency packages within Blender, and it runs. However, it feels a bit slow, and GPU utilization is not high either. Are there any other acceleration options? Is multi-GPU support available?
Please tell me how to change Blender to 3.5 on Linux. Thanks!
@lxwkobe24
Please note that the command provided is an example, and the channel name …
I tried it, but ran into this error.
This appears to be a permission error; ensure that you have sufficient privileges to access the …
It's very strange!
@WyHy Yeah, this Blender script is a little slow, even with a GPU. I wonder if that's caused by a slow I/O stream; we ran into the same problem when rendering the whole dataset. What's the resolution of your video? I suggest downscaling it to around 500 pixels.
I am using a landscape video with a resolution of 2084x1080. Once this run finishes, I will try reducing the resolution to see if it improves the processing speed. I am very much looking forward to seeing whether using SMPL and Blender effectively fixes the flickering in the video. 😊
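The downscale target can be computed with a quick plain-Python sketch (the 500 px short-side target follows the suggestion above; rounding to even dimensions is my own assumption, since most video encoders require it):

```python
def downscale_dims(width, height, target_short_side=500):
    """Compute aspect-preserving dimensions with the short side near target."""
    short = min(width, height)
    scale = target_short_side / short
    # Round each dimension to the nearest even number for encoder compatibility
    return (round(width * scale / 2) * 2, round(height * scale / 2) * 2)

print(downscale_dims(2084, 1080))  # (964, 500) for the 2084x1080 video above
```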
I apologize, but I am unable to assist with this issue as I am also new to Blender. I suggest you visit the official forums to seek a solution. |
@Leoooo333
animation.23.06.36.mp4
For better results, I recommend using a half-body portrait with a frontal face as the reference image. We are going to solve the face problem (like the one shown in your full-body example) in a following version. Keep an eye on Champ 🤸♂️
Could you please tell me how to install pandas in Blender?
Hello, I'm still confused about what causes the flickers in the background. Do you know the reason and how to solve it?
My solution was to write an installation script called `blender_python.py` with the following content (swap `"tqdm"` for `"pandas"` or any other package you need):

```python
import sys
import subprocess

# Get the path to Blender's bundled Python executable
blender_python_executable = sys.executable

# Construct the command to install tqdm using that interpreter's pip
install_command = [blender_python_executable, "-m", "pip", "install", "tqdm"]

# Run pip to install the package into Blender's Python environment
subprocess.check_call(install_command)
```

Then use the Blender command line to execute the installation script:

```
blender --background --python blender_python.py
```
You need to install tqdm in Blender, which operates in a separate Python environment.
see #41 (comment) |
Love You! WyHy |
I'm confused about the flickers in the background. It's reasonable that bugs in the SMPL module cause jitter in the foreground (the human), but are the flickers in the background also caused by this?
Hi, thank you very much for your great work. It's really awesome!
I successfully ran the entire project on my own dataset, but the generated results seem to flicker much more severely compared to the example data. Is there a way to stabilize the results like the examples do?
The tools I used are as follows:
0x1. To ensure the stability of the running results, I deliberately resized the ref_image and motion_data to the same dimensions. The ref_image was regenerated based on the pose from the motion data.
0x2. I obtained the complete motion data using the project at https://github.com/kijai/ComfyUI-champWrapper, and then imported the data into Champ to run the process:
1.1 DSINE Normal Map to obtain normal data
1.2 DWpose Estimator to obtain dwpose data
1.3 Depth Anything to obtain depth data
1.4 DensePose Estimator to obtain semantic_map data
0x3. The motion data and ref image I used are attached.
Thank you very much.
data.zip