animation API errors #263
Open · dcsan opened this issue Nov 20, 2023 · 7 comments

@dcsan commented Nov 20, 2023

I'm using the animation API and getting

TypeError: 'NoneType' object is not subscriptable

here:
https://github.com/Stability-AI/stability-sdk/blob/main/src/stability_sdk/animation.py#L217-L219

def cv2_to_pil(cv2_img: np.ndarray) -> Image.Image:
    """Convert a cv2 BGR ndarray to a PIL Image"""
    return Image.fromarray(cv2_img[:, :, ::-1])

I guess this is just a symptom that the animation API isn't returning any frames.

I've tried with various model versions:

    # "model": "stable-diffusion-v1-5",
    # "model": "stable-diffusion-512-v2-1",  # have to force this
    "model": "stable-diffusion-xl-1024-v0-9",  # have to force this
    # "model": "stable-diffusion-xl-1024-v1-0",  # have to force this

I'm wondering whether the API is currently having downtime, or how else I could debug this?

@pharmapsychotic (Member)

Could you post the call stack? It may be from being unable to read the next frame from the input video file. Can you try with other input videos? Init video is working for me on stable-diffusion-512-v2-1 (though I initially had a problem with all-black frames with a previous frame strength of 1.0).
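
For reference, a failed frame read is enough to produce exactly this error. A minimal sketch of the standard OpenCV read pattern (not SDK code; the path is a placeholder):

import cv2
from PIL import Image

cap = cv2.VideoCapture("path/to/init_video.mp4")  # placeholder path
ok, frame = cap.read()  # read() returns (False, None) when no frame can be decoded
if not ok or frame is None:
    raise RuntimeError("could not read the next frame from the video init file")
img = Image.fromarray(frame[:, :, ::-1])  # BGR -> RGB; slicing None raises the TypeError above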

@dcsan (Author) commented Nov 22, 2023

These are the current results. It's failing on the first frame within the tqdm loop, as no images are being returned, I guess.

Same error with this model:
model: 'stable-diffusion-512-v2-1'

Is there maybe some incompatibility between the new model and certain settings? I recall some settings crashing certain models before.

venv/bin/python3 animai_process.py runone assets/templates/base-48.json
stamp: 1122-051344
jobid j-1122-051344

-- [pid 48346] [svc/api] job metadata {
    "userId": "user01",
    "requestId": "request01",
    "jobid": "j-1122-051344"
}
using model: stable-diffusion-xl-1024-v0-9

-- [pid 48346] [replacer] ---- detection ----

-- [pid 48346] [replacer] prompts before: {
    "0": " portrait of a young man for the cover of a graphic novel in a floating city a merchant space station by Jean Giraud Moebius, the Incal, Syd Mead"
}


-- [pid 48346] [replacer] orig_image "./assets/people/the-rock-9.jpg"


-- [pid 48346] [replacer] prompts after: {
    "0": " portrait of a young man for the cover of a graphic novel in a floating city a merchant space station by Jean Giraud Moebius, the Incal, Syd Mead"
}
completed downloading files
image: ./outputs/j-1122-051344/init_image.png
video: ./outputs/j-1122-051344/init_video.png
using video_init_path: ./outputs/j-1122-051344/init_video.png
using init_image: ./outputs/j-1122-051344/init_image.png
safe_image: ./outputs/j-1122-051344/init_image.png

animation_settings:
{
  width: 512
  height: 512
  cfg_scale: 12
  sampler: 'DDIM'
  model: 'stable-diffusion-xl-1024-v0-9'
  seed: -1
  clip_guidance: 'None'
  init_image: './outputs/j-1122-051344/init_image.png'
  init_sizing: 'cover'
  mask_path: ''
  mask_invert: false
  animation_mode: 'Video Input'
  max_frames: 50
  border: 'replicate'
  noise_add_curve: '0:(0.02)'
  noise_scale_curve: '0:(1.04)'
  strength_curve: '0:(1), 10:(0.3), 30:(0.3), 45:(1)'
  steps_curve: '0:(20)'
  steps_strength_adj: true
  interpolate_prompts: true
  inpaint_border: false
  locked_seed: false
  angle: '0:(0)'
  zoom: '0:(1)'
  translation_x: '0:(0)'
  translation_y: '0:(0)'
  translation_z: '0:(-0.2)'
  rotation_x: '0:(0)'
  rotation_y: '0:(0)'
  rotation_z: '0:(0)'
  diffusion_cadence_curve: '0:(2)'
  cadence_interp: 'rife'
  cadence_spans: false
  color_coherence: 'LAB'
  brightness_curve: '0:(1.0)'
  contrast_curve: '0:(1.0)'
  hue_curve: '0:(0.0)'
  saturation_curve: '0:(1.0)'
  lightness_curve: '0:(0.0)'
  depth_model_weight: 0.3
  near_plane: 200
  far_plane: 10000
  fov_curve: '0:(25)'
  save_depth_maps: false
  depth_blur_curve: '0:(0.0)'
  depth_warp_curve: '0:(1.0)'
  video_init_path: './outputs/j-1122-051344/init_video.png'
  extract_nth_frame: 5
  video_mix_in_curve: '0:(1), 10:(0.3), 30:(0.3), 45:(1)'
  video_flow_warp: false
  vr_mode: false
  vr_eye_angle: 0.5
  vr_eye_dist: 5.0
  vr_projection: -0.4
  resume_timestring: ''
  override_settings_path: ''
  non_inpainting_model_for_diffusion_frames: false
  do_mask_fixup: false
  save_inpaint_masks: false
  mask_min_value: '0:(0.0)'
  fps: 20
  camera_type: 'perspective'
  image_render_method: 'mesh'
  image_render_points_per_pixel: 8
  image_render_point_radius: 0.006
  image_max_mesh_edge: 0.1
  mask_render_points_per_pixel: 4
  mask_render_point_radius: 0.0045
  mask_max_mesh_edge: 0.04
  verbose: true
  accumulate_xforms: false
  midas_weight: 0.3
  negative_prompt: 'blurry, bad hands, ugly, wrinkles'
  negative_prompt_weight: -1
}


-- [pid 48346] [render] prompts:  {
    "0": " portrait of a young man for the cover of a graphic novel in a floating city a merchant space station by Jean Giraud Moebius, the Incal, Syd Mead"
}

[pid 48346] ----- [render] start animating -----
'[pid 48346] [jobid j-1122-051344]'
[pid 48346] [jobid j-1122-051344]:   0%|                                                                                   | 0/50 [00:00<?, ?it/s]

#### ERROR [render] unknown exception: ####
TypeError("'NoneType' object is not subscriptable")
Traceback (most recent call last):
  File "/Users/dc/dev/revel/ai-anims-v2/animai/stabai/animator.py", line 164, in animate
    for frame_idx, frame in enumerate(tqdm(
  File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__
    for obj in iterable:
  File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/stability_sdk/animation.py", line 640, in render
    self.inpaint_mask = self.transform_video(frame_idx)
  File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/stability_sdk/animation.py", line 949, in transform_video
    video_next_frame = cv2_to_pil(video_next_frame)
  File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/stability_sdk/animation.py", line 219, in cv2_to_pil
    return Image.fromarray(cv2_img[:, :, ::-1])
TypeError: 'NoneType' object is not subscriptable
render error 'NoneType' object is not subscriptable
Traceback (most recent call last):
  File "/Users/dc/dev/revel/ai-anims-v2/animai_process.py", line 25, in <module>
    main()
  File "/Users/dc/dev/revel/ai-anims-v2/animai_process.py", line 18, in main
    runner.runone(opt)
  File "/Users/dc/dev/revel/ai-anims-v2/animai/runner.py", line 40, in runone
    process_job(job)
  File "/Users/dc/dev/revel/ai-anims-v2/animai/services/api.py", line 128, in process_job
    result = render.render_job(jobid, settings, prompt)
  File "/Users/dc/dev/revel/ai-anims-v2/animai/render.py", line 20, in render_job
    raise ex
  File "/Users/dc/dev/revel/ai-anims-v2/animai/render.py", line 14, in render_job
    result = animate(jobid, settings, prompt)
  File "/Users/dc/dev/revel/ai-anims-v2/animai/stabai/animator.py", line 185, in animate
    raise ex
  File "/Users/dc/dev/revel/ai-anims-v2/animai/stabai/animator.py", line 164, in animate
    for frame_idx, frame in enumerate(tqdm(
  File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__
    for obj in iterable:
  File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/stability_sdk/animation.py", line 640, in render
    self.inpaint_mask = self.transform_video(frame_idx)
  File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/stability_sdk/animation.py", line 949, in transform_video
    video_next_frame = cv2_to_pil(video_next_frame)
  File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/stability_sdk/animation.py", line 219, in cv2_to_pil
    return Image.fromarray(cv2_img[:, :, ::-1])
TypeError: 'NoneType' object is not subscriptable
error: Recipe `runone` failed on line 97 with exit code 1

@dcsan (Author) commented Nov 22, 2023

I confirmed animation is working with your default settings from python3 -m stability_sdk animate --gui, so perhaps it's something in the config or animation settings for the new models.

@dcsan (Author) commented Nov 22, 2023

OK, I think the issue is that you have removed support for using a JPG/PNG as the video init image.

This crashes:

      "animation_mode": "Video Input",
      "video_init_path": "./assets/people/the-rock-9.jpg",

If I change the video input to an MP4 file, I can get it to render.

      "video_init_path": "./assets/pongs/alex-shin.mp4",

Can you confirm whether this is expected behavior now?

This is not a big deal, just inconvenient; we'll have to convert all the inputs with ffmpeg first.

Maybe I can still use an older commit of the animation extension?

@pharmapsychotic (Member)

Ah, interesting, nice work narrowing that down. The SDK code for this didn't change, so it may have been an update to the opencv-python-headless library that broke this. If 4.7.0.72 still works, we can pin the version at that.

@dcsan (Author) commented Nov 22, 2023

I tried 4.7.0.72 and, failing that, also went back a few versions to

opencv-python==4.6.0.66
opencv-python-headless==4.6.0.66

but I'm still getting the error.

Repro:

git clone --recurse-submodules https://github.com/Stability-AI/stability-sdk

# modified the `setup.py`

        'anim': [
            'keyframed',
            'numpy',
            'opencv-python-headless==4.6.0.66',  ## <== pinned this
        ],

pip install "./stability-sdk[anim]"
# manually install base opencv at the same pinned version
pip install opencv-python==4.6.0.66

$ pip freeze | grep cv
opencv-python==4.6.0.66
opencv-python-headless==4.6.0.66

which gives:

[pid 58469] ----- [render] start animating -----
'[pid 58469] [jobid j-1122-074438]'
[pid 58469] [jobid j-1122-074438]:   0%|                                                                                   | 0/50 [00:00<?, ?it/s]

#### ERROR [render] unknown exception: ####
TypeError("'NoneType' object is not subscriptable")
Traceback (most recent call last):
  File "/Users/dc/dev/revel/ai-anims-v2/animai/stabai/animator.py", line 165, in animate
    for frame_idx, frame in enumerate(tqdm(
  File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__
    for obj in iterable:
  File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/stability_sdk/animation.py", line 640, in render
    self.inpaint_mask = self.transform_video(frame_idx)
  File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/stability_sdk/animation.py", line 949, in transform_video
    video_next_frame = cv2_to_pil(video_next_frame)
  File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/stability_sdk/animation.py", line 219, in cv2_to_pil
    return Image.fromarray(cv2_img[:, :, ::-1])
TypeError: 'NoneType' object is not subscriptable
render error 'NoneType' object is not subscriptable
Traceback (most recent call last):
  File "/Users/dc/dev/revel/ai-anims-v2/animai_process.py", line 25, in <module>
    main()
  File "/Users/dc/dev/revel/ai-anims-v2/animai_process.py", line 18, in main
    runner.runone(opt)
  File "/Users/dc/dev/revel/ai-anims-v2/animai/runner.py", line 40, in runone
    process_job(job)
  File "/Users/dc/dev/revel/ai-anims-v2/animai/services/api.py", line 128, in process_job
    result = render.render_job(jobid, settings, prompt)
  File "/Users/dc/dev/revel/ai-anims-v2/animai/render.py", line 20, in render_job
    raise ex
  File "/Users/dc/dev/revel/ai-anims-v2/animai/render.py", line 14, in render_job
    result = animate(jobid, settings, prompt)
  File "/Users/dc/dev/revel/ai-anims-v2/animai/stabai/animator.py", line 186, in animate
    raise ex
  File "/Users/dc/dev/revel/ai-anims-v2/animai/stabai/animator.py", line 165, in animate
    for frame_idx, frame in enumerate(tqdm(
  File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__
    for obj in iterable:
  File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/stability_sdk/animation.py", line 640, in render
    self.inpaint_mask = self.transform_video(frame_idx)
  File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/stability_sdk/animation.py", line 949, in transform_video
    video_next_frame = cv2_to_pil(video_next_frame)
  File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/stability_sdk/animation.py", line 219, in cv2_to_pil
    return Image.fromarray(cv2_img[:, :, ::-1])
TypeError: 'NoneType' object is not subscriptable
error: Recipe `runone` failed on line 97 with exit code 1

So maybe it's another package?
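
One way to narrow that down (an illustrative standalone check, not part of the SDK) is to read the JPG directly with cv2.VideoCapture under each pinned OpenCV version, independent of the rest of the pipeline:

import cv2

# Standalone check: can this OpenCV build read a still image through the
# video path, the way the animation code reads video init frames?
cap = cv2.VideoCapture("./assets/people/the-rock-9.jpg")
ok, frame = cap.read()
print("opened:", cap.isOpened(), "read ok:", ok,
      "frame:", None if frame is None else frame.shape)

If this prints False/None for every pinned version, the image-as-video read itself is failing rather than some other package.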

@dcsan (Author) commented Jan 9, 2024

I was able to get this working by using a full video (rather than a still image) as the background video_init_path.

Handy ffmpeg script to make a backing video from a single still image:
ffmpeg -y -loop 1 -framerate 1 -i {{imagepath}} -t 50 -c:v libx264 -r 10 {{output}}
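
For anyone scripting this, a minimal Python wrapper around the same ffmpeg invocation could look like the sketch below (the function name and defaults are illustrative, not part of the SDK):

import subprocess

def still_to_backing_video(image_path: str, output_path: str,
                           duration_s: int = 50, out_fps: int = 10) -> None:
    """Loop a single still image into an H.264 video usable as video_init_path."""
    subprocess.run([
        "ffmpeg", "-y",
        "-loop", "1", "-framerate", "1", "-i", image_path,
        "-t", str(duration_s),
        "-c:v", "libx264", "-r", str(out_fps),
        output_path,
    ], check=True)

This mirrors the recipe above: the still is looped at 1 fps for the given duration and re-timed to 10 fps on output.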
