Add simplified model manager install API to InvocationContext #6132

Open
lstein wants to merge 39 commits into main from lstein/feat/simple-mm2-api

Conversation

@lstein (Collaborator) commented Apr 4, 2024

Summary

This adds two model manager-related methods to the InvocationContext uniform API. They are accessible via context.models.*:

  1. load_and_cache_model(source: Path|str|AnyHttpUrl, loader: Optional[Callable[[Path], Dict[str, Tensor]]] = None) -> LoadedModel

Load the model located at the indicated path, URL or repo_id.

This will download the model from the indicated location, cache it locally, and load it into the model manager RAM cache if needed. If the optional loader argument is provided, it will be invoked to load the model into memory; otherwise the method calls safetensors.torch.load_file() or torch.load() (with a pickle scan), as appropriate to the file suffix. Diffusers models are supported via HuggingFace repo_ids.

Be aware that the LoadedModel object will have a config attribute of None.

Here is an example of usage:

def invoke(self, context: InvocationContext) -> ImageOutput:
    model_url = 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth'
    # Downloads the file if needed, caches it locally, and loads it into the RAM cache.
    loadnet = context.models.load_and_cache_model(model_url)
    with loadnet as loadnet_model:
        upscaler = RealESRGAN(loadnet=loadnet_model, ...)  # remaining RealESRGAN arguments elided
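
If the default checkpoint loading is not suitable, the optional loader callable can be supplied instead. Below is a minimal sketch that reuses context and model_url from the example above; the my_loader function and the 'params_ema' key are illustrative assumptions, not part of this PR's API. The loader receives the local Path of the downloaded file, and its return value is what gets cached:

from pathlib import Path
from typing import Dict

import torch
from torch import Tensor

def my_loader(model_path: Path) -> Dict[str, Tensor]:
    # Load on CPU; some ESRGAN-style checkpoints nest their weights under 'params_ema' (assumed here).
    checkpoint = torch.load(model_path, map_location='cpu')
    return checkpoint.get('params_ema', checkpoint)

loadnet = context.models.load_and_cache_model(model_url, loader=my_loader)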

  2. download_and_cache_model(source: str | AnyHttpUrl, access_token: Optional[str] = None, timeout: Optional[int] = 0) -> Path

Download the model file located at source to the models cache and return its Path.

This will check models/.download_cache for the desired model file and download it from the indicated source if not already present. The local Path to the downloaded file is then returned.
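
A minimal usage sketch (it reuses the Real-ESRGAN URL from the example above; the torch.load call is just one illustrative way a node might consume the returned Path):

import torch

def invoke(self, context: InvocationContext) -> ImageOutput:
    model_url = 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth'
    # Downloads the file only if it is not already present; returns the local Path under models/.download_cache.
    model_path = context.models.download_and_cache_model(model_url)
    state_dict = torch.load(model_path, map_location='cpu')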

Other Changes

This PR performs a migration, in which it renames models/.cache to models/.convert_cache, and migrates previously-downloaded ESRGAN, openpose, DepthAnything and Lama inpaint models from the models/core directory into models/.download_cache.

There are a number of legacy model files in models/core, such as GFPGAN, which are no longer used. This PR deletes them and tidies up the models/core directory.

Related Issues / Discussions

I have systematically replaced all the calls to download_with_progress_bar(). This function is no longer used elsewhere and has been removed.

QA Instructions

I have added unit tests for the new calls. You may test that the load_and_cache_model() call is working by running the upscaler within the web app. On the first try, you will see the model file being downloaded into the models/.download_cache directory. On subsequent tries, the model will either load from RAM (if it hasn't been displaced) or will be loaded from the filesystem.

Merge Plan

Squash merge when approved.

Checklist

  • The PR has a short but descriptive title, suitable for a changelog
  • Tests added / updated (if applicable)
  • Documentation added / updated (if applicable)

@github-actions bot added the python and services labels on Apr 4, 2024
@lstein force-pushed the lstein/feat/simple-mm2-api branch from 9cc1f20 to af1b57a on April 12, 2024
@github-actions bot added the invocations, backend, and python-tests labels on Apr 12, 2024
@lstein marked this pull request as ready for review on April 12, 2024
@lstein (Collaborator, Author) commented Apr 14, 2024

I have added a migration script that tidies up the models/core directory and removes unused models such as GFPGAN. In addition, I have renamed models/.cache to models/.convert_cache to distinguish it from the directory into which just-in-time models are downloaded, models/.download_cache. While the size of models/.convert_cache is capped so that less-used models are cleared periodically, files in models/.download_cache are not removed unless the user deletes them manually.

@lstein force-pushed the lstein/feat/simple-mm2-api branch from 537a626 to 3ddd7ce on April 14, 2024
@lstein force-pushed the lstein/feat/simple-mm2-api branch from 3ddd7ce to fa6efac on April 14, 2024
@psychedelicious (Collaborator) left a comment

I'm not sure what I was expecting the implementation to be, but it definitely wasn't as simple as this - great work.

I've requested a few changes and there's one discussion item that I'd like to marinate on before we change the public invocation API.

Review threads (outdated, resolved): invokeai/app/invocations/upscale.py, invokeai/app/services/shared/invocation_context.py
@lstein (Collaborator, Author) commented May 3, 2024

@psychedelicious I think all the issues are now addressed. Ok to approve and merge?

@psychedelicious (Collaborator) commented

Sorry, no we need to do the same pattern for that last processor, DWOpenPose - it uses ONNX models and I'm not familiar with how they work. I can take care of that but won't be til next week.

@lstein (Collaborator, Author) commented May 3, 2024

> Sorry, no we need to do the same pattern for that last processor, DWOpenPose - it uses ONNX models and I'm not familiar with how they work. I can take care of that but won't be til next week.

I'll take a look at it Friday.

@lstein (Collaborator, Author) commented May 3, 2024

I've refactored DWOpenPose using the same pattern as in the other backend image processors. I also added some of the missing typehints so there are fewer red squigglies. I noticed that there is a problem with the pip dependencies. If the onnxruntime package is installed, then even if onnxruntime-gpu is installed as well, the onnx runtime won't use the GPU (see microsoft/onnxruntime#7748). You have to remove onnxruntime and then install onnxruntime-gpu. I don't think pyproject.toml provides a way for an optional dependency to remove a default dependency. Is there a workaround?
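
(A small diagnostic sketch, not part of this PR: listing the available execution providers shows whether the GPU build is actually active.)

import onnxruntime as ort

# If 'CUDAExecutionProvider' is missing from this list, the CPU-only onnxruntime
# package is shadowing onnxruntime-gpu and inference will fall back to the CPU.
print(ort.get_available_providers())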

@psychedelicious (Collaborator) commented

> I noticed that there is a problem with the pip dependencies. If the onnxruntime package is installed, then even if onnxruntime-gpu is installed as well, the onnx runtime won't use the GPU (see microsoft/onnxruntime#7748). You have to remove onnxruntime and then install onnxruntime-gpu. I don't think pyproject.toml provides a way for an optional dependency to remove a default dependency. Is there a workaround?

I think we'd need to just update the installer script with special handling to uninstall those packages if they are already installed. It's probably time to revise our optional dependency lists. I think "cuda" and "cpu" make sense to be the only two user-facing options. "xformers" is extraneous now (torch's native SDP implementation is just as fast), so it could be removed.

@psychedelicious (Collaborator) commented

Thanks for cleaning up the pose detector. It would be nice to use the model context so we get memory mgmt, but that is a future task.

I had some feedback from earlier about the public API that I think was lost:

  1. Token: When would a node reasonably provide an API token? We support regex-matched tokens in the config file. I don't think this should be in the invocation API.

  2. Timeout: Similarly, when could a node possibly be able to make a good determination of the timeout for a download? It doesn't know the user's internet connection speed. It's a user setting - could be globally set in the config file and apply to all downloads.

If both of those args are removed, then load_ckpt_from_path and load_ckpt_from_url look very similar. I think this is maybe what @RyanJDick was suggesting with a single load_custom_model method.

Also, will these methods work for diffusers models? If so, "ckpt" probably doesn't need to be in the name.

@lstein (Collaborator, Author) commented May 5, 2024

> Thanks for cleaning up the pose detector. It would be nice to use the model context so we get memory mgmt, but that is a future task.

The onnxruntime model loading architecture seems to be very different from what the model manager expects. In particular, the onnxruntime.InferenceSession() constructor doesn't seem to provide any way to accept a model that has already been read into RAM or VRAM. The closest I can figure is that you can pass the constructor a bytes object containing a serialized version of the model in memory. This will require some architectural changes in the model manager that should be its own PR.
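
(For illustration only, a sketch of that approach; the file path below is a placeholder and this is not code from this PR:)

import onnxruntime as ort
from pathlib import Path

# Read the serialized ONNX model into RAM, then hand the raw bytes to the session constructor.
model_bytes = Path('/path/to/pose_model.onnx').read_bytes()  # placeholder path
session = ort.InferenceSession(model_bytes, providers=['CPUExecutionProvider'])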

> 1. Token: When would a node reasonably provide an API token? We support regex-matched tokens in the config file. I don't think this should be in the invocation API.

Right now the regex token handling is done in a part of the install manager that is not called by the simplified API. I'll move this code into the core download() routine so that tokens are picked up whenever a URL is requested.

> 2. Timeout: Similarly, when could a node possibly be able to make a good determination of the timeout for a download? It doesn't know the user's internet connection speed. It's a user setting - could be globally set in the config file and apply to all downloads.

I think you're saying this should be a global config option and I agree with that. Can we get the config migration code in so that I have a clean way of updating the config?

> Also, will these methods work for diffusers models? If so, "ckpt" probably doesn't need to be in the name.

Not currently. It only works with checkpoints. I'd planned to add diffusers support later, but I guess I should do that now. Converting to draft.

@lstein marked this pull request as a draft on May 5, 2024
@psychedelicious (Collaborator) commented May 5, 2024

Probably doesn't make sense to spend time on the onnx loading. This is the only model that uses it.

> Right now the regex token handling is done in a part of the install manager that is not called by the simplified API. I'll move this code into the core download() routine so that tokens are picked up whenever a URL is requested.

Sounds good.

> I think you're saying this should be a global config option and I agree with that. Can we get the config migration code in so that I have a clean way of updating the config?

I don't think any migration is necessary - just add a sensible default value, maybe it should be 0 (no timeout). I'll check back in on the config migration PR this week.

> Not currently. It only works with checkpoints. I'd planned to add diffusers support later, but I guess I should do that now. Converting to draft.

Ok, thanks.

@lstein (Collaborator, Author) commented May 8, 2024

> The onnxruntime model loading architecture seems to be very different from what the model manager expects. In particular, the onnxruntime.InferenceSession() constructor doesn't seem to provide any way to accept a model that has already been read into RAM or VRAM. The closest I can figure is that you can pass the constructor a bytes object containing a serialized version of the model in memory. This will require some architectural changes in the model manager that should be its own PR.

I've played with this a bit. It is easy to load the openpose onnx sessions into the RAM cache and they will run happily under the existing MM cache system. However, Onnx sessions do their own internal VRAM/CUDA management, and so I found that for the duration of the time that the session object is in RAM, it holds on to a substantial chunk of VRAM (1.7GB). The openpose session is only used during conversion of an image into a pose model, and I think it's better to have slow disk-based loading of the openpose session than to silently consume a chunk of VRAM that interferes with later generation.

@github-actions bot added the api label on May 9, 2024
@github-actions bot added the docs label on May 18, 2024
@lstein marked this pull request as ready for review on May 18, 2024
@lstein (Collaborator, Author) commented May 18, 2024

@psychedelicious This is ready for your review now. There are now just two calls, load_and_cache_model() and download_and_cache_model(), which return a LoadedModel and a locally cached Path, respectively. In addition, the model source can now be a URL, a local Path, or a repo_id. Support for the latter involved refactoring the way multifile downloads work.
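
(As an illustration of the accepted source forms, a hedged sketch; the local path and repo_id below are placeholders, and only the Real-ESRGAN URL comes from the PR description:)

from pathlib import Path

# URL source
loaded = context.models.load_and_cache_model('https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth')

# Local Path source (placeholder path)
loaded = context.models.load_and_cache_model(Path('/opt/models/my_model.safetensors'))

# HuggingFace repo_id source for a diffusers model (placeholder repo_id)
loaded = context.models.load_and_cache_model('some-org/some-diffusers-model')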

Successfully merging this pull request may close these issues:

  • [bug]: CUDA out of memory error when upscaling x4 (or x2 twice)