
Add simplified model manager install API to InvocationContext #6132

Merged Jun 8, 2024 · 60 commits · diff shown is changes from 14 commits

Commits
9cc1f20
add simplified model manager install API to InvocationContext
Apr 4, 2024
af1b57a
add simplified model manager install API to InvocationContext
Apr 4, 2024
df5ebdb
add invocation_context.load_ckpt_from_url() method
Apr 12, 2024
3a26c7b
fix merge conflicts
Apr 12, 2024
41b909c
port dw_openpose, depth_anything, and lama processors to new model do…
Apr 13, 2024
3ddd7ce
change names of convert and download caches and add migration script
Apr 14, 2024
34438ce
add simplified model manager install API to InvocationContext
Apr 4, 2024
c140d3b
add invocation_context.load_ckpt_from_url() method
Apr 12, 2024
3ead827
port dw_openpose, depth_anything, and lama processors to new model do…
Apr 13, 2024
fa6efac
change names of convert and download caches and add migration script
Apr 14, 2024
f055e1e
Merge branch 'lstein/feat/simple-mm2-api' of github.com:invoke-ai/Inv…
Apr 15, 2024
f1e79d5
Merge branch 'main' into lstein/feat/simple-mm2-api
Apr 15, 2024
470a399
fix merge conflicts with main
Apr 15, 2024
34cdfc6
Merge branch 'main' into lstein/feat/simple-mm2-api
lstein Apr 17, 2024
d72f272
Address change requests in first round of PR reviews.
Apr 25, 2024
70903ef
refactor load_ckpt_from_url()
Apr 28, 2024
bb04f49
Merge branch 'main' into lstein/feat/simple-mm2-api
Apr 28, 2024
a26667d
make download and convert cache keys safe for filename length
Apr 28, 2024
7c39929
support VRAM caching of dict models that lack `to()`
Apr 28, 2024
f65c7e2
Merge branch 'main' into lstein/feat/simple-mm2-api
lstein Apr 28, 2024
57c8314
fix safe_filename() on windows
Apr 28, 2024
fcb071f
feat(backend): lift managed model loading out of lama class
psychedelicious Apr 28, 2024
1fe90c3
feat(backend): lift managed model loading out of depthanything class
psychedelicious Apr 28, 2024
49c84cd
Merge branch 'main' into lstein/feat/simple-mm2-api
lstein Apr 30, 2024
3b64e7a
Merge branch 'main' into lstein/feat/simple-mm2-api
lstein May 3, 2024
38df6f3
fix ruff error
May 3, 2024
e9a2005
refactor DWOpenPose and add type hints
May 3, 2024
8e5e9b5
Merge branch 'main' into lstein/feat/simple-mm2-api
lstein May 4, 2024
f211c95
move access token regex matching into download queue
lstein May 6, 2024
b48d4a0
bad implementation of diffusers folder download
lstein May 9, 2024
0bf14c2
add multifile_download() method to download service
lstein May 13, 2024
287c679
clean up type checking for single file and multifile download job cal…
May 13, 2024
f29c406
refactor model_install to work with refactored download queue
May 14, 2024
911a244
add tests for model install file size reporting
May 16, 2024
2dae5eb
more refactoring; HF subfolders not working
May 17, 2024
d968c6f
refactor multifile download code
May 18, 2024
8aebc29
fix test to run on 32bit cpu
May 18, 2024
e77c7e4
fix ruff error
May 18, 2024
987ee70
Merge branch 'main' into lstein/feat/simple-mm2-api
lstein May 18, 2024
34e1eb1
merge with main and resolve conflicts
May 28, 2024
cd12ca6
add migration_11; fix typo
May 28, 2024
ead1748
issue a download progress event when install download starts
May 28, 2024
2276f32
Merge branch 'main' into lstein/feat/simple-mm2-api
lstein Jun 2, 2024
132bbf3
tidy(app): remove unnecessary changes in invocation_context
psychedelicious Jun 2, 2024
e3a70e5
docs(app): simplify docstring in invocation_context
psychedelicious Jun 2, 2024
b124440
tidy(mm): move `load_model_from_url` from mm to invocation context
psychedelicious Jun 2, 2024
ccdecf2
tidy(nodes): cnet processors
psychedelicious Jun 2, 2024
521f907
tidy(nodes): infill
psychedelicious Jun 2, 2024
6cc6a45
feat(download): add type for callback_name
psychedelicious Jun 3, 2024
c58ac1e
tidy(mm): minor formatting
psychedelicious Jun 3, 2024
aa9695e
tidy(download): `_download_job` -> `_multifile_job`
psychedelicious Jun 3, 2024
9941325
tidy(mm): pass enum member instead of string
psychedelicious Jun 3, 2024
c7f22b6
tidy(mm): remove extraneous docstring
psychedelicious Jun 3, 2024
e7513f6
docs(mm): add comment in `move_model_to_device`
psychedelicious Jun 3, 2024
a9962fd
chore: ruff
psychedelicious Jun 3, 2024
f81b8bc
add support for generic loading of diffusers directories
Jun 4, 2024
9f93796
ruff fixes
Jun 4, 2024
dc13493
replace load_and_cache_model() with load_remote_model() and load_loca…
Jun 6, 2024
fde58ce
Merge remote-tracking branch 'origin/main' into lstein/feat/simple-mm…
psychedelicious Jun 7, 2024
7d19af2
Merge branch 'main' into lstein/feat/simple-mm2-api
lstein Jun 8, 2024
45 changes: 23 additions & 22 deletions invokeai/app/invocations/controlnet_image_processors.py
@@ -137,7 +137,7 @@ class ImageProcessorInvocation(BaseInvocation, WithMetadata, WithBoard):

image: ImageField = InputField(description="The image to process")

def run_processor(self, image: Image.Image) -> Image.Image:
def run_processor(self, image: Image.Image, context: InvocationContext) -> Image.Image:
# superclass just passes through image without processing
return image

@@ -148,7 +148,7 @@ def load_image(self, context: InvocationContext) -> Image.Image:
def invoke(self, context: InvocationContext) -> ImageOutput:
raw_image = self.load_image(context)
# image type should be PIL.PngImagePlugin.PngImageFile ?
processed_image = self.run_processor(raw_image)
processed_image = self.run_processor(raw_image, context)

# currently can't see processed image in node UI without a showImage node,
# so for now setting image_type to RESULT instead of INTERMEDIATE so will get saved in gallery
@@ -189,7 +189,7 @@ def load_image(self, context: InvocationContext) -> Image.Image:
# Keep alpha channel for Canny processing to detect edges of transparent areas
return context.images.get_pil(self.image.image_name, "RGBA")

def run_processor(self, image: Image.Image) -> Image.Image:
def run_processor(self, image: Image.Image, context: InvocationContext) -> Image.Image:
processed_image = get_canny_edges(
image,
self.low_threshold,
@@ -216,7 +216,7 @@ class HedImageProcessorInvocation(ImageProcessorInvocation):
# safe: bool = InputField(default=False, description=FieldDescriptions.safe_mode)
scribble: bool = InputField(default=False, description=FieldDescriptions.scribble_mode)

def run_processor(self, image: Image.Image) -> Image.Image:
def run_processor(self, image: Image.Image, context: InvocationContext) -> Image.Image:
hed_processor = HEDProcessor()
processed_image = hed_processor.run(
image,
@@ -243,7 +243,7 @@ class LineartImageProcessorInvocation(ImageProcessorInvocation):
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
coarse: bool = InputField(default=False, description="Whether to use coarse mode")

def run_processor(self, image: Image.Image) -> Image.Image:
def run_processor(self, image: Image.Image, context: InvocationContext) -> Image.Image:
lineart_processor = LineartProcessor()
processed_image = lineart_processor.run(
image, detect_resolution=self.detect_resolution, image_resolution=self.image_resolution, coarse=self.coarse
@@ -264,7 +264,7 @@ class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)

def run_processor(self, image: Image.Image) -> Image.Image:
def run_processor(self, image: Image.Image, context: InvocationContext) -> Image.Image:
processor = LineartAnimeProcessor()
processed_image = processor.run(
image,
@@ -291,7 +291,8 @@ class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
# depth_and_normal not supported in controlnet_aux v0.0.3
# depth_and_normal: bool = InputField(default=False, description="whether to use depth and normal mode")

def run_processor(self, image):
def run_processor(self, image: Image.Image, context: InvocationContext) -> Image.Image:
# TODO: replace from_pretrained() calls with context.models.download_and_cache() (or similar)
midas_processor = MidasDetector.from_pretrained("lllyasviel/Annotators")
processed_image = midas_processor(
image,
@@ -318,9 +319,9 @@ class NormalbaeImageProcessorInvocation(ImageProcessorInvocation):
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)

def run_processor(self, image):
def run_processor(self, image: Image.Image, context: InvocationContext) -> Image.Image:
normalbae_processor = NormalBaeDetector.from_pretrained("lllyasviel/Annotators")
processed_image = normalbae_processor(
processed_image: Image.Image = normalbae_processor(
image, detect_resolution=self.detect_resolution, image_resolution=self.image_resolution
)
return processed_image
@@ -337,7 +338,7 @@ class MlsdImageProcessorInvocation(ImageProcessorInvocation):
thr_v: float = InputField(default=0.1, ge=0, description="MLSD parameter `thr_v`")
thr_d: float = InputField(default=0.1, ge=0, description="MLSD parameter `thr_d`")

def run_processor(self, image):
def run_processor(self, image: Image.Image, context: InvocationContext) -> Image.Image:
mlsd_processor = MLSDdetector.from_pretrained("lllyasviel/Annotators")
processed_image = mlsd_processor(
image,
@@ -360,7 +361,7 @@ class PidiImageProcessorInvocation(ImageProcessorInvocation):
safe: bool = InputField(default=False, description=FieldDescriptions.safe_mode)
scribble: bool = InputField(default=False, description=FieldDescriptions.scribble_mode)

def run_processor(self, image):
def run_processor(self, image: Image.Image, context: InvocationContext) -> Image.Image:
pidi_processor = PidiNetDetector.from_pretrained("lllyasviel/Annotators")
processed_image = pidi_processor(
image,
@@ -388,7 +389,7 @@ class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):
w: int = InputField(default=512, ge=0, description="Content shuffle `w` parameter")
f: int = InputField(default=256, ge=0, description="Content shuffle `f` parameter")

def run_processor(self, image):
def run_processor(self, image: Image.Image, context: InvocationContext) -> Image.Image:
content_shuffle_processor = ContentShuffleDetector()
processed_image = content_shuffle_processor(
image,
@@ -412,7 +413,7 @@ def run_processor(self, image):
class ZoeDepthImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Zoe depth processing to image"""

def run_processor(self, image):
def run_processor(self, image: Image.Image, context: InvocationContext) -> Image.Image:
zoe_depth_processor = ZoeDetector.from_pretrained("lllyasviel/Annotators")
processed_image = zoe_depth_processor(image)
return processed_image
@@ -433,7 +434,7 @@ class MediapipeFaceProcessorInvocation(ImageProcessorInvocation):
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)

def run_processor(self, image):
def run_processor(self, image: Image.Image, context: InvocationContext) -> Image.Image:
mediapipe_face_processor = MediapipeFaceDetector()
processed_image = mediapipe_face_processor(
image,
@@ -461,7 +462,7 @@ class LeresImageProcessorInvocation(ImageProcessorInvocation):
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)

def run_processor(self, image):
def run_processor(self, image: Image.Image, context: InvocationContext) -> Image.Image:
leres_processor = LeresDetector.from_pretrained("lllyasviel/Annotators")
processed_image = leres_processor(
image,
@@ -503,7 +504,7 @@ def tile_resample(
np_img = cv2.resize(np_img, (W, H), interpolation=cv2.INTER_AREA)
return np_img

def run_processor(self, img):
def run_processor(self, img: Image.Image, context: InvocationContext) -> Image.Image:
np_img = np.array(img, dtype=np.uint8)
processed_np_image = self.tile_resample(
np_img,
@@ -527,7 +528,7 @@ class SegmentAnythingProcessorInvocation(ImageProcessorInvocation):
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)

def run_processor(self, image):
def run_processor(self, image: Image.Image, context: InvocationContext) -> Image.Image:
# segment_anything_processor = SamDetector.from_pretrained("ybelkada/segment-anything", subfolder="checkpoints")
segment_anything_processor = SamDetectorReproducibleColors.from_pretrained(
"ybelkada/segment-anything", subfolder="checkpoints"
@@ -573,7 +574,7 @@ class ColorMapImageProcessorInvocation(ImageProcessorInvocation):

color_map_tile_size: int = InputField(default=64, ge=0, description=FieldDescriptions.tile_size)

def run_processor(self, image: Image.Image):
def run_processor(self, image: Image.Image, context: InvocationContext) -> Image.Image:
np_image = np.array(image, dtype=np.uint8)
height, width = np_image.shape[:2]

@@ -608,8 +609,8 @@ class DepthAnythingImageProcessorInvocation(ImageProcessorInvocation):
)
resolution: int = InputField(default=512, ge=64, multiple_of=64, description=FieldDescriptions.image_res)

def run_processor(self, image: Image.Image):
depth_anything_detector = DepthAnythingDetector()
def run_processor(self, image: Image.Image, context: InvocationContext) -> Image.Image:
depth_anything_detector = DepthAnythingDetector(context)
depth_anything_detector.load_model(model_size=self.model_size)

processed_image = depth_anything_detector(image=image, resolution=self.resolution)
@@ -631,8 +632,8 @@ class DWOpenposeImageProcessorInvocation(ImageProcessorInvocation):
draw_hands: bool = InputField(default=False)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)

def run_processor(self, image: Image.Image):
dw_openpose = DWOpenposeDetector()
def run_processor(self, image: Image.Image, context: InvocationContext) -> Image.Image:
dw_openpose = DWOpenposeDetector(context)
processed_image = dw_openpose(
image,
draw_face=self.draw_face,
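The recurring change in this file is mechanical: every `run_processor()` override gains a `context: InvocationContext` parameter, and the base class `invoke()` forwards its own context. A minimal sketch of the pattern, with plain stub types standing in for `InvocationContext` and PIL images (the stub names are assumptions, not InvokeAI classes):

```python
from typing import Any


class ImageProcessorSketch:
    """Sketch of the base class after this PR: invoke() threads the
    invocation context down into run_processor()."""

    def run_processor(self, image: Any, context: Any) -> Any:
        # superclass just passes through image without processing
        return image

    def invoke(self, context: Any, raw_image: Any) -> Any:
        # before this PR: self.run_processor(raw_image)
        return self.run_processor(raw_image, context)


class ContextAwareProcessor(ImageProcessorSketch):
    """Hypothetical subclass: the context now gives it access to services
    (logger, model manager) that were previously unreachable here."""

    def run_processor(self, image: Any, context: Any) -> Any:
        return (image, context)
```

The payoff is visible in the `DepthAnythingDetector(context)` and `DWOpenposeDetector(context)` hunks above: detectors receive the context and can load their models through the model manager instead of managing downloads themselves.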
18 changes: 9 additions & 9 deletions invokeai/app/invocations/infill.py
@@ -38,7 +38,7 @@ class InfillImageProcessorInvocation(BaseInvocation, WithMetadata, WithBoard):
image: ImageField = InputField(description="The image to process")

@abstractmethod
def infill(self, image: Image.Image) -> Image.Image:
def infill(self, image: Image.Image, context: InvocationContext) -> Image.Image:
"""Infill the image with the specified method"""
pass

@@ -57,7 +57,7 @@ def invoke(self, context: InvocationContext) -> ImageOutput:
return ImageOutput.build(context.images.get_dto(self.image.image_name))

# Perform Infill action
infilled_image = self.infill(input_image)
infilled_image = self.infill(input_image, context)

# Create ImageDTO for Infilled Image
infilled_image_dto = context.images.save(image=infilled_image)
@@ -75,7 +75,7 @@ class InfillColorInvocation(InfillImageProcessorInvocation):
description="The color to use to infill",
)

def infill(self, image: Image.Image):
def infill(self, image: Image.Image, context: InvocationContext):
solid_bg = Image.new("RGBA", image.size, self.color.tuple())
infilled = Image.alpha_composite(solid_bg, image.convert("RGBA"))
infilled.paste(image, (0, 0), image.split()[-1])
@@ -94,7 +94,7 @@ class InfillTileInvocation(InfillImageProcessorInvocation):
description="The seed to use for tile generation (omit for random)",
)

def infill(self, image: Image.Image):
def infill(self, image: Image.Image, context: InvocationContext):
output = infill_tile(image, seed=self.seed, tile_size=self.tile_size)
return output.infilled

@@ -108,7 +108,7 @@ class InfillPatchMatchInvocation(InfillImageProcessorInvocation):
downscale: float = InputField(default=2.0, gt=0, description="Run patchmatch on downscaled image to speedup infill")
resample_mode: PIL_RESAMPLING_MODES = InputField(default="bicubic", description="The resampling mode")

def infill(self, image: Image.Image):
def infill(self, image: Image.Image, context: InvocationContext):
resample_mode = PIL_RESAMPLING_MAP[self.resample_mode]

width = int(image.width / self.downscale)
@@ -132,16 +132,16 @@ def infill(self, image: Image.Image):
class LaMaInfillInvocation(InfillImageProcessorInvocation):
"""Infills transparent areas of an image using the LaMa model"""

def infill(self, image: Image.Image):
lama = LaMA()
def infill(self, image: Image.Image, context: InvocationContext):
lama = LaMA(context)
return lama(image)


@invocation("infill_cv2", title="CV2 Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.2")
class CV2InfillInvocation(InfillImageProcessorInvocation):
"""Infills transparent areas of an image using OpenCV Inpainting"""

def infill(self, image: Image.Image):
def infill(self, image: Image.Image, context: InvocationContext):
return cv2_inpaint(image)


@@ -163,5 +163,5 @@ class MosaicInfillInvocation(InfillImageProcessorInvocation):
description="The max threshold for color",
)

def infill(self, image: Image.Image):
def infill(self, image: Image.Image, context: InvocationContext):
return infill_mosaic(image, (self.tile_width, self.tile_height), self.min_color.tuple(), self.max_color.tuple())
13 changes: 4 additions & 9 deletions invokeai/app/invocations/upscale.py
@@ -1,5 +1,4 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654) & the InvokeAI Team
from pathlib import Path
from typing import Literal

import cv2
@@ -10,7 +9,6 @@
from invokeai.app.invocations.fields import ImageField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.download_with_progress import download_with_progress_bar
from invokeai.backend.image_util.basicsr.rrdbnet_arch import RRDBNet
from invokeai.backend.image_util.realesrgan.realesrgan import RealESRGAN
from invokeai.backend.util.devices import TorchDevice
@@ -52,7 +50,6 @@ def invoke(self, context: InvocationContext) -> ImageOutput:

rrdbnet_model = None
netscale = None
esrgan_model_path = None

if self.model_name in [
"RealESRGAN_x4plus.pth",
@@ -95,16 +92,13 @@ def invoke(self, context: InvocationContext) -> ImageOutput:
context.logger.error(msg)
raise ValueError(msg)

esrgan_model_path = Path(context.config.get().models_path, f"core/upscaling/realesrgan/{self.model_name}")

# Downloads the ESRGAN model if it doesn't already exist
download_with_progress_bar(
name=self.model_name, url=ESRGAN_MODEL_URLS[self.model_name], dest_path=esrgan_model_path
loadnet = context.models.load_ckpt_from_url(
source=ESRGAN_MODEL_URLS[self.model_name],
)

upscaler = RealESRGAN(
scale=netscale,
model_path=esrgan_model_path,
loadnet=loadnet.model,
model=rrdbnet_model,
half=False,
tile=self.tile_size,
@@ -114,6 +108,7 @@ def invoke(self, context: InvocationContext) -> ImageOutput:
# TODO: This strips the alpha... is that okay?
cv2_image = cv2.cvtColor(np.array(image.convert("RGB")), cv2.COLOR_RGB2BGR)
upscaled_image = upscaler.upscale(cv2_image)

pil_image = Image.fromarray(cv2.cvtColor(upscaled_image, cv2.COLOR_BGR2RGB)).convert("RGBA")

TorchDevice.empty_cache()
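The upscale hunks show the caller-side shape of the new API: the node no longer computes a destination path or calls `download_with_progress_bar()`; it asks the context for the checkpoint and hands the loaded network straight to `RealESRGAN`. A hedged sketch of that call shape — the stub context is an assumption; only `load_ckpt_from_url(source=...)` and the returned `.model` attribute come from the diff:

```python
from types import SimpleNamespace


class ModelsStub:
    """Stand-in for context.models in this sketch."""

    def load_ckpt_from_url(self, source: str) -> SimpleNamespace:
        # real service: download on cache miss, load the checkpoint, and
        # return a wrapper whose .model is the loaded network/state dict
        return SimpleNamespace(model={"source": source})


def upscale_invoke_sketch(context: SimpleNamespace, url: str) -> dict:
    loadnet = context.models.load_ckpt_from_url(source=url)
    # the node then constructs RealESRGAN(loadnet=loadnet.model, ...)
    return loadnet.model


context = SimpleNamespace(models=ModelsStub())
```

Note the design shift: path management, download progress, and caching all move out of the node and behind the context, so the node code shrinks from a download-and-load sequence to a single call.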
9 changes: 8 additions & 1 deletion invokeai/app/services/config/config_default.py
@@ -86,6 +86,7 @@ class InvokeAIAppConfig(BaseSettings):
patchmatch: Enable patchmatch inpaint code.
models_dir: Path to the models directory.
convert_cache_dir: Path to the converted models cache directory. When loading a non-diffusers model, it will be converted and store on disk at this location.
download_cache_dir: Path to the directory that contains dynamically downloaded models.
legacy_conf_dir: Path to directory of legacy checkpoint config files.
db_dir: Path to InvokeAI databases directory.
outputs_dir: Path to directory for outputs.
@@ -146,7 +147,8 @@ class InvokeAIAppConfig(BaseSettings):

# PATHS
models_dir: Path = Field(default=Path("models"), description="Path to the models directory.")
convert_cache_dir: Path = Field(default=Path("models/.cache"), description="Path to the converted models cache directory. When loading a non-diffusers model, it will be converted and store on disk at this location.")
convert_cache_dir: Path = Field(default=Path("models/.convert_cache"), description="Path to the converted models cache directory. When loading a non-diffusers model, it will be converted and store on disk at this location.")
download_cache_dir: Path = Field(default=Path("models/.download_cache"), description="Path to the directory that contains dynamically downloaded models.")
legacy_conf_dir: Path = Field(default=Path("configs"), description="Path to directory of legacy checkpoint config files.")
db_dir: Path = Field(default=Path("databases"), description="Path to InvokeAI databases directory.")
outputs_dir: Path = Field(default=Path("outputs"), description="Path to directory for outputs.")
@@ -303,6 +305,11 @@ def convert_cache_path(self) -> Path:
"""Path to the converted cache models directory, resolved to an absolute path.."""
return self._resolve(self.convert_cache_dir)

@property
def download_cache_path(self) -> Path:
"""Path to the downloaded models directory, resolved to an absolute path.."""
return self._resolve(self.download_cache_dir)

@property
def custom_nodes_path(self) -> Path:
"""Path to the custom nodes directory, resolved to an absolute path.."""
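The config change splits one cache into two: the convert cache moves from `models/.cache` to `models/.convert_cache`, and a new `models/.download_cache` is added for dynamically downloaded models, each exposed as a resolved-path property. A minimal sketch of the path plumbing (a plain class, not the real pydantic `InvokeAIAppConfig`; the root-joining `_resolve()` is simplified):

```python
from pathlib import Path


class PathConfigSketch:
    """Sketch of the two cache directories added/renamed in this PR."""

    def __init__(self, root: Path) -> None:
        self._root = root
        self.convert_cache_dir = Path("models/.convert_cache")    # was models/.cache
        self.download_cache_dir = Path("models/.download_cache")  # new field

    def _resolve(self, value: Path) -> Path:
        # simplified: the real config resolves relative to the runtime root
        return self._root / value

    @property
    def convert_cache_path(self) -> Path:
        return self._resolve(self.convert_cache_dir)

    @property
    def download_cache_path(self) -> Path:
        return self._resolve(self.download_cache_dir)
```

Keeping the two caches in sibling dot-directories under `models/` also lets the install scanner skip both with the same `is_relative_to()` check shown in the model-install diff below.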
17 changes: 13 additions & 4 deletions invokeai/app/services/model_install/model_install_default.py
@@ -394,15 +394,19 @@ def unconditionally_delete(self, key: str) -> None:  # noqa D102
rmtree(model_path)
self.unregister(key)

@classmethod
def _download_cache_path(cls, source: Union[str, AnyHttpUrl], app_config: InvokeAIAppConfig) -> Path:
model_hash = sha256(str(source).encode("utf-8")).hexdigest()[0:32]
return app_config.download_cache_path / model_hash

def download_and_cache(
self,
source: Union[str, AnyHttpUrl],
access_token: Optional[str] = None,
timeout: int = 0,
) -> Path:
"""Download the model file located at source to the models cache and return its Path."""
model_hash = sha256(str(source).encode("utf-8")).hexdigest()[0:32]
model_path = self._app_config.convert_cache_path / model_hash
model_path = self._download_cache_path(source, self._app_config)

# We expect the cache directory to contain one and only one downloaded file.
# We don't know the file's name in advance, as it is set by the download
@@ -533,8 +537,13 @@ def on_model_found(model_path: Path) -> bool:
if resolved_path in installed_model_paths:
return True
# Skip core models entirely - these aren't registered with the model manager.
if str(resolved_path).startswith(str(self.app_config.models_path / "core")):
return False
for special_directory in [
self.app_config.models_path / "core",
self.app_config.convert_cache_dir,
self.app_config.download_cache_dir,
]:
if resolved_path.is_relative_to(special_directory):
return False
try:
model_id = self.register_path(model_path)
self._logger.info(f"Registered {model_path.name} with id {model_id}")
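The extracted `_download_cache_path()` derives the cache directory from the source URL itself, so repeated installs of the same URL resolve to the same slot, and the key stays short regardless of URL length. The hashing shown in the hunk can be reproduced directly:

```python
from hashlib import sha256
from pathlib import Path


def download_cache_path(source: str, download_cache_root: Path) -> Path:
    """Mirror of the PR's _download_cache_path(): a 32-character truncated
    sha256 of the source gives a deterministic directory name that is safe
    for filesystem name-length limits (cf. the earlier safe_filename fixes)."""
    model_hash = sha256(str(source).encode("utf-8")).hexdigest()[0:32]
    return download_cache_root / model_hash
```

Because the directory name is opaque, the service relies on the stated invariant that each cache directory contains exactly one downloaded file, whose real name is only known once the download service sets it.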