
Releases: invoke-ai/InvokeAI

InvokeAI Version 2.3.5.post1

19 May 00:00

We are pleased to announce a minor update to InvokeAI with the release of version 2.3.5.post1.

What's New in 2.3.5.post1

The major enhancement in this version is that NVIDIA users no longer need to decide between speed and reproducibility. Previously, if you activated the Xformers library, you would see improvements in speed and memory usage, but multiple images generated with the same seed and other parameters would be slightly different from each other. This is no longer the case. Relative to 2.3.5 you will see improved performance when running without Xformers, and even better performance when Xformers is activated. In both cases, images generated with the same settings will be identical.

Here are the new library versions:

| Library   | Version |
|-----------|---------|
| Torch     | 2.0.0   |
| Diffusers | 0.16.1  |
| Xformers  | 0.0.19  |
| Compel    | 1.1.5   |

Other Improvements

When running the WebUI, we have reduced the number of times that InvokeAI reaches out to HuggingFace to fetch the list of embeddable Textual Inversion models. We have also caught and fixed a problem with the updater not correctly detecting when another instance of the updater is running (thanks to @pedantic79 for this).

Installation / Upgrading

To install or upgrade to InvokeAI 2.3.5.post1 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

InvokeAI-installer-v2.3.5.post1.zip

If you are using the Xformers library, please do not use the built-in updater to update, as it will not update xformers properly. Instead, either download the installer and ask it to overwrite the existing invokeai directory (your previously-installed models and settings will not be affected), or use the following recipe to perform a command-line install:

  1. Start the launcher script and select option # 8 - Developer's console.
  2. Give the following command:
pip install invokeai[xformers] --use-pep517 --upgrade

If you do not use Xformers, the built-in update option (# 9) will work, as will the above command without the "[xformers]" part.

Developers and power users can upgrade to the current version by activating the InvokeAI environment and then running pip install --use-pep517 --upgrade InvokeAI. You may pin a particular version by adding the version number to the command, as in InvokeAI==2.3.5.post1. To upgrade to an xformers version if you are not currently using xformers, use pip install --use-pep517 --upgrade InvokeAI[xformers]. You can see which versions are available on the PyPI InvokeAI project page.

Known Bugs in 2.3.5.post1

These are known bugs in the release.

  1. Windows Defender will sometimes raise Trojan or backdoor alerts for the codeformer.pth face restoration model, as well as the CIDAS/clipseg and runwayml/stable-diffusion-v1.5 models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.

Getting Help

Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.

Development Roadmap

This is very likely to be the last release on the v2.3 source code branch. All new features are being added to the main branch. At the current time (mid-May, 2023), the main branch is only partially functional due to a complex transition to an architecture in which all operations are implemented via flexible and extensible pipelines of "nodes".

If you are looking for a stable version of InvokeAI, either use this release, install from the v2.3 source code branch, or use the pre-nodes tag from the main branch. Developers seeking to contribute to InvokeAI should use the head of the main branch. Please be sure to check out the dev-chat channel of the InvokeAI Discord, and the architecture documentation located at Contributing to come up to speed.


What's Changed

  • Update dependencies to get deterministic image generation behavior (2.3 branch) by @lstein in #3353
  • [Bugfix] Update check failing because process disappears by @pedantic79 in #3334
  • Turn the HuggingFaceConceptsLib into a singleton to prevent redundant connections by @lstein in #3337


Full Changelog: v2.3.5...v2.3.5.post1

InvokeAI 2.3.5

27 Apr 12:30

We are pleased to announce a feature update to InvokeAI with the release of version 2.3.5. This is currently a pre-release for community testing and bug reporting.

What's New in 2.3.5

This release expands support for additional LoRA and LyCORIS models, upgrades diffusers to 0.15.1, and fixes a few bugs.

LoRA and LyCORIS Support Improvement

  • A number of LoRA/LyCORIS fine-tune files (those which alter the text encoder as well as the unet model) were not having the desired effect in InvokeAI. This bug has now been fixed. Full documentation of LoRA support is available at InvokeAI LoRA Support.
  • Previously, InvokeAI did not distinguish between LoRA/LyCORIS models based on Stable Diffusion v1.5 vs those based on v2.0 and 2.1, leading to a crash when an incompatible model was loaded. This has now been fixed. In addition, the web pulldown menus for LoRA and Textual Inversion selection have been enhanced to show only those files that are compatible with the currently-selected Stable Diffusion model.
  • Support for the newer LoKR LyCORIS files has been added.

Diffusers 0.15.1

  • This version updates the diffusers module to version 0.15.1 and is no longer compatible with 0.14. This provides a number of performance improvements and bug fixes.

Performance Improvements

  • When a model is loaded for the first time, InvokeAI calculates its checksum for incorporation into the PNG metadata. This process could take up to a minute on network-mounted disks and WSL mounts. This release noticeably speeds up the process.
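The speedup comes from hashing the model file in larger chunks, which means fewer read operations over slow mounts (the 2.3.5 changelog notes an increased sha256 chunk size). A minimal sketch of chunked SHA-256 hashing; the chunk size and the helper name model_checksum are illustrative, not InvokeAI's actual code:

```python
import hashlib

def model_checksum(path, chunk_size=2**24):
    """Compute a SHA-256 checksum by reading the file in large chunks.
    Bigger chunks mean fewer reads, which matters most on network-mounted
    disks and WSL mounts. (Sketch; chunk_size is illustrative.)"""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```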

Bug Fixes

  • The "import models from directory" and "import from URL" functionality in the console-based model installer has now been fixed.

Installation / Upgrading

To install or upgrade to InvokeAI 2.3.5 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.
InvokeAI-installer-v2.3.5.zip

To update from versions 2.3.1 or higher, select the "update" option (choice 6) in the invoke.sh/invoke.bat launcher script and choose the option to update to 2.3.5. Alternatively, you may use the installer zip file to update. When it asks you to confirm the location of the invokeai directory, enter the path to the directory you are already using if it differs from the one the installer selects automatically. When the installer asks whether you want to install into an existing directory, answer "yes".

Developers and power users can upgrade to the current version by activating the InvokeAI environment and then running pip install --use-pep517 --upgrade InvokeAI. You may pin a particular version by adding the version number to the command, as in InvokeAI==2.3.5. To upgrade to an xformers version if you are not currently using xformers, use pip install --use-pep517 --upgrade InvokeAI[xformers]. You can see which versions are available on the PyPI InvokeAI project page.

Known Bugs in 2.3.5

These are known bugs in the release.

  1. Windows Defender will sometimes raise Trojan or backdoor alerts for the codeformer.pth face restoration model, as well as the CIDAS/clipseg and runwayml/stable-diffusion-v1.5 models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.
  2. If the xformers memory-efficient attention module is used, each image generated with the same prompt and settings will be slightly different. xformers 0.0.19 reduces or eliminates this problem, but hasn't been extensively tested with InvokeAI. If you wish to upgrade, you may do so by entering the InvokeAI "developer's console" and giving the command pip install xformers==0.0.19. You may see a message about InvokeAI being incompatible with this version, which you can safely ignore. Be sure to report any unexpected behavior to the Issues pages.

Getting Help

Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.

Development Roadmap

This is very likely to be the last release on the v2.3 source code branch. All new features are being added to the main branch. At the current time (late April, 2023), the main branch is only partially functional due to a complex transition to an architecture in which all operations are implemented via flexible and extensible pipelines of "nodes".

If you are looking for a stable version of InvokeAI, either use this release, install from the v2.3 source code branch, or use the pre-nodes tag from the main branch. Developers seeking to contribute to InvokeAI should use the head of the main branch. Please be sure to check out the dev-chat channel of the InvokeAI Discord, and the architecture documentation located at Contributing to come up to speed.

Change Log

  • fix the "import from directory" function in console model installer by @lstein in #3211
  • [Feature] Add support for LoKR LyCORIS format by @StAlKeR7779 in #3216
  • CODEOWNERS update - 2.3 branch by @lstein in #3230
  • Enable LoRAs to patch the text_encoder as well as the unet by @damian0815 in #3214
  • improvements to the installation and upgrade processes by @lstein in #3186
  • Revert "improvements to the installation and upgrade processes" by @lstein in #3266
  • [Enhancement] distinguish v1 from v2 LoRA models by @lstein in #3175
  • increase sha256 chunksize when calculating model hash by @lstein in #3162
  • bump version number to 2.3.5-rc1 by @lstein in #3267
  • [Bugfix] Renames in 0.15.0 diffusers by @StAlKeR7779 in #3184

New Contributors and Acknowledgements

  • @AbdBarho contributed the checksum performance improvements
  • @StAlKeR7779 (Sergey Borisov) contributed the LoKR support, did the diffusers 0.15 port, and cleaned up the code in multiple places.

Many thanks to these individuals, as well as @damian0815 for his contribution to this release.

Full Changelog: v2.3.4.post1...v2.3.5-rc1

InvokeAI Version 2.3.4.post1 - A Stable Diffusion Toolkit

07 Apr 15:00
d81584c

We are pleased to announce a feature update to InvokeAI with the release of version 2.3.4.

Update, 13 April 2023: 2.3.4.post1 is a hotfix that corrects an installer crash resulting from an update to the upstream diffusers library. If you have recently tried to install 2.3.4 and experienced a crash relating to "crossattention," this release will fix the issue.

What's New in 2.3.4

This feature release adds support for LoRA (Low-Rank Adaptation) and LyCORIS (Lora beYond Conventional) models, as well as some minor bug fixes.

LoRA and LyCORIS Support

LoRA files contain fine-tuning weights that enable particular styles, subjects or concepts to be applied to generated images. LyCORIS files are an extended variant of LoRA. InvokeAI supports the most common LoRA/LyCORIS format, which ends in the suffix .safetensors. You will find numerous LoRA and LyCORIS models for download at Civitai, and a small but growing number at Hugging Face. Full documentation of LoRA support is available at InvokeAI LoRA Support. (Pre-release note: this page will only be available after release.)

To use LoRA/LyCORIS models in InvokeAI:

  1. Download the .safetensors files of your choice and place them in /path/to/invokeai/loras. This directory was not present in earlier versions of InvokeAI but will be created for you the first time you run the command-line or web client. You can also create the directory manually.

  2. Add withLora(lora-file,weight) to your prompts. The weight is optional and will default to 1.0. A few examples, assuming that a LoRA file named loras/sushi.safetensors is present:

family sitting at dinner table eating sushi withLora(sushi,0.9)
family sitting at dinner table eating sushi withLora(sushi, 0.75)
family sitting at dinner table eating sushi withLora(sushi)

Multiple withLora() prompt fragments are allowed. The weight can be arbitrarily large, but the useful range is roughly 0.5 to 1.0. Higher weights make the LoRA's influence stronger. Negative weights are also allowed, which can lead to some interesting effects.

  3. Generate as you usually would! If you find that the image is too "crisp" try reducing the overall CFG value or reducing individual LoRA weights. As is the case with all fine-tunes, you'll get the best results when running the LoRA on top of a model similar to, or identical with, the one that was used during the LoRA's training. Don't try to load an SD 1.x-trained LoRA into an SD 2.x model, or vice versa. This will trigger a non-fatal error message and generation will not proceed.

  4. You can change the location of the loras directory by passing the --lora_directory option to invokeai.
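If you prefer to prepare the folder by hand, a minimal sketch follows; ensure_loras_dir is a hypothetical helper name, and the root path is whatever you chose at install time (InvokeAI also creates the directory automatically on first launch, as noted above):

```python
from pathlib import Path

def ensure_loras_dir(invokeai_root):
    """Create the loras/ folder under an InvokeAI root if it is missing.
    (Hypothetical helper; InvokeAI also creates it on first launch.)"""
    loras = Path(invokeai_root) / "loras"
    loras.mkdir(parents=True, exist_ok=True)
    return loras

# e.g. ensure_loras_dir("/path/to/invokeai"), then copy your downloaded
# .safetensors files into the returned directory
```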

New WebUI LoRA and Textual Inversion Buttons

This version adds two new web interface buttons for inserting LoRA and Textual Inversion triggers into the prompt as shown in the screenshot below.

(screenshot: old-sea-captain-annotated)

Clicking on one or the other of the buttons will bring up a menu of available LoRA/LyCORIS or Textual Inversion trigger terms. Select a menu item to insert the properly-formatted withLora() or <textual-inversion> prompt fragment into the positive prompt. The number in parentheses indicates the number of trigger terms currently in the prompt. You may click the button again and deselect the LoRA or trigger to remove it from the prompt, or simply edit the prompt directly.

Currently terms are inserted into the positive prompt textbox only. However, some textual inversion embeddings are designed to be used with negative prompts. To move a textual inversion trigger into the negative prompt, simply cut and paste it.

By default the Textual Inversion menu only shows locally installed models found at startup time in /path/to/invokeai/embeddings. However, InvokeAI has the ability to dynamically download and install additional Textual Inversion embeddings from the HuggingFace Concepts Library. You may choose to display the most popular of these (with five or more likes) in the Textual Inversion menu by going to Settings and turning on "Show Textual Inversions from HF Concepts Library." When this option is activated, the locally-installed TI embeddings will be shown first, followed by uninstalled terms from Hugging Face. See The Hugging Face Concepts Library and Importing Textual Inversion files for more information.

Minor features and fixes

This release changes model switching behavior so that the command-line and Web UIs save the last model used and restore it the next time they are launched. It also improves the behavior of the installer so that the pip utility is kept up to date.

Installation / Upgrading

To install or upgrade to InvokeAI 2.3.4 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

InvokeAI-installer-v2.3.4.post1.zip

To update from versions 2.3.1 or higher, select the "update" option (choice 6) in the invoke.sh/invoke.bat launcher script and choose the option to update to 2.3.4. Alternatively, you may use the installer zip file to update. When it asks you to confirm the location of the invokeai directory, enter the path to the directory you are already using if it differs from the one the installer selects automatically. When the installer asks whether you want to install into an existing directory, answer "yes".

Developers and power users can upgrade to the current version by activating the InvokeAI environment and then running pip install --use-pep517 --upgrade InvokeAI. You may pin a particular version by adding the version number to the command, as in InvokeAI==2.3.4. To upgrade to an xformers version if you are not currently using xformers, use pip install --use-pep517 --upgrade InvokeAI[xformers]. You can see which versions are available on the PyPI InvokeAI project page. (Pre-release note: this will only work after the official release.)

Known Bugs in 2.3.4

These are known bugs in the release.

  1. The Ancestral DPMSolverMultistepScheduler (k_dpmpp_2a) sampler is not yet implemented for diffusers models and will disappear from the WebUI Sampler menu when a diffusers model is selected.
  2. Windows Defender will sometimes raise Trojan or backdoor alerts for the codeformer.pth face restoration model, as well as the CIDAS/clipseg and runwayml/stable-diffusion-v1.5 models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.

Getting Help

Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.


New Contributors and Acknowledgements

Many thanks to these individuals, as well as @blessedcoolant and @damian0815 for their contributions to this release.

Full Changelog: v2.3.3...v2.3.4rc1

InvokeAI Version 2.3.3 - A Stable Diffusion Toolkit

28 Mar 04:50
fd74f51

We are pleased to announce a bugfix update to InvokeAI with the release of version 2.3.3.

What's New in 2.3.3

This is a bugfix and minor feature release.

Bugfixes

Since version 2.3.2 the following bugs have been fixed:

Bugs

  1. When using legacy checkpoints with an external VAE, the VAE file is now scanned for malware prior to loading. Previously only the main model weights file was scanned.
  2. Textual inversion will select an appropriate batchsize based on whether xformers is active, and will default to xformers enabled if the library is detected.
  3. The batch script log file names have been fixed to be compatible with Windows.
  4. Occasional corruption of the .next_prefix file (which stores the next output file name in sequence) on Windows systems is now detected and corrected.
  5. Support loading of legacy config files that have no personalization (textual inversion) section.
  6. An infinite loop when opening the developer's console from within the invoke.sh script has been corrected.
  7. Documentation fixes, including a recipe for detecting and fixing problems with the AMD GPU ROCm driver.

Enhancements

  1. It is now possible to load and run several community-contributed SD-2.0 based models, including the often-requested "Illuminati" model.
  2. The "NegativePrompts" embedding file, and others like it, can now be loaded by placing it in the InvokeAI embeddings directory.
  3. If no --model is specified at launch time, InvokeAI will remember the last model used and restore it the next time it is launched.
  4. On Linux systems, the invoke.sh launcher now uses a prettier console-based interface. To take advantage of it, install the dialog package using your package manager (e.g. sudo apt install dialog).
  5. When loading legacy models (safetensors/ckpt) you can specify a custom config file and/or a VAE by placing like-named files in the same directory as the model following this example:
my-favorite-model.ckpt
my-favorite-model.yaml
my-favorite-model.vae.pt      # or my-favorite-model.vae.safetensors
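The like-named-file convention above can be expressed as a small helper that derives the paths InvokeAI will look for. This is a sketch of the naming rule only, not InvokeAI's internal code, and sibling_files is a hypothetical name:

```python
from pathlib import Path

def sibling_files(model_path):
    """Given a legacy model file (.ckpt or .safetensors), return the
    like-named config and candidate VAE paths described in the notes.
    (Hypothetical helper for illustration.)"""
    stem = Path(model_path).with_suffix("")  # strip .ckpt/.safetensors
    return {
        "config": stem.with_suffix(".yaml"),
        "vae": [stem.parent / (stem.name + ".vae.pt"),
                stem.parent / (stem.name + ".vae.safetensors")],
    }
```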

Installation / Upgrading

To install or upgrade to InvokeAI 2.3.3 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

InvokeAI-installer-v2.3.3.zip

To update from 2.3.1 or 2.3.2 you may use the "update" option (choice 6) in the invoke.sh/invoke.bat launcher script and choose the option to update to 2.3.3.

Alternatively, you may use the installer zip file to update. When it asks you to confirm the location of the invokeai directory, enter the path to the directory you are already using if it differs from the one the installer selects automatically. When the installer asks whether you want to install into an existing directory, answer "yes".

Developers and power users can upgrade to the current version by activating the InvokeAI environment and then running pip install --use-pep517 --upgrade InvokeAI. You may pin a particular version by adding the version number to the command, as in InvokeAI==2.3.3. To upgrade to an xformers version if you are not currently using xformers, use pip install --use-pep517 --upgrade InvokeAI[xformers]. You can see which versions are available on the PyPI InvokeAI project page.

Known Bugs in 2.3.3

These are known bugs in the release.

  1. The Ancestral DPMSolverMultistepScheduler (k_dpmpp_2a) sampler is not yet implemented for diffusers models and will disappear from the WebUI Sampler menu when a diffusers model is selected.
  2. Windows Defender will sometimes raise Trojan or backdoor alerts for the codeformer.pth face restoration model, as well as the CIDAS/clipseg and runwayml/stable-diffusion-v1.5 models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.

What's Changed

  • Enhance model autodetection during import by @lstein in #3043
  • Correctly load legacy checkpoint files built on top of SD 2.0/2.1 bases, such as Illuminati 1.1 by @lstein in #3058
  • Add support for the TI embedding file format used by negativeprompts.safetensors by @lstein in #3045
  • Keep torch version at 1.13.1 by @JPPhoto in #2985
  • Fix textual inversion documentation and code by @lstein in #3015
  • fix corrupted outputs/.next_prefix file by @lstein in #3020
  • fix batch generation logfile name to be compatible with Windows OS by @lstein in #3018
  • Security patch: Scan all pickle files, including VAEs; default to safetensor loading by @lstein in #3011
  • prevent infinite loop when launching developer's console by @lstein in #3016
  • Prettier console-based frontend for invoke.sh on Linux systems with "dialog" installed by Joshua Kimsey.
  • ROCM debugging recipe from @EgoringKosmos


Acknowledgements

Many thanks to @psychedelicious, @blessedcoolant (Vic), @JPPhoto (Jonathan Pollack), @ebr (Eugene Brodsky), @JoshuaKimsey, @EgoringKosmos, and our crack team of Discord moderators, @gogurtenjoyer and @whosawhatsis, for all their contributions to this release.

Full Changelog: v2.3.2.post1...v2.3.3

InvokeAI Version 2.3.2

12 Mar 02:58
a044403

We are pleased to announce a bugfix update to InvokeAI with the release of version 2.3.2.

What's New in 2.3.2

This is a bugfix and minor feature release.

Bugfixes

Since version 2.3.1 the following bugs have been fixed:

  1. Black images appearing for potential NSFW images when generating with legacy checkpoint models and both --no-nsfw_checker and --ckpt_convert turned on.
  2. Black images appearing when generating from models fine-tuned on Stable-Diffusion-2-1-base. When importing V2-derived models, you may be asked to select whether the model was derived from a "base" model (512 pixels) or the 768-pixel SD-2.1 model.
  3. The "Use All" button was not restoring the Hi-Res Fix setting on the WebUI.
  4. When using the model installer console app, models failed to import correctly when importing from directories with spaces in their names. A similar issue with the output directory was also fixed.
  5. Crashes that occurred during model merging.
  6. The previous naming of Stable Diffusion base and 768 models has been restored.
  7. Upgraded to latest versions of diffusers, transformers, safetensors and accelerate libraries upstream. We hope that this will fix the assertion NDArray > 2**32 issue that MacOS users have had when generating images larger than 768x768 pixels. Please report back.

As part of the upgrade to diffusers, the location of the diffusers-based models has changed from models/diffusers to models/hub. When you launch InvokeAI for the first time, it will prompt you to OK a one-time move. This should be quick and harmless, but if you have modified your models/diffusers directory in some way, for example using symlinks, you may wish to cancel the migration and make appropriate adjustments.
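If you are unsure whether your models/diffusers directory contains symlinks, a quick check before accepting the migration can help you decide; this is a sketch, and find_symlinks is a hypothetical helper, not part of InvokeAI:

```python
from pathlib import Path

def find_symlinks(models_dir):
    """List symlinks under a models directory. If any exist, you may want
    to cancel the automatic diffusers->hub migration and move things by
    hand. (Hypothetical helper for illustration.)"""
    root = Path(models_dir)
    if not root.exists():
        return []
    return [p for p in root.rglob("*") if p.is_symlink()]
```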

New "invokeai-batch" Script

2.3.2 introduces a new command-line only script called invokeai-batch that can be used to generate hundreds of images from prompts and settings that vary systematically. This can be used to try the same prompt across multiple combinations of models, steps, CFG settings and so forth. It also allows you to template prompts and generate a combinatorial list like:

a shack in the mountains, photograph
a shack in the mountains, watercolor
a shack in the mountains, oil painting
a chalet in the mountains, photograph
a chalet in the mountains, watercolor
a chalet in the mountains, oil painting
a shack in the desert, photograph
...
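The expansion itself is plain combinatorics. This Python sketch reproduces the kind of list shown above; it is not invokeai-batch's template syntax (run invokeai-batch --help for that), just an illustration of the combinatorial product it generates:

```python
from itertools import product

# Example axes matching the prompt list above (illustrative values)
places = ["shack", "chalet"]
settings = ["mountains", "desert"]
styles = ["photograph", "watercolor", "oil painting"]

# Every combination of place x setting x style becomes one prompt
prompts = [f"a {p} in the {s}, {st}"
           for p, s, st in product(places, settings, styles)]

print(len(prompts))   # 12 combinations
print(prompts[0])     # a shack in the mountains, photograph
```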

If you have a system with multiple GPUs, or a single GPU with lots of VRAM, you can parallelize generation across the combinatorial set, reducing wait times and using your system's resources efficiently (make sure you have good GPU cooling).

To try invokeai-batch out, launch the "developer's console" using the invoke launcher script, or activate the invokeai virtual environment manually. From the console, run invokeai-batch --help to learn how the script works and to create your first template file for dynamic prompt generation.

Installation / Upgrading

To install or upgrade to InvokeAI 2.3.2 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

InvokeAI-installer-v2.3.2.post1.zip

To update from 2.3.1 you may use the "update" option (choice 6) in the invoke.sh/invoke.bat launcher script. Alternatively, you may use the installer. When it asks you to confirm the location of the invokeai directory, enter the path to the directory you are already using if it differs from the one the installer selects automatically. When the installer asks whether you want to install into an existing directory, answer "yes".

Developers and power users can upgrade to the current version by activating the InvokeAI environment and then running pip install --use-pep517 --upgrade InvokeAI. You may pin a particular version by adding the version number to the command, as in InvokeAI==2.3.2. To upgrade to an xformers version if you are not currently using xformers, use pip install --use-pep517 --upgrade InvokeAI[xformers]. You can see which versions are available on the PyPI InvokeAI project page.

Known Bugs in 2.3.2

These are known bugs in the release.

  1. The Ancestral DPMSolverMultistepScheduler (k_dpmpp_2a) sampler is not yet implemented for diffusers models and will disappear from the WebUI Sampler menu when a diffusers model is selected.
  2. Windows Defender will sometimes raise a Trojan alert for the codeformer.pth face restoration model. As far as we have been able to determine, this is a false positive and can be safely whitelisted.

What's Changed

  • fix python 3.9 compatibility by @mauwii in #2780
  • fixes crashes on merge in both WebUI and console by @lstein in #2800
  • hotfix for broken merge function by @lstein in #2801
  • [ui]: 2.3 hotfixes by @psychedelicious in #2806
  • restore previous naming scheme for sd-2.x models: by @lstein in #2820
  • quote output, embedding and autoscan directories in invokeai.init by @lstein in #2827
  • Introduce pre-commit, black, isort, ... by @mauwii in #2822
  • propose more restrictive codeowners by @lstein in #2781
  • fix newlines causing negative prompt to be parsed incorrectly by @lstein in #2838
  • Prevent crash when converting models from within CLI using legacy model URL by @lstein in #2846
  • [WebUI] Fix 'Use All' Params not Respecting Hi-Res Fix by @blhook in #2840
  • Disable built-in NSFW checker on models converted with --ckpt_convert by @lstein in #2908
  • Dynamic prompt generation script for parameter scans by @lstein in #2831

Full Changelog: v2.3.1...v2.3.2

Acknowledgements

Many thanks to @mauwii (Matthias Wilde), @psychedelicious, @blessedcoolant (Vic), @blhook (Pull Shark), and our crack team of Discord moderators, @gogurtenjoyer and @whosawhatsis, for all their contributions to this release.

v2.3.1.post2

22 Feb 20:28
650f4bb

We are pleased to announce a bugfix and quality of life update to InvokeAI with the release of version 2.3.1.

What's New in 2.3.1

This is primarily a bugfix release, but it does provide several new features that will improve the user experience.

Enhanced support for model management

InvokeAI now makes it convenient to add, remove and modify models. You can individually import models that are stored on your local system, scan an entire folder and its subfolders for models and import them automatically, and even directly import models from the internet by providing their download URLs. You also have the option of designating a local folder to scan for new models each time InvokeAI is restarted.

There are three ways of accessing the model management features:

  1. From the WebUI, click on the cube to the right of the model selection menu. This will bring up a form that allows you to import models individually from your local disk or scan a directory for models to import.


  2. Using the Model Installer App

Choose option (5) download and install models from the invoke launcher script to start a new console-based application for model management. You can use this to select from a curated set of starter models, or import checkpoint, safetensors, and diffusers models from a local disk or the internet. The example below shows importing two checkpoint URLs from popular SD sites and a HuggingFace diffusers model using its Repository ID. It also shows how to designate a folder to be scanned at startup time for new models to import.

Command-line users can start this app using the command invokeai-model-install.


  3. Using the Command Line Client (CLI)

The !install_model and !convert_model commands have been enhanced to allow entering of URLs and local directories to scan and import. The first command installs .ckpt and .safetensors files as-is. The second one converts them into the faster diffusers format before installation.

Internally InvokeAI is able to probe the contents of a .ckpt or .safetensors file to distinguish among v1.x, v2.x and inpainting models. This means that you do not need to include "inpaint" in your model names to use an inpainting model. Note that Stable Diffusion v2.x models will be autoconverted into a diffusers model the first time you use it.
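As an illustration of how such probing is possible, the .safetensors format stores a JSON header (an 8-byte little-endian length prefix followed by the JSON itself) whose tensor names reveal the model architecture. The sketch below reads that header; it is not InvokeAI's actual probing code, just a demonstration that the file can be inspected without loading any weights:

```python
import json
import struct

def safetensors_keys(path):
    """Return tensor names from a .safetensors header.
    Format: 8-byte little-endian u64 header length, then a JSON object
    mapping tensor names (plus optional __metadata__) to descriptors."""
    with open(path, "rb") as f:
        (n,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(n))
    return [k for k in header if k != "__metadata__"]
```

Probing logic can then look for characteristic key names to distinguish v1.x, v2.x and inpainting checkpoints.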

Please see INSTALLING MODELS for more information on model management.

An Improved Installer Experience

The installer now launches a console-based UI for setting and changing commonly-used startup options:


After selecting the desired options, the installer installs several support models needed by InvokeAI's face reconstruction and upscaling features and then launches the interface for selecting and installing models shown earlier. At any time, you can edit the startup options by launching invoke.sh/invoke.bat and entering option (6) "change InvokeAI startup options".

Command-line users can launch the new configure app using invokeai-configure.

This release also comes with a renewed updater. To do an update without going through a whole reinstallation, launch invoke.sh or invoke.bat and choose option (9) "update InvokeAI". This will bring you to a screen that prompts you to update to the latest released version, to the most current development version, or to any released or unreleased version you choose by selecting its tag or branch.


Command-line users can run this interface by typing invokeai-update

Image Symmetry Options

You can now enforce horizontal and vertical symmetry during generation. These options wait until a selected step in the generation process and then turn on a mirror-image effect. In addition to generating some cool images, you can use this to make side-by-side comparisons of how an image will look with more or fewer steps. Access this option from the WebUI by selecting Symmetry from the image generation settings, or within the CLI with the options --h_symmetry_time_pct and --v_symmetry_time_pct (like all other options, these can be abbreviated, e.g. to --h_sym and --v_sym).
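Conceptually, the symmetry pass is a conditional mirror applied inside the sampler loop: before the chosen time percentage, generation proceeds normally; afterwards, one half of the latent is reflected onto the other. A toy sketch of the idea, using plain Python lists in place of latent tensors (the function and parameter names here are invented for illustration):

```python
def apply_h_symmetry(latent, step, total_steps, sym_time_pct):
    """Mirror the left half of a 2D latent onto the right half once the
    denoising process has passed sym_time_pct of its total steps.

    `latent` is a list of rows; a real implementation would operate on
    the latent tensor inside the sampler loop.
    """
    if step < sym_time_pct * total_steps:
        return latent  # too early: leave the latent untouched
    width = len(latent[0])
    half = width // 2
    # the right side becomes a reflection of the left side
    return [row[:half] + list(reversed(row[:width - half])) for row in latent]
```

Setting sym_time_pct close to 0 mirrors almost from the start (strong symmetry); setting it close to 1 leaves the image nearly untouched.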


A New Unified Canvas Look

This release introduces a beta version of the WebUI Unified Canvas. To try it out, open up the settings dialogue in the WebUI (gear icon) and select Use Canvas Beta Layout:


Refresh the screen and go to the Unified Canvas (left side of screen, third icon from the top). The new layout is designed to provide more space to work in and to keep the image controls close to the image itself:


Model conversion and merging within the WebUI

The WebUI now has an intuitive interface for model merging, as well as for permanent conversion of models from legacy .ckpt/.safetensors formats into diffusers format. These options are also available directly from the invoke.sh/invoke.bat scripts.

An easier way to contribute translations to the WebUI

We have migrated our translation efforts to Weblate, a FOSS translation product. Maintaining the growing project's translations is now far simpler for the maintainers and community. Please review our brief translation guide for more information on how to contribute.

Numerous internal bugfixes and performance issues

This release quashes multiple bugs that were reported in 2.3.0. Major internal changes include upgrading to diffusers 0.13.0 and using the compel library for prompt parsing. See the Detailed Change Log for a full list of bugs caught and squished.

Summary of InvokeAI command line scripts (all accessible via the launcher menu)

Command Description
invokeai Command line interface
invokeai --web Web interface
invokeai-model-install Model installer with console forms-based front end
invokeai-ti --gui Textual inversion, with a console forms-based front end
invokeai-merge --gui Model merging, with a console forms-based front end
invokeai-configure Startup configuration; can also be used to reinstall support models
invokeai-update InvokeAI software updater

Installation

To install or upgrade to InvokeAI 2.3.1, please download the zip file below, unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

InvokeAI-installer-v2.3.1.post2.zip

If you are upgrading from an earlier version of InvokeAI, run the installer, and when it asks you to confirm the location of the invokeai directory, type in the path to the directory you are already using if it is not the same as the one the installer selects automatically. When the installer asks you to confirm that you want to install into an existing directory, simply answer "yes".

Developers and power users can upgrade to the current version by activating the InvokeAI environment and then running pip install --use-pep517 --upgrade InvokeAI. You may specify a particular version by adding the version number to the command, as in InvokeAI==2.3.1. If you are not currently using xformers and wish to, use pip install --use-pep517 --upgrade InvokeAI[xformers]. You can see which versions are available by going to the PyPI InvokeAI Project Page.

Last Feature Release on the 2.3.x Branch

This will be the last feature release on the 2.3.x branch. The development team is migrating to a new software architecture called Nodes, which will provide enhanced workflow management features as well as a much easier way for community developers to contribute to the project. We anticipate the transition taking 4-8 weeks (spring 2023). Until that time, we will be releasing bugfixes and other minor updates only.

Known Bugs in 2.3.1

These are known bugs in the release.

  1. MacOS users generating 768x768 pixel images or greater using diffusers models may experience a hard crash with assertion NDArray > 2**32 This appears to be an issu...

InvokeAI Version 2.3.0

03 Feb 03:26
f5d1fbd

We are pleased to announce a features and performance update to InvokeAI with the release of version 2.3.0.

What's New in 2.3.0

There are multiple internal and external changes in this version of InvokeAI which greatly enhance the developer and user experiences respectively.

Migration to Stable Diffusion diffusers models

Previous versions of InvokeAI supported the original model file format introduced with Stable Diffusion 1.4. In the original format, known variously as "checkpoint", or "legacy" format, there is a single large weights file ending with .ckpt or .safetensors. Though this format has served the community well, it has a number of disadvantages, including file size, slow loading times, and a variety of non-standard variants that require special-case code to handle. In addition, because checkpoint files are actually a bundle of multiple machine learning sub-models, it is hard to swap different sub-models in and out, or to share common sub-models. A new format, introduced by the StabilityAI company in collaboration with HuggingFace, is called diffusers and consists of a directory of individual models. The most immediate benefit of diffusers is that they load from disk very quickly. A longer term benefit is that in the near future diffusers models will be able to share common sub-models, dramatically reducing disk space when you have multiple fine-tune models derived from the same base.
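For reference, a diffusers model is a directory of named sub-models plus an index file, rather than a single monolithic weights file. The layout of a standard Stable Diffusion pipeline looks roughly like this (exact contents vary by model):

```
stable-diffusion-v1-5/
├── model_index.json     # names the pipeline class and its sub-models
├── unet/                # the denoising UNet (the largest component)
├── vae/                 # latent encoder/decoder
├── text_encoder/        # CLIP text encoder
├── tokenizer/
├── scheduler/
└── safety_checker/
```

Each sub-directory holds its own config and weights, which is what makes fast loading and future sub-model sharing possible.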

When you perform a new install of version 2.3.0, you will be offered the option to install the diffusers versions of a number of popular SD models, including Stable Diffusion versions 1.5 and 2.1 (including the 768x768 pixel version of 2.1). These will act and work just like the checkpoint versions. Do not be concerned if you already have a lot of ".ckpt" or ".safetensors" models on disk! InvokeAI 2.3.0 can still load these and generate images from them without any extra intervention on your part.

To take advantage of the optimized loading times of diffusers models, InvokeAI offers options to convert legacy checkpoint models into optimized diffusers models. If you use the invokeai command line interface, the relevant commands are:

  • !convert_model -- Take the path to a local checkpoint file or a URL that is pointing to one, convert it into a diffusers model, and import it into InvokeAI's models registry file.
  • !optimize_model -- If you already have a checkpoint model in your InvokeAI models file, this command will accept its short name and convert it into a like-named diffusers model, optionally deleting the original checkpoint file.
  • !import_model -- Take the local path of either a checkpoint file or a diffusers model directory and import it into InvokeAI's registry file. You may also provide the ID of any diffusers model that has been published on the HuggingFace models repository and it will be downloaded and installed automatically.

The WebGUI offers similar functionality for model management.

For advanced users, new command-line options provide additional functionality. Launching invokeai with the argument --autoconvert <path to directory> takes the path to a directory of checkpoint files, automatically converts them into diffusers models and imports them. Each time the script is launched, the directory will be scanned for new checkpoint files to be loaded. Alternatively, the --ckpt_convert argument will cause any checkpoint or safetensors model that is already registered with InvokeAI to be converted into a diffusers model on the fly, allowing you to take advantage of future diffusers-only features without explicitly converting the model and saving it to disk.
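The startup scan itself boils down to walking the designated folder for weight files that are not yet registered. A minimal sketch of that logic (the function name and registry representation are invented for illustration; the real scan also triggers conversion and registration):

```python
from pathlib import Path

WEIGHT_SUFFIXES = {".ckpt", ".safetensors"}

def find_new_checkpoints(scan_dir, registered):
    """Walk a directory tree and return checkpoint files not yet registered,
    mimicking what a startup --autoconvert scan has to do.

    `registered` is the set of path strings already present in the models
    registry; anything new would then be converted and imported.
    """
    scan_dir = Path(scan_dir)
    found = sorted(
        p for p in scan_dir.rglob("*")
        if p.is_file() and p.suffix.lower() in WEIGHT_SUFFIXES
    )
    return [p for p in found if str(p) not in registered]
```

Because only unregistered files are returned, re-running the scan at every launch is cheap and idempotent.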

Please see INSTALLING MODELS for more information on model management in both the command-line and Web interfaces.

Support for the XFormers Memory-Efficient Crossattention Package

On CUDA (Nvidia) systems, version 2.3.0 supports the XFormers library. Once installed, the xformers package dramatically reduces the memory footprint of loaded Stable Diffusion model files and modestly increases image generation speed. xformers will be installed and activated automatically if you specify a CUDA system at install time.

The caveat with using xformers is that it introduces slightly non-deterministic behavior, and images generated using the same seed and other settings will be subtly different between invocations. Generally the changes are unnoticeable unless you rapidly shift back and forth between images, but to disable xformers and restore fully deterministic behavior, you may launch InvokeAI using the --no-xformers option. This is most conveniently done by opening the file invokeai/invokeai.init with a text editor, and adding the line --no-xformers at the bottom.

A Negative Prompt Box in the WebUI

There is now a separate text input box for negative prompts in the WebUI. This is convenient for stashing frequently-used negative prompts ("mangled limbs, bad anatomy"). The [negative prompt] syntax continues to work in the main prompt box as well.
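At its core, the bracket syntax just means that bracketed spans are routed to the negative prompt. A toy sketch of the idea (the real parsing is done by the compel library, which also handles attention weights, blends and nesting; the function name here is invented):

```python
import re

def split_negative_prompt(prompt):
    """Split a prompt into (positive, negative) parts, treating any
    [bracketed] sections as negative prompt material.

    A minimal illustration of the [negative prompt] syntax only.
    """
    negatives = re.findall(r"\[([^\]]*)\]", prompt)
    positive = re.sub(r"\[[^\]]*\]", "", prompt)
    positive = re.sub(r"\s{2,}", " ", positive).strip()
    return positive, ", ".join(n.strip() for n in negatives)
```

With the separate negative prompt box, the second element simply comes from its own input field instead of being extracted from brackets.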

To see exactly how your prompts are being parsed, launch invokeai with the --log_tokenization option. The console window will then display the tokenization process for both positive and negative prompts.

Model Merging

Version 2.3.0 offers an intuitive user interface for merging up to three Stable Diffusion models. Model merging allows you to mix the behavior of models to achieve very interesting effects. To use it, each of the models must already be imported into InvokeAI and saved in diffusers format. Launch the merger from a new menu item in the InvokeAI launcher script (invoke.sh, invoke.bat) or directly from the command line with invokeai-merge --gui. You will be prompted to select the models to merge, the proportions in which to mix them, and the mixing algorithm. The script will create a new merged diffusers model and import it into InvokeAI for your use.
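The simplest mixing algorithm, weighted sum, computes each merged parameter as a convex combination of the corresponding parameters of the input models. A toy sketch with plain floats standing in for tensors (names invented for illustration; the actual merge operates on diffusers models via torch):

```python
def merge_weighted_sum(state_dicts, alphas):
    """Merge model weights as a convex combination:
    out[name] = sum(alpha_i * model_i[name]) over all models.

    This illustrates the basic "weighted sum" strategy only; other
    interpolation methods exist, and real merges require the models
    to have compatible architectures.
    """
    if abs(sum(alphas) - 1.0) > 1e-6:
        raise ValueError("mixing proportions must sum to 1")
    common = set(state_dicts[0])
    for sd in state_dicts[1:]:
        common &= set(sd)  # only merge parameters every model shares
    return {
        name: sum(a * sd[name] for a, sd in zip(alphas, state_dicts))
        for name in common
    }
```

With proportions (0.25, 0.75), the result sits three quarters of the way from the first model to the second, parameter by parameter.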

See MODEL MERGING for more details.

Textual Inversion Training

Textual Inversion (TI) is a technique for training a Stable Diffusion model to emit a particular subject or style when triggered by a keyword phrase. You can perform TI training by placing a small number of images of the subject or style in a directory, and choosing a distinctive trigger phrase, such as "pointillist-style". After successful training, the subject or style will be activated by including <pointillist-style> in your prompt.

Previous versions of InvokeAI were able to perform TI, but it required using a command-line script with dozens of obscure arguments. Version 2.3.0 features an intuitive TI frontend that will build a TI model on top of any diffusers model. You can launch training from a new item in the launcher script or from the command line with invokeai-ti --gui.

See TEXTUAL INVERSION for further details.

A New Installer Experience

The InvokeAI installer has been upgraded in order to provide a smoother and hopefully more glitch-free experience. In addition, InvokeAI is now packaged as a PyPi project, allowing developers and power-users to install InvokeAI with the command pip install InvokeAI --use-pep517. Please see Installation for details.

Developers should be aware that the pip installation procedure has been simplified and that the conda method is no longer supported at all. Accordingly, the environments-and-requirements directory has been deleted from the repository.

Installation

To install or upgrade to InvokeAI 2.3.0, please download the zip file below, unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

InvokeAI-installer-v2.3.0.zip

If you are upgrading from an earlier version of InvokeAI, all you have to do is to run the installer for your platform. When the installer asks you to confirm the location of the invokeai directory, type in the path to the directory you are already using, if not the same as the one selected automatically by the installer. When the installer asks you to confirm that you want to install into an existing directory, simply indicate "yes".

Developers and power users can upgrade to the current version by activating the InvokeAI environment and then running pip install --use-pep517 --upgrade InvokeAI. You may specify a particular version by adding the version number to the command, as in InvokeAI==2.3.1. You can see which versions are available by going to the PyPI InvokeAI Project Page.

Command-line name changes

All of InvokeAI's functionality, including the WebUI, command-line interface, textual inversion training and model merging, can be accessed from the invoke.sh and invoke.bat launcher scripts. The menu of options has been expanded to add the new functionality. For the convenience of developers and power users, we have normalized the names of the InvokeAI command-line scripts:

  • invokeai -- Command-line client
  • invokeai --web -- Web GUI
  • `invokeai-me...

InvokeAI Version 2.2.5 - A Stable Diffusion Toolkit

01 Jan 19:35

We are pleased to announce a features and bugfix update to InvokeAI with the release of version 2.2.5.

What's New in 2.2.5

WebUI

  • The WebGUI now features a Model Manager that lets you load and edit models interactively. It also allows you to pick a folder to scan and import new .ckpt files. @blessedcoolant
  • Add Unified Canvas Alternate UI Beta: We added a new alternative UI to the Unified Canvas that mimics traditional photo editing applications you might be familiar with. You can switch to this new UI in the Settings menu by activating the new toggle option. @blessedcoolant
  • Restore and Upscale hotkeys have been changed from ‘R’ and ‘U’ to ‘Shift+R’ and ‘Shift+U’ respectively. This was done to avoid accidental keystrokes triggering these operations. @blessedcoolant
  • Added localization: support has been added for Russian, Italian, Portuguese (Brazilian), German, and Polish @blessedcoolant

Translators: if you are interested in translating InvokeAI to your language, please feel free to reach out to us on Discord.

CLI

  • Add the --karras_max option to the command line. @lstein
  • Add the --version option to get the version of the app. @lstein
  • Remove the requirement for a Hugging Face token, now that it is no longer required. @ebr

Docker

  • Optimize the Dockerfile. @mauwii
  • Allow usage of GPUs in Docker. @xrd

Bug Fixes & Updates

  • Fix not being able to load the model while inpainting when using the free_gpu_mem option. @rmagur1203
  • Various installer improvements. @lstein
  • Fix segfault error on MacOS when using homebrew. @ebr
  • Fix a None type error when nsfw_checker was turned on. @limonspb
  • Cap the number of tokens at 75 and handle blends accordingly. @damian0815
  • [CLI] Fix the time step not displaying correctly during img2img. @wfng92
  • [WebUI] Fix the initial theme setting not displaying correctly in the selector after reload. @kasbah
  • [WebUI] Fix Hires Fix on the Img2Img tab @hipsterusername
  • Fix embeddings not working correctly. @blessedcoolant
  • Fix an issue where the --config launch argument was not being recognized. @blessedcoolant
  • Retrieve threshold from an image even if it is 0. @JPPhoto
  • Add --root_dir as an alternate arg for --root during launch.
  • Relax HuggingFace login requirements during setup. @ebr
  • Fixed an issue where the --no-patchmatch option would not work. @lstein
  • Fixed a crash in img2img @lstein
  • Documentation, updates, typos and fixes. @limonspb, @lstein, @hipsterusername, @mauwii

Developer

  • Add concurrency to Github actions. @mauwii
  • Github action to lint python files with pyflakes @keturn
  • Fix circular dependencies on the frontend @kasbah
  • Add Github action for linting the frontend. @kasbah
  • Fix all linting warnings on the frontend. @kasbah
  • Add auto formatting for the frontend. @kasbah

New Contributors

Full Changelog: v2.2.4...latest

Installation

To install InvokeAI 2.2.5 on a new system, please download the zip file below, unzip it, and run the script install.sh (Macintosh, Linux) or install.bat (Windows). A walkthrough can be found at Installation Overview.
InvokeAI-installer-v2.2.5p2-linux.zip
InvokeAI-installer-v2.2.5p2-mac.zip
InvokeAI-installer-v2.2.5p2-windows.zip

Upgrading

If you have InvokeAI 2.2.4 installed, you can upgrade it quickly using an update script. Download the zip file below, and unpack it. Place the file update.bat (Windows) or update.sh (Linux/Mac) into your invokeai folder, replacing the update script that was previously there. Then launch the new update script from the command line or by double-clicking.

InvokeAI-updater-v2.2.5p2.zip

Help

Please see the InvokeAI Issues Board or the InvokeAI Discord for assistance from the development team.

InvokeAI Version 2.2.4 - A Stable Diffusion Toolkit

11 Dec 05:39
bd0c0d7

With InvokeAI 2.2, this project now provides enthusiasts and professionals a robust workflow solution for creating AI-generated and human facilitated compositions. Additional enhancements have been made as well, improving safety, ease of use, and installation.

Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a 512x768 image (and less for smaller images), and is compatible with Windows/Linux/Mac (M1 & M2).

You can see the release video here, which introduces the main WebUI enhancement for version 2.2 - The Unified Canvas. This new workflow is the biggest enhancement added to the WebUI to date, and unlocks a stunning amount of potential for users to create and iterate on their creations. The following sections describe what's new for InvokeAI.

Version 2.2.4 is a bugfix release. The major user-visible change is that we have overhauled the installation experience to make it faster and more stable. Please see Installation Overview for instructions on using the new installer, and see the .zip files in the Assets section below for the installer for your preferred platform. Note that you will need to install Python 3.9 or 3.10 to use the new installation method.

The new installers are located here. They were updated on 13 December in order to prevent a segfault crash on certain Macintosh systems.

There are a number of installation-related changes that previous InvokeAI users should be aware of:

Everything now lives in the invokeai directory.

Previously there were two directories to worry about, the directory that contained the InvokeAI source code and the launcher scripts, and the invokeai directory that contained the models files, embeddings, configuration and outputs. With the 2.2.4 release, this dual system is done away with, and everything, including the invoke.bat and invoke.sh launcher scripts, now live in a directory named invokeai. By default this directory is located in your home directory (e.g. \Users\yourname on Windows), but you can select where it goes at install time.

InvokeAI-installer-2.2.4-p5-linux.zip
InvokeAI-installer-2.2.4-p5-mac.zip
InvokeAI-installer-2.2.4-p5-windows.zip

After installation, you can delete the install directory (the one that the zip file creates when it unpacks). Do not delete or move the invokeai directory!

The .invokeai initialization file has been renamed invokeai/invokeai.init

You can place frequently-used startup options in this file, such as the default number of steps or your preferred sampler. To keep everything in one place, this file has now been moved into the invokeai directory and is named invokeai.init.

To update from Version 2.2.3

The easiest route is to download and unpack one of the 2.2.4 installer files. When it asks you for the location of the invokeai runtime directory, respond with the path to the directory that contains your 2.2.3 invokeai. That is, if invokeai lives at C:\Users\fred\invokeai, then answer with C:\Users\fred and answer "Y" when asked if you want to reuse the directory.

The update.sh (update.bat) script that came with the 2.2.3 source installer does not know about the new directory layout and won't be fully functional.

To update to 2.2.5 (and beyond) there's now an update path.

As they become available, you can update to more recent versions of InvokeAI using an update.sh (update.bat) script located in the invokeai directory. Running it without any arguments will install the most recent version of InvokeAI. Alternatively, you can install a specific release by passing the update script the URL of the desired release's zip file, which you can find by clicking on the green "Code" button on this repository's home page. Here are some examples:

# 2.2.4 release
update.sh https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v2.2.4.zip

# 2.2.5 release  (don't try; it doesn't exist yet!)
update.sh https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v2.2.5.zip

# current development version
update.sh https://github.com/invoke-ai/InvokeAI/archive/main.zip

# feature branch 3d-movies (don't try; it doesn't exist yet!)
update.sh https://github.com/invoke-ai/InvokeAI/archive/3d-movies.zip

Other 2.2.4 Improvements

New Contributors

Full Changelog: v2.2.3...v2.2.4

InvokeAI 2.2.3

02 Dec 17:54

Note: This point release removes references to the binary installer from the installation guide. The binary installer is not stable at the current time. First time users are encouraged to use the "source" installer as described in Installing InvokeAI with the Source Installer

With InvokeAI 2.2, this project now provides enthusiasts and professionals a robust workflow solution for creating AI-generated and human facilitated compositions. Additional enhancements have been made as well, improving safety, ease of use, and installation.

Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a 512x768 image (and less for smaller images), and is compatible with Windows/Linux/Mac (M1 & M2).

You can see the release video here, which introduces the main WebUI enhancement for version 2.2 - The Unified Canvas. This new workflow is the biggest enhancement added to the WebUI to date, and unlocks a stunning amount of potential for users to create and iterate on their creations. The following sections describe what's new for InvokeAI.

Update 1 December 2022 -

  • The Unified Canvas: The Web UI now features a fully fitted infinite canvas that is capable of outpainting, inpainting, img2img and txt2img so you can streamline and extend your creative workflow. The canvas was rewritten to improve performance greatly and bring support for a variety of features like Paint Brushing, Unlimited History, Real-Time Progress displays and more.

  • Embedding Management: Easily pull from the top embeddings on Huggingface directly within Invoke, using the embed token to generate the exact style you want. With the ability to use multiple embeds simultaneously, you can easily import and explore different styles within the same session!

  • Viewer: The Web UI now also features a Viewer that lets you inspect your invocations in greater detail. No more opening the images in your external file explorer, even with large upscaled images!

  • 1 Click Installer Launch: With our official 1-click installation launch, using our tool has never been easier. Our OS specific bundles (Mac M1/M2, Windows, and Linux) will get everything set up for you. Click and get going - It’s now simple to get started with InvokeAI. See Installation.

  • Model Safety: A checkpoint scanner (picklescan) has been added to the initialization process of new models, helping with security against malicious and evil pickles.

  • DPM++2 Experimental Samplers: New samplers have been added! Please note that these are experimental, and are subject to change in the future as we continue to enhance our backend system.

First-time Installation

For those installing InvokeAI for the first time, please use this recipe:

For automated installation, open up the "Assets" section below and download one of the InvokeAI-*.zip files. The instructions in the Installation section of the InvokeAI docs will provide you with a guide to which file to download and what to do with it when you get it.

For manual installation, download one of the "Source Code" archive files located in the Assets below, unpack the file, and enter the InvokeAI directory that it creates. Alternatively, you may clone the source code repository using the command git clone http://github.com/invoke-ai/InvokeAI and follow the instructions in Manual Installation.

Upgrading

For those wishing to upgrade from an earlier version, please use this recipe:

  1. Download one of the "Source Code" archive files located in the Assets below.
  2. Unpack the file, and enter the InvokeAI directory that it creates. Alternatively, if you have previously cloned the InvokeAI repository, you may update it by entering the InvokeAI directory and running git checkout main followed by git pull.
  3. Select the appropriate environment file for your operating system and GPU hardware. A number of files can be found in the new environments-and-requirements directory:

environment-lin-amd.yml # Linux with an AMD (ROCm) GPU
environment-lin-cuda.yml # Linux with an NVIDIA CUDA GPU
environment-mac.yml # Macintoshes with MPS acceleration
# Windows with an NVIDIA CUDA GPU
environment-win-cuda.yml # Windows with an NVIDIA CUDA GPU

Important step that developers tend to miss! Either copy this environment file to the root directory with the name environment.yml, or make a symbolic link from environment.yml to the selected environment file:

Macintosh and Linux using a symbolic link:
ln -sf environments-and-requirements/environment-xxx-yyy.yml environment.yml

Replace xxx and yyy with the appropriate OS and GPU codes.

Windows:
copy environments-and-requirements\environment-win-cuda.yml environment.yml

When this is done, confirm that a file environment.yml has been created in the InvokeAI root directory and that it points to the correct file in the environments-and-requirements directory.
Now run the following commands in the InvokeAI directory.

conda env update
conda activate invokeai
python scripts/preload_models.py

Additional installation information, including recipes for installing without Conda, can be found in Manual Installation

Known Bugs

  1. If you use the binary installer, the autocomplete function will not work on the command line client due to limitations of the version of python that the installer uses. However, all other functions of the command line client, and all features of the web UI will function perfectly well.
  2. The PyPatchMatch module, which provides excellent outpainting and inpainting results, does not currently work on Macintoshes. It will work on Linux after a support library is added to the system. See Installing PyPatchMatch.
  3. InvokeAI 2.2.0 does not support the Stable Diffusion 2.0 model at the current time, but is expected to provide full support in the near future.
  4. The 1650 and 1660ti GPU cards only run in full-precision mode, which greatly limits the size of the models you can load and images you can generate with InvokeAI.

Contributing

Please see CONTRIBUTORS for a list of the many individuals who contributed to this project. Also many thanks to the dozens of patient testers who flushed out bugs in this release before it went live.

Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code cleanup, testing, or code reviews, is very much encouraged to do so. If you are unfamiliar with how to contribute to GitHub projects, here is a Getting Started Guide. Unlike previous versions of InvokeAI, we have now moved all development to the main branch, so please make your pull requests against this branch.

Support

For support, please use this repository's GitHub Issues tracking service. Live support is also available on the InvokeAI Discord server.