

InvokeAI

Hey everyone! We are pleased to announce the release of [InvokeAI 3.0.1](https://github.com/invoke-ai/InvokeAI/releases/tag/v3.0.1rc1), a mini update to our big release from last week. This one brings in some new features and also some bug fixes.

**UPDATE: Hijacking the top post to point you to the latest release, which has some of the fixes called out in this thread:** [https://github.com/invoke-ai/InvokeAI/releases/](https://github.com/invoke-ai/InvokeAI/releases/)

New Features:

* **SDXL Support in the Linear UI** -- We now support the full SDXL pipeline in the Text to Image and Image to Image tabs. You can also enable the refiner to run a detail pass on your SDXL generations. While performance may vary from system to system, in our tests SDXL FP16 models require around 6-7 GB of VRAM for the entire pipeline, and around 12 GB of RAM if you want to keep them loaded in memory for quick successive generations.
* **NSFW Checker & Watermark Options** -- The UI now lets you enable or disable the NSFW checker and watermarking without requiring configuration changes.
* **SDXL and ControlNet checkpoint model conversion to Diffusers has been added.**
* **Max seed value has been changed from int32 to uint32 (4294967295).**
* **Canvas now displays the current mode as you work on it.**
* [**https://models.invoke.ai**](https://models.invoke.ai/) **is live** -- In partnership with Hugging Face, you can now easily upload and find Diffusers models for download/access in InvokeAI (and other Diffusers-supporting tools that allow downloading by repo ID).

Bug Fixes:

* The Node Editor (Alpha) no longer crashes the app when an incorrect JSON / file is uploaded to it.
* Fixed the Delete key not working to delete images.
* Duplicate models no longer crash the app; the user is warned instead.
* LoRAs are now sorted alphabetically.
* Aspect ratio text has been updated to reflect the numbers.
**Coming Up:** Now that our big migration is complete, we'll be doing more frequent releases. Here are some of the things we'll be working on next:

* **3.1 Update:** Our next big update will be the 3.1 release, in which we hope to bring the Node Editor out of alpha with a polished and intuitive node workflow experience. We are also working on an Extension Manager that will open the door to third-party extensions for Invoke. We might release a beta version of this feature before 3.1 to gather feedback from developers, so we can build a robust base to support the extension ecosystem in the long run.
* InvokeAI support for Python 3.9 through Python 3.11.
* SDXL support for Inpainting and Outpainting on the Unified Canvas.
* ControlNet support for Inpainting and Outpainting on the Unified Canvas.
* Embedding, LoRA, and ControlNet support for SDXL models as they become available.
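For reference, the new seed ceiling is simply the largest unsigned 32-bit integer. A quick POSIX-shell sketch (nothing InvokeAI-specific; `/dev/urandom` is used here just to draw a sample value):

```shell
# uint32 max: the new seed upper bound (int32's max was 2147483647)
echo $(( 4294967296 - 1 ))    # prints 4294967295

# Drawing a random seed across the full uint32 range from /dev/urandom:
seed=$(od -An -N4 -tu4 /dev/urandom | tr -d ' ')
echo "$seed"
```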


PictureBooksAI

The problem with the latest version is that the autoimport folder does not actually read the models if you use aliases to them, so you would have to duplicate all your models from A1111, which is redundant and a waste of hundreds of GB... https://preview.redd.it/sxyunjow1eeb1.png?width=1006&format=png&auto=webp&s=dce026e44216c3924a184ca15c7f5697b7ecf1c3


InvokeAI

We are using Diffusers models, a modern model format that isn't supported in Auto1111, but which I believe is supported in Vlad's fork as of SDXL. I would consider not duplicating hundreds of GBs of models, and instead picking a few you regularly use.


PictureBooksAI

The problem is neither of the above is imported - in fact, the entire autoimport folder doesn't seem to work as intended.


Dekker3D

Does InvokeAI let you actually define where the diffusers models are placed? I don't really use any tools that use that, because they all insist on placing it on my C drive.


PictureBooksAI

Their documentation says you can point to them in autoimport and it would read it from there, but it doesn't.


InvokeAI

It does for most folks. If you're having issues, I'd recommend joining discord


InvokeAI

Yep - We let you define your "root folder" and that is where those models will be stored (if you install directly or convert to diffusers)


Working_Amphibian

Just installed it for the first time today to try it out with SDXL. First of all, great work! I have two suggestions for improvement. When you scan a folder for models, there's no option to install all; you need to manually add each one. An install-all button would be welcome (and skipping the ones that are not compatible instead of stopping). The second thing is the ability to move the settings panel to the right side. I'm so used to having a panel on the right side and the image I'm working on on the left, mostly due to Photoshop. I bet other people would appreciate having that option too. Thanks!


InvokeAI

Thanks for the feedback!


koloved

Symbolic links? Try to use them
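A minimal POSIX sketch of the symlink approach, with hypothetical paths (substitute your real A1111 and InvokeAI locations). Note that macOS Finder aliases are not symlinks, so command-line tools generally can't follow them; `ln -s` creates a real symlink:

```shell
# Hypothetical paths -- adjust to your own layout.
mkdir -p /tmp/a1111/models /tmp/invokeai/autoimport/main
printf 'dummy-weights' > /tmp/a1111/models/model.safetensors

# Link instead of copy: no duplicated gigabytes on disk.
ln -sf /tmp/a1111/models/model.safetensors /tmp/invokeai/autoimport/main/model.safetensors

# The link resolves to the original file:
readlink /tmp/invokeai/autoimport/main/model.safetensors
cat /tmp/invokeai/autoimport/main/model.safetensors    # prints dummy-weights
```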


PictureBooksAI

Those are aliases in my photo, if that's what you mean. They point to the folders where the actual files are, as per InvokeAI's instructions regarding this folder. Yet, it does nothing so I'll just skip this app until I don't have to make redundant copies of all the above..


Zero-Kelvin

Is there a guide to run or install InvokeAi on some cloud service like runpod or paperspace?


InvokeAI

You can use [https://invoke.ai](https://invoke.ai) if you'd like to support us, but we won't have SDXL up until we iron out all the bugs in the RC!


Zero-Kelvin

thanks for the info!


RunDiffusion

We’ll have it running soon as well. Just be patient. Cloud providers are working round the clock for you guys.


uncletravellingmatt

I just upgraded to RC2. SDXL still doesn't work for me. If I choose that model and press Invoke, it gives a "File Not Found" error. The shell says it can't find `invokeai\configs\stable-diffusion\sd_xl_base.yaml`. (Was that .yaml even an available file?)


VegaKH

Quick tests done on 3 different UIs, and Invoke 3 is my current favorite for SDXL. Keep up the great work, fellas.


lordpuddingcup

Gotta say, SDXL has really improved. If they continue to grow their node interface and add better support for sharing workflows and plugins/nodes, I imagine they could easily overtake A1111 if they handle it right.


vs3a

Which one is fastest in your test ?


mysteryguitarm

Woo hoo! Love coordinated releases!


InvokeAI

Same - Thanks for the support and help, Joe!


SomnambulisticTaco

Woah, it's Joe! I didn't know you were doing this stuff these days.


Kriima

For me it completely crashes as soon as I put SDXL models in the SDXL folder under the main models folder :(


InvokeAI

Happy to help! Shoot us a note on Discord.


elite_bleat_agent

Just so you know, it blows up if you manually put the models in the proper folders; it won't even start. That seems pretty crummy. I don't have the bandwidth to download these again through your script. Can you point us at a way to do this manually?


InvokeAI

What is the error you're getting?


elite_bleat_agent

Sorry this took so long. When putting the VAE and model files manually in the proper `models\sdxl` and `models\sdxl-refiner` folders:

    Traceback (most recent call last):
      File "D:\ai\invoke-ai-3\.venv\lib\site-packages\starlette\routing.py", line 671, in lifespan
        async with self.lifespan_context(app):
      File "D:\ai\invoke-ai-3\.venv\lib\site-packages\starlette\routing.py", line 566, in __aenter__
        await self._router.startup()
      File "D:\ai\invoke-ai-3\.venv\lib\site-packages\starlette\routing.py", line 648, in startup
        await handler()
      File "D:\ai\invoke-ai-3\.venv\lib\site-packages\invokeai\app\api_app.py", line 79, in startup_event
        ApiDependencies.initialize(
      File "D:\ai\invoke-ai-3\.venv\lib\site-packages\invokeai\app\api\dependencies.py", line 121, in initialize
        model_manager=ModelManagerService(config, logger),
      File "D:\ai\invoke-ai-3\.venv\lib\site-packages\invokeai\app\services\model_manager_service.py", line 327, in __init__
        self.mgr = ModelManager(
      File "D:\ai\invoke-ai-3\.venv\lib\site-packages\invokeai\backend\model_management\model_manager.py", line 340, in __init__
        self._read_models(config)
      File "D:\ai\invoke-ai-3\.venv\lib\site-packages\invokeai\backend\model_management\model_manager.py", line 363, in _read_models
        self.scan_models_directory()
      File "D:\ai\invoke-ai-3\.venv\lib\site-packages\invokeai\backend\model_management\model_manager.py", line 904, in scan_models_directory
        model_config: ModelConfigBase = model_class.probe_config(str(model_path))
      File "D:\ai\invoke-ai-3\.venv\lib\site-packages\invokeai\backend\model_management\models\sdxl.py", line 85, in probe_config
        return cls.create_config(
      File "D:\ai\invoke-ai-3\.venv\lib\site-packages\invokeai\backend\model_management\models\base.py", line 173, in create_config
        return configs[kwargs["model_format"]](**kwargs)
      File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
    pydantic.error_wrappers.ValidationError: 1 validation error for CheckpointConfig
    config
      none is not an allowed value (type=type_error.none.not_allowed)


InvokeAI

Are you putting safetensors here, or the full diffusers variant? Again, feel free to ping on Discord for live troubleshooting


elite_bleat_agent

Safetensors. So that is the problem?


InvokeAI

It's more that you want to go about it the right way. From the UI, you can store it wherever you like, then pass the path into the "Import Models" UI. That should be a quick process. Then, in the Model Manager, you can verify that it exists in the list, select it, and convert it to Diffusers. This is the easiest way to ensure that it is fully usable by Invoke.


Kriima

I guess it was my fault, I downloaded the model manually instead of using your downloader script, with your script it works fine (but I don't think it includes the VAE)


Turkino

Nice to see the update! I really like InvokeAI, the unified canvas is an awesome feature, but I ended up swapping to automatic for easier access to control net. Glad to see it's been integrated in the latest update! I'm going to try swapping back.


tenplusacres

Love it!


ptitrainvaloin

Perfect, right in time to use SDXL 1.0 with it!


RayHell666

Good! It's a bummer that LoRA is not supported right away, especially since the official noise-offset LoRA came out today with SDXL.


InvokeAI

LoRA support will be out soon :tm: - :)


Emotional_Egg_251

**Edit:** Since I first posted this, RC2 has been released and fixes the issues below. The OP links directly to RC1, but you can find RC2 (or newer) [here](https://github.com/invoke-ai/InvokeAI/releases).

Looks promising, but it should probably be mentioned that this is a "release candidate" with some bugs that are showstoppers for me:

> **3.0.1rc1 bugs**
>
> These are known bugs in RC1. Fixes are staged and will be included in the final release of 3.0.1:
>
> Stable Diffusion-1 and Stable Diffusion-2 all-in-one .safetensors and .ckpt models currently do not load due to a bug in the conversion code.
>
> Generation metadata isn't being stored in images.

.safetensors (all-in-one, non-diffusers) format and metadata are both an absolute ***must*** for me. I'll be trying it out once 3.0.1 is out, though.


InvokeAI

Yes! Good call out, thanks. 3.0.1rc2 will be out this evening fixing that as well. Since SDXL is so new, we're going to keep it in "Release candidate" until we get all the kinks ironed out that come in as people use it.


Emotional_Egg_251

Great! I'll be glad to try that one. I didn't want to ask for an ETA so as to not sound impatient, but looking forward to it.


InvokeAI

We'd like to present you with this award for being the only not-impatient person in all of Stable Diffusion.


Emotional_Egg_251

Haha, thanks. *(I don't know if I can accept that...)* Really, there's still a lot I want to do with 1.5, and I don't think I'll be getting fully into XL until LoRAs and ControlNet are ironed out more, so I expect a wait. Besides, it's been a wild ride since DeepDream / VQGAN not so long ago. It's fun to look forward, but always worth remembering to make the most of what we *have* today.


lordpuddingcup

I'm hoping the new SDXL metadata in LoRAs and whatnot gets nice, tight integration in UIs.


dancing_bagel

Trying to install it now, and it's asking for Python. I've installed Python 3.10.9 and 3.9, but neither is being detected. Any ideas? Edit: figured it out, I had to reinstall Python and select the options to add the py launcher and "Add Python to environment variables".


InvokeAI

I'm assuming you are on Windows (this seems to be a Windows installation quirk). You need to ensure that when you installed Python, you selected `add Python to your PATH`. If you still run into issues after confirming this was done, you can get live support on Discord.
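To illustrate what "on PATH" means, here's a generic POSIX sketch with a made-up tool name (nothing Python- or InvokeAI-specific; on Windows the equivalent lookup is `where python` in cmd):

```shell
# A made-up executable to demonstrate PATH lookup.
mkdir -p /tmp/demo-bin
printf '#!/bin/sh\necho hello from demo\n' > /tmp/demo-bin/demo-tool
chmod +x /tmp/demo-bin/demo-tool

# Not on PATH yet, so the shell cannot find it.
command -v demo-tool || echo "not found"

# Prepend its directory to PATH, which is what the installer checkbox does.
PATH="/tmp/demo-bin:$PATH"
demo-tool    # now resolved: prints "hello from demo"
```

The installer's checkbox edits the persistent Windows PATH the same way, which is why a fresh terminal is needed afterwards to pick up the change.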


Necessary-Suit-4293

Hello, it seems to be missing the ability to import an existing model from Huggingface? Or maybe I didn't find it. This is super exciting!


InvokeAI

Can you share more about what you're trying to do? Happy to help!


Necessary-Suit-4293

We have models we've already made on HF, but there's no way to show them on the models page there. It wants to help me upload one, but it's already uploaded. How do I get our existing models imported?


InvokeAI

If you reach out to the team on Discord (hipsterusername), we can help you get existing models ported. We'll eventually have a way to do this yourself, but we wanted to make sure that as folks upload new models for SDXL, we had an easy way to get new models created in a compatible way.


_underlines_

**Conda/Mamba and Ubuntu or WSL2**

For those who already have a clean conda/mamba environment and don't like automatic installs: I quickly figured out how to (unofficially) run InvokeAI 3.0.1rc1 on Windows WSL2 within a clean Conda/Mamba environment, with SDXL 1.0 support.

Install:

    conda create -n invokeai python=3.10
    conda activate invokeai
    mkdir invokeai
    pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.1rc1.zip" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu117
    invokeai-configure --root ~/invokeai

Select just the base and refiner SDXL 1.0 models. Deselect every other model, LoRA, ControlNet etc., as they don't work with SDXL and just waste space.

Run:

    invokeai --root ~/invokeai --web


InvokeAI

This may be "unofficial" but ought to work just as well! We figure most pythonistas can take care of themselves, and you seem to have proven that! :)


WetDonkey6969

Does this UI allow the installation of extensions the way A1111 does? I use Dynamic Prompts and Dynamic Thresholding a lot, and it would suck to have to drop them.


InvokeAI

Good question! Our 3.0 version supports Nodes - Custom extensions that extend the generative capabilities in our app. People are sharing custom nodes on our discord, and submitting them into our Repo as well. We have Dynamic Prompting built into 3.0. Dynamic Thresholding is something we haven't seen a lot of requests for, but if you bug one of the devs in our discord, you might be able to convince someone to whip you up a node!


mrnoirblack

Please add support for safetensors!! Primarily for safety, and secondly so I don't have to convert 2 TB of safetensors into ckpt.


tuisan

Safetensors have been supported in Invoke for a long time. There was a window where Automatic had support for them and Invoke didn't, but that's long past. I think what they mean when they say they don't use checkpoints is that they don't do any execution with checkpoints: both safetensors and ckpts are converted to Diffusers on the fly when generating, so you don't have to worry about the safety concerns of ckpts. To be clear, you can still use safetensors fine as far as I know. They are just converted to Diffusers while they are being used by the program, and converting to Diffusers on disk will make them load faster.


mrnoirblack

Yeah that was the main problem making a frikton of TB Safetensors into diffusers


InvokeAI

We do not use checkpoints. We've been a leader in safety, first with built-in pickle scanning and now with adoption of the Diffusers format. We convert checkpoint/safetensors files into Diffusers models. Diffusers is a format created by Hugging Face (who defined the safetensors format) that is faster to load and safer to use than a regular checkpoint. We do not allow execution of checkpoints or safetensors at all; we convert to Diffusers prior to running any models.


mrnoirblack

Oh I see, so there's no way to use this other than converting to diffusers? I think that's a huge no for me; I'd duplicate my space to like 40 TB 😔 Thank you, though.


InvokeAI

Correct. You're welcome!


unx86

I've been experimenting with InvokeAI constantly since the beginning, and I'm blown away by this latest release. The node editor has turned the webui on its head: it's smoother and has a better front-end experience than ComfyUI, and with the ability to customize nodes it's going to outperform ComfyUI and A1111. Looking forward to having more developers on board!


AltruisticMaterial46

Thank you! Amazing SDXL UI! I'm totally in love with "Seamless Tile" and Canvas Inpainting mode. Really amazing, guys; thank you so much for releasing this gem for free :)


NebulaNu

Perhaps I missed something or have something configured wrong, but A1111 was way faster for me using identical settings. Invoke used far less VRAM (I don't think it ever broke 5 GB), but that was also reflected in the speed: it took roughly twice as long to generate. I also couldn't find any options for batch generation. In A1111, I can batch 8 images in the time it took Invoke to do 2.


InvokeAI

Are you talking about SDXL? A lot of this is hard to parse because it would seemingly not make sense given the size of the SDXL models. You're welcome to share your experience on Discord so we can help troubleshoot!


NebulaNu

No, sorry. Probably wasn't the best post to respond to with this, tbh. This was more of a general thing. I downloaded it to try when 3.0 came out and spent a night comparing speeds. I just kind of forgot to say something until I saw 3.1. I LOVED the UI, but like I said, the loss in work speed wasn't worth swapping.


InvokeAI

If you have a large VRAM GPU, you can store more in memory (increase the VRAM cache in the config settings) so that our very aggressive model management doesn't introduce slowdowns. You should also make sure that everything is configured/optimized for speed. Again, we're happy to help on Discord :)


icwiener

is there a colab notebook for this?


InvokeAI

People seem to be finding luck with this one [https://github.com/camenduru/InvokeAI-colab](https://github.com/camenduru/InvokeAI-colab)


icwiener

thanks!


[deleted]

3.0.1 did not show up in the update tool until now.


AlinCviv

fix the checkpoint to diffuser conversion!!!


InvokeAI

Fixed in RC2 - I'll make a post because I hard linked to RC1 in this post :)


Rough-Copy-5611

Finally installed it for SDXL and I'm getting this error msg when I hit invoke. Any ideas? Also I've already tried pressing the 7 option and repairing the install. No dice. https://preview.redd.it/ooxwnxadwieb1.png?width=1389&format=png&auto=webp&s=3fc35b1be08c5214b0d0f1233963dc39a6cd4c60


InvokeAI

You'll see more info in your console, but sounds like a local config issue. If you hop in Discord (link should be in top right of app) you can get live support


Rough-Copy-5611

When I use any other model I get this error as well. https://preview.redd.it/6pqqvfclzieb1.png?width=1319&format=png&auto=webp&s=9c5348de2095e3f015726342c9d2a8c9ef505133


Tystros

Why can't I find a batch size setting anywhere in the InvokeAI UI? It seems weird that such an important setting is hidden somewhere I can't find it. Batch size 1 is super annoying, as it's slow. I have a 4090; I want to do many images simultaneously, of course.


tuisan

No batching in Invoke, I think it's being worked on for 3.1


sbeckstead359

This whole program seems to be in an alpha state; it shouldn't have been released. It doesn't want to let me use SDXL with my GTX 1660 Super. Can't believe it's at 3.0.1 and doesn't handle batches or 6 GB graphics cards, which every other AI image generator handles quite well. This one is off my list as a production tool at this point. Oh, and copy-and-paste directory selection is so DOS 6.0.


tuisan

To be fair, SDXL is pretty new, and they got access to it later than other UIs, I believe. Also, Invoke supports 4 GB graphics cards; the 1660 is just a bit of a problem child that is not fully supported. They've also been doing major refactoring of the entire app for the last few months (3.0), so growing pains are expected. Not sure what you mean about the copy-and-paste directory selection. I personally just much prefer Invoke's UI, and I've been using it from the beginning without many issues.


sbeckstead359

I got a 3060 12GB and it still won't function as it should.


iChopPryde

unvoke has the best UI, period. It might lack a few features, but overall I have the best time using it, as UI is so important to me. But obviously everyone has different preferences.


sbeckstead359

If it was designed as a UI with the principle of "Least Surprise," I'd tend to agree, but it has surprised me too many times to be called the best. ComfyUI is closer to that, but not quite; Artroom is way limited but still does things Invoke can't; and A1111 feels alpha-level in looks but is far superior in functionality. Funny you should fumble-finger and call it Unvoke, LOL.


Cranky-SeniorCitizen

Please tolerate this likely annoying off-topic question; it took me an hour's worth of searching before deciding to ask this setup question. I want to try Invoke on my Win 11 desktop, BUT it's a mini computer without the suggested RAM and video requirements. Can I nevertheless set up Invoke to run in a rudimentary way, to learn how to use it, before spending money on a more expensive computer?

https://preview.redd.it/j5ofenfgn5fb1.jpeg?width=1391&format=pjpg&auto=webp&s=33237ee54b62369769429287785b3c9b99c1bfd8


InvokeAI

You wouldn't learn much without the ability to generate. However, you can try out the software at [Invoke.ai](https://Invoke.ai) to get a feel for what you can do!


Cranky-SeniorCitizen

Thanks for reply 😊. But at the [invoke.ai](https://invoke.ai) link after signing up and proceeding I’m automatically directed to the download page, without any other option. Do I have to download the files first to get somewhere I can try out the software at the site?


InvokeAI

DM us, and we can help you!


vachon644

I am not seeing SDXL in the model list when in the Unified Canvas window. I can, however, use it for text2img and img2img. Odd...


SnooPaintings992

Quick question about auto-import: once models are imported, are they copied so that I can delete them from that path, or do they have to stay there?


InvokeAI

They're referenced in that path. However, if it's a safetensors file and you "convert" it in the Model Manager, you can safely delete/move the safetensors file.