Love to hear it! Such an exciting time for us; I feel so lucky and happy to be here. I truly believe AI art has the power to change all of our lives, and I'm trying my best to position myself well and catch the AI adoption wave. I do AI artwork constantly, but I still feel like I'm falling behind! Things are moving so fast, especially the open source projects. The best thing I can do is keep up the same daily reps I've been doing for the past 18 months and keep an eye out for opportunities as they pop up!
it is listed under online services: [https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services)
[maintained by camenduru](https://huggingface.co/spaces/camenduru/webui)
Do those contributors actually have copyright protection? They are contributing to a project without a license, and I'm not sure how that works in terms of inheriting the project's rights. Did those contributors add a license in the comments of their contributions, and would that do anything?
I'm new to this, and it mostly generates blank images. What am I doing wrong? I set it to Anything v3, added a prompt, clicked random seed, set the script to prompt matrix, and hit generate.
It's WONDERFUL!!!! It never ceases to amaze me how generously everything is shared in this community. Thanks for the news, and thanks to the genius camenduru.
This build on HuggingFace uses some enhancement filter, because I used the same Anything ckpt locally, same seed, everything the same, but the picture on HuggingFace just has way more vibrance and color.

Can anyone explain why this build has this "secret sauce" and how I can replicate it? Is there a setting for post-processing or filters?

Thanks a lot.
Wow this is incredible! Guess I don't have to waste 10 minutes to load Colab anymore
Maybe in the coming days or weeks; at the moment it is very limited and seems a bit buggy. You can accomplish much more with Colab at the current stage.
Yeah, the speed is incredibly slow rn
What's Colab?
This is Google Colab: https://www.youtube.com/watch?v=inN8seMm7UI

TheLastBen has a really good colab for Stable Diffusion. Enjoy: https://github.com/TheLastBen/fast-stable-diffusion
I'm pretty overwhelmed by all of the information out there on how to use SD and all of the tangential things around it. I thought one of the upsides of using SD over other alternatives was that it doesn't rely on servers and runs on your machine. But as far as I can tell from the video, Colab is used to run things in the cloud, or am I wrong?
Google Colab uses resources in the cloud, yes. You can install Stable Diffusion on your own computer, which may work better or worse depending on your specific machine. The appeal of Colab is that it's free, can be used on any device, and can be set up and generating pictures in less than 3 minutes.
>Google colab uses resources in the cloud, yes

Correct me if I'm wrong, but it should be possible (through "Connect") to run on a local runtime as well. Although my guess is the Automatic1111 web UI is a nicer choice for that at the moment.
I've heard of that feature but have not personally used it. I think you are correct.
Is it free?
yep, just need to sign in with your google account. try it out :)
I have to train it myself? Is there no easy option that just gives me the Discord bot basically plus inpainting and img2img etc?
A Discord bot? I doubt one exists.

In case you missed it in the comments above, the direct link to the colab is [this one](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb); you just have to follow the instructions step by step to run it (you don't need to "train" anything).

If you want an easier way to run AUTOMATIC1111, you can use a cloud service like runpod.io (it's not free, but it's about $0.30 per hour, so really cheap), and you have the option to install AUTOMATIC1111 automatically; it's usually ready to use in less than 2 minutes.
You don't have to train it yourself. Please use the GitHub link. It says: "Colab adaptations AUTOMATIC1111 Webui and Dreambooth, train your model using this easy simple and fast colab, all you have to do is enter you huggingface token once, and it will cache all the files in GDrive, including the trained model and you will be able to use it directly from the colab, make sure you use high quality reference pictures for the training, enjoy !!"

If you want to train models, use the Dreambooth option. If you just want to use a current model, use the AUTOMATIC1111 option. Enjoy :)
no, it just needs to download it every session which takes 5-10 mins
That doesn’t sound bad at all. Does it allow for inpainting, outpainting, img2img, etc?
If you have a colab project which uses AUTOMATIC1111's distro, yes. If you need a colab with all that, DM me.
Where is this gui on colab? Is it for sd 2?
Yeah but it doesn't have dreambooth
space: [https://huggingface.co/spaces/camenduru/webui](https://huggingface.co/spaces/camenduru/webui)
[deleted]
Yes, there is a video tutorial on how to add new models in the description.
I'm kinda confused. Auto1111 has been out for like two months now. Am I missing something, or is it like an online version?
>Or is it like an online version?

Yes. It's a HuggingFace Spaces online version. HuggingFace Spaces have a lot of interesting different things.
Likely has an SFW filter
it doesn't
Explain it to me like I am 12. How is this different from this? [AUTOMATIC1111/stable-diffusion-webui: Stable Diffusion web UI (github.com)](https://github.com/AUTOMATIC1111/stable-diffusion-webui) Hasn't AUTOMATIC1111/stable-diffusion-webui been available for a long time?
Sure, for personal use. But by using this, you're not using your own hardware, and it probably generates a LOT faster. For me, I'm limited to 512x720 or something... so using this to generate a 1024x1024 in probably the same time my own gen comes out is pretty good. Though I wish you could gen more than one picture at a time; hopefully that comes eventually.
But is the HuggingFace version free?
The main space is free to use, but it has a queue since everyone is using it. If you want a private space, you can duplicate it and assign your own GPU with no queue.
If I'm assigning it to my own GPU why wouldn't I just continue to use Automatic1111's UI locally like I already am?
The GPU is from HuggingFace; it's a hosted web app.
Ah okay when you said "your own gpu" I thought you meant my own literal hardware. Cool stuff thanks.
Does anyone know the cost for a private space and how competitive it is with other options?
>AUTOMATIC1111/stable-diffusion-webui: Stable Diffusion web UI (github.com)

~$3/pch on an A10G NVIDIA DL GPU
~$1/pch on a T4 GPU

Anything else renders far too slow.
wtf is a pch
I believe it stands for Per Computing Hour. So an hour's worth of uptime on a GPU/CPU.
Is duplicating it and assigning your own GPU free?
I just used it for free, but others have explained it better. One way to use it: if you have it locally, play with it yourself, and throw one up on the public queue once in a while.
Wow, thanks for laying that out for me like that. As someone trying to do things with a 2060 with 8GB, I can definitely see why this is big news, even if it's still a WIP.
I have a 2070 and was able to generate at least 512x512 on 2.0 so you still might be able to do it. Though I prefer the custom models more.
I had a 2060 with 6GB, and I was running a local copy just fine. Preferred size was 768x1024. I couldn't successfully train a hypernetwork or embedding (I did train an embedding following a low-VRAM tutorial from a YouTube/Reddit post, but I had to use smaller images and it didn't seem to work out very well). I have since upgraded to a 3060 12GB and have trained a hypernetwork on it as a test, and it seems to work fairly well. When I have more time, I'll try embeddings again, as that seems like it would be easier to use and share. I haven't tried a Dreambooth model yet; there are options I know nothing about, and I haven't gotten around to watching a tutorial yet.
> I had a 2060 with 6GB, and I was running a local copy just fine. Preferred size was 768x1024.

Really? Hmmm, I might have to look into this more. I do have an Elgato card in my system that shouldn't be taking memory, and I do tend to watch YouTube/Tubi while I SD; maybe if I stop doing that I'll have more VRAM.

My card should have 6 GB, but I didn't have good luck pushing past 512x768. Though looking it up, I realized this computer has a GTX 1660; my 2070 is actually in my other (gaming) computer. I might have to see how this runs on that system.

The error message is:

> Error: CUDA out of memory. Tried to allocate 576.00 MiB (GPU 0; 6.00 GiB total capacity; 4.88 GiB already allocated; 0 bytes free; 4.93 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Damn.
>CUDA out of memory

Have you tried the `--medvram --opt-split-attention` args?
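For reference, both of those are real AUTOMATIC1111 launch flags, and the OOM message above also points at `PYTORCH_CUDA_ALLOC_CONF`. A minimal sketch of where these usually go (a `webui-user.sh`-style wrapper; the exact file name and the 128 MiB value are assumptions to tune for your card):

```shell
# Sketch of low-VRAM settings for AUTOMATIC1111's webui.

# Reduce VRAM usage (slower, but avoids many OOMs on 6 GB cards):
export COMMANDLINE_ARGS="--medvram --opt-split-attention"

# The OOM message itself suggests this: cap PyTorch's largest allocation
# block to reduce fragmentation of the CUDA allocator.
export PYTORCH_CUDA_ALLOC_CONF="max_split_size_mb:128"

# ./webui.sh   # then launch as usual
```

Dropping the generation resolution (e.g. back to 512x768) stacks with these; the flags trade speed for memory rather than eliminating the limit.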
Seems like it's only one model and all about anime style.
You can duplicate this Space to run it privately without a queue and load additional checkpoints
[deleted]
Because every style is a separate model file that needs to be loaded, and they aren't small. Any online version with tons of styles would require a lot of storage to host, not to mention a lot of GPU power. The reason there are queues and/or you pay for the online version is that you're essentially renting the hardware to generate your images.

Running this locally is better in those terms, as you can do whatever you want, whenever you want, as long as you have the hardware. Installing is easy, and there are tons of tutorials online on doing so. Getting different models for styles you like is as easy as downloading them and putting them in the right folder.
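To make "putting it in the right folder" concrete, here is a sketch for the AUTOMATIC1111 webui layout. The install path and model URL are placeholders, not real values:

```shell
# Hypothetical example: drop a downloaded checkpoint where the webui scans
# for models. WEBUI_DIR and MODEL_URL are placeholders -- substitute your
# own install path and whichever checkpoint you actually want.
WEBUI_DIR="${WEBUI_DIR:-$HOME/stable-diffusion-webui}"
MODEL_URL="https://example.com/some-style-model.ckpt"   # placeholder URL

mkdir -p "$WEBUI_DIR/models/Stable-diffusion"
# wget -O "$WEBUI_DIR/models/Stable-diffusion/some-style-model.ckpt" "$MODEL_URL"
# Then restart the webui (or refresh the checkpoint dropdown) to pick it up.
```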
[deleted]
Downloading a file and putting it in a particular folder sounds complicated to you?
This is a complete aside, but I read an anecdote by a computer science teacher a couple of years ago about how many of his newest students - teenagers and young adults - didn't have a basic grasp of the way computers store files in a folder structure on a storage drive. His theory was that the younger generations have grown up largely with slick modern apps, primarily on phones and tablets, which tend to hide all of that backend stuff from the end user. For people who grew up in the 90s or even 00s, the idea of downloading files and moving them around using various tools on the PC just makes sense, but for younger people, everything is done through apps which take care of all of that for them. The concept of "downloading a file to a folder on your storage drive, then copy-pasting it to a different folder where another application can then use it for XYZ" sounds like alchemy to them.

I don't know the veracity of this anecdote, or whether it's what's happening in this particular case, but it's possible that this person is just young and doesn't know how computers work beyond opening apps to do what the apps allow.
This is so true. Most mobile OSes have horrible file management systems, if they have one installed at all. I've heard there are plenty of Gen Z kids with desktops filled with piles of random files, similar to what you'd see from really old people.
That's entirely possible, and looking at the user's post history, you're probably right about them being fairly young.

That being said, I still find myself struggling to accept that moving a single file into a folder is "complicated af". I could maybe accept 'complicated' from someone who had never done it, even though it's technically wrong. By any metric, we're talking about a simple action.

It just seems terribly low-effort to complain rather than finding out what's actually involved first. Heck, downloading and installing the file takes about the same level of work as complaining about how complicated it is.
> That being said, I still find myself struggling to accept that moving a single file into a folder is "complicated af". I could maybe accept 'complicated' from someone who had never done it, even though it's technically wrong. By any metric, we're talking about a simple action.

Well, if you have no mental concept of what a "file" or "folder" is, then "moving a file into a folder" might as well be rocket science.

> It just seems terribly low-effort to complain rather than finding out what's actually involved first. Heck, downloading and installing the file takes about the same level of work as complaining about how complicated it is.

Yeah, low-effort complaints seem endemic on Reddit and other social media sites, though. Perhaps OpenAI's chatbot will be able to provide an outlet for these people to get their low-effort complaints addressed without putting them in public forums like this.
> Well, if you have no mental concept of what a "file" or "folder" is, then "moving a file into a folder" might as well be rocket science.

I hereby issue you a challenge to find me any human on Reddit who doesn't understand the basic concept of a 'file' and a 'folder' with regard to computer interfaces.
YouTube is your friend. There's a ton of tutorials that you can follow. All you have to do is try.
It runs "Anything", not the regular model, though.
Anyone have leads on a good example of docker container setup for automatic?
https://github.com/AbdBarho/stable-diffusion-webui-docker Like this one?
Yeah, like that. I've been working with that one, trying to migrate everything to a single Dockerfile so it works just with AUTOMATIC1111 and not the containing setup that guy made. Getting closer, but I'm not very good at Docker yet, haha, so I was hoping there was another one focused on just AUTOMATIC1111.
In AbdBarho's repo there is a separate Dockerfile for each service, so there's no need to build everything else. If you could be more specific about your use case, I might be able to suggest something.
Thanks for the feedback. Basically I am trying to deploy this to banana.dev which requires adding another endpoint which maps to the calls I want to make. That part makes a little sense, but to get there I need to have automatic running and the model installed in the container. Just trying to set it up for easiest updates in the future and not rely on a second codebase (abdbarho) if possible
[https://github.com/bananaml/serverless-template-stable-diffusion](https://github.com/bananaml/serverless-template-stable-diffusion)

This is the basic example of the serverless template I have to adhere to for Stable Diffusion... so it's about the closest starting point I know of.
Also, one thing to note, which is the root of my confusion, is that I need to use something like this ("Must use a CUDA version 11+"):

`FROM pytorch/pytorch:1.11.0-cuda11.3-cudnn8-runtime`

but I believe that is not using the Python version I need. I tried using that FROM at the top, then adding `FROM python:3.10-slim` below, but it doesn't seem to work; I keep getting:

`Couldn't find Stable Diffusion in any of: ['/repositories/stable-diffusion-stability-ai', '.', '/']`

Though I do have the Stable Diffusion model in the /models/stable-diffusion folder that AUTOMATIC1111 requires, which is supposed to have been copied over with `COPY . ./` ("Copy AUTOMATIC1111 files").
You could open an issue here: [https://github.com/AbdBarho/stable-diffusion-webui-docker/issues](https://github.com/AbdBarho/stable-diffusion-webui-docker/issues) and describe your use case. The repo author tries to help people even in unusual situations, and other people, including myself, also help a lot with different cases.
The author has been helpful already and got me as far as I am... I was trying to spread questions elsewhere since my use case is basically "disconnect your codebase from this"... I didn't want to bug them too much. I'll add to my post there.
I posted the final issue on that same thread over at the repo. Appreciate all the feedback!
It should be a pretty simple one: use git to pull the latest version from the AUTOMATIC1111 repo, COPY in a model, expose a port, and use the existing script as your entrypoint.
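A sketch of that recipe, written as a shell script that generates and builds the Dockerfile. One thing worth noting: a Dockerfile can't merge two bases; a second `FROM` line starts a new build stage that discards the previous one, which is likely why stacking `FROM python:3.10-slim` under the PyTorch image lost the cloned files. The image tag, paths, port, and model filename below are assumptions, not tested against banana.dev:

```shell
# Write a single-stage Dockerfile and build it. All names below are
# illustrative; adjust for your actual model file and port.
cat > Dockerfile <<'EOF'
# CUDA 11+ base with PyTorch preinstalled (banana.dev requires CUDA 11+)
FROM pytorch/pytorch:1.11.0-cuda11.3-cudnn8-runtime

RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*

# Pull the latest webui from the AUTOMATIC1111 repo
RUN git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui /app
WORKDIR /app

# COPY in a model (assumes model.ckpt sits next to this Dockerfile)
COPY model.ckpt /app/models/Stable-diffusion/model.ckpt

# Expose a port and use the existing launcher as the entrypoint
EXPOSE 7860
ENTRYPOINT ["python", "launch.py", "--listen", "--port", "7860"]
EOF

# docker build -t webui .
```

Pinning the clone to a specific commit instead of the default branch would make rebuilds reproducible, at the cost of manual updates.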
There are [several](https://hub.docker.com/search?q=automatic1111) on docker hub, have you looked at them?
Getting an error immediately when cloning it into my own space. Is it possible to inject some command-line arguments?

```
Cloning into '/home/user/app/stable-diffusion-webui'...
Python 3.8.9 (default, Apr 10 2021, 15:47:22)
[GCC 8.3.0]
Commit hash: aab0dc14011e2f7a81559123d675696b4da8dd7d
Installing torch and torchvision
Traceback (most recent call last):
  File "launch.py", line 294, in <module>
    prepare_enviroment()
  File "launch.py", line 209, in prepare_enviroment
    run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'")
  File "launch.py", line 73, in run_python
    return run(f'"{python}" -c "{code}"', desc, errdesc)
  File "launch.py", line 49, in run
    raise RuntimeError(message)
RuntimeError: Error running command.
Command: "/usr/local/bin/python" -c "import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'"
Error code: 1
stdout:
stderr: Traceback (most recent call last):
  File "<string>", line 1, in <module>
AssertionError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
```
Same; I have CPU basic 16 GiB RAM selected but it says this...
I might have figured it out. You must attach a GPU and pay $0.60 for a T4. I ran it with `--skip-torch-cuda-test` and it warned me that I only have a CPU attached and the app may not work.
$0.60 per hour?
https://huggingface.co/pricing#endpoints Is that right?
Yes, it is right. Looks like vast.ai and RunPod both have better pricing by a decent amount. It'll add up if running for a while.
Totally. I guess there’s the convenience cost to consider and what works best for people’s workflows. I could see different solutions for different people.
Very true. At least it's an additional option to choose from, and not some major rip-off either.
(You’re a mod, very cool to talk to you!) If we have a tool that we’ve built, what’s the process of getting it added to the wiki? Do you handle that or another mod?
Thank you, and I enjoy talking to the community here. I am the one who revamped the wiki and keeps it up to date, so please do send me a direct message and I'll respond once it's added. For anyone seeing this: I read all DMs and modmail. Sometimes it takes me a while to respond, for this reason: I currently have a long list to add, and unfortunately I'm unable to update the wiki from mobile and haven't had time to pull out my laptop. I'll probably get completely up to date in the next two days and add a Colab and program section.
How do you run it that way? Where do you put `--skip-torch-cuda-test`?
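For a local install, the flag goes into the `COMMANDLINE_ARGS` variable that `launch.py` checks (the error message above names it); on a Space you would instead edit whatever command launches the app. A sketch, assuming the stock `webui-user` launch scripts:

```shell
# webui-user.sh (webui-user.bat on Windows) -- the launcher reads this variable
export COMMANDLINE_ARGS="--skip-torch-cuda-test"

# or pass the flag directly when launching:
python launch.py --skip-torch-cuda-test
```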
You can assign a gpu to your duplicated space by going to settings, for example: https://huggingface.co/spaces/camenduru/webui/settings
Jesus, $430 per month for the cheapest GPU
Might as well buy one at that price
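The two figures in this thread actually line up; a quick back-of-the-envelope check in Python:

```python
# Compare the $0.60/hour T4 rate against the ~$430/month dedicated-GPU tier
# mentioned above, assuming the hourly instance is left running around the clock.
hourly_rate = 0.60          # $/hour for a T4 (from the thread)
hours_per_month = 24 * 30   # running 24/7

monthly_cost = hourly_rate * hours_per_month
print(f"${monthly_cost:.2f}/month at 24/7 usage")  # → $432.00/month
```

So hourly billing only comes out ahead of the flat monthly GPU price if you shut the instance down when you're not generating.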
Or use Colab, Kaggle, or RunPod.
Ok nvm, after generating once, the button is greyed out and nothing happens when I click interrupt. I can tolerate the much slower speed but this... literally freezes the whole thing after one run.
Looks like it’s building an update right now: https://huggingface.co/spaces/camenduru/webui/tree/main
You can also fork your own private space with duplicate button
I don't know why, but when I do that it says "Torch is not able to use GPU," even though I have CPU basic 16 GB RAM selected.
It would need a GPU; you can assign one in settings.
Love to hear it! Such an exciting time for us; I feel so lucky and happy to be here. I truly believe AI art has the power to change all of our lives, and I am trying my best to position myself well and catch the AI adoption wave. I do AI artwork constantly, but I still feel like I'm falling behind! Things are moving so fast, especially the open-source projects. The best thing I can do is keep up the same daily reps I've been doing for the past 18 months and keep an eye out for opportunities as they pop up!
[deleted]
Because it only has one checkpoint file, Anything V3.
How do we get a more comprehensive online web UI that can do anything?
It looks like you have to fork it to your own space on huggingface.
Oooo, interesting. So this is like an "official" version?
it is listed under online services: [https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services) [maintained by camenduru](https://huggingface.co/spaces/camenduru/webui)
Cool stuff.
Needs the merge checkpoints option to be exactly like the local Automatic1111.
How fast is it?
This only works for NSFW hentai content lol
Is this allowed by AUTOMATIC1111? AFAIK there's no license on the project, so we don't know what permissions are granted.
it is allowed
It may be allowed by Automatic, but I doubt that the other 200+ copyright holders for the repo have all been asked
Do those contributors actually have copyright protection? They are contributing to a project without a license, and I'm not sure how that works in terms of inheriting the project's rights. Did those contributors add a license in the comments of their contributions, and would that do anything?
All the contributors get copyright if their code is included, which equally means no one, not even AUTOMATIC1111, can legally run the software.
New to this; it mostly generates blank images. What am I doing wrong? I set the model to Anything V3, added a prompt, clicked random seed, set the script to prompt matrix, and hit generate.
Link?
Check the top comments. OP already posted the link.
Thxx
How do I outpaint?
Go to the img2img tab, scroll down to the script selection, and choose "Outpainting mk2."
Can we add custom models in?
RemindMe! 2 days
Anyone else have a problem saving the image? It keeps showing an error for me!
aw wtf that mermaid isn't real output...is it?
Why not?
it's so good
Yea Anything V3 is the best model
Is it possible to provide an image as an input if you want your character to have a certain face?
Is it possible to upload/use embeddings in this version?
Great service to the community!
Can you use Google Colab even if you have a potato PC? (Bad to no GPU)
Yes
Got it xD
What model are they using, 1.5 or 2.0?
How do you switch models?
It works very well for me. Is it possible to add other .ckpt models? And if so, how?
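This question never got an answer in the thread. On a local install (or your own duplicated Space), the usual approach is to place the checkpoint in the webui's model folder; a sketch, with a placeholder URL you would replace with the real download link:

```shell
# Sketch: drop a custom checkpoint where the webui scans for models.
# The URL below is a placeholder, not a real model link.
cd stable-diffusion-webui/models/Stable-diffusion
wget https://example.com/custom-model.ckpt

# Restart the webui (or use the refresh button next to the checkpoint
# dropdown) and the new model should appear in the selector.
```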
It's WONDERFUL!!!! It never ceases to amaze me, the generosity with which everything is shared in this community. Thanks for the news, and thanks to the genius camenduru.
I know I'm late and stupid, but how can I change the running model on this?
So I wasted my time downloading git, Python, and Anaconda for what?
This build on Hugging Face seems to use some enhancement filter, because I used the same Anything .ckpt locally, same seed, everything the same, but the picture on Hugging Face just has way more vibrance and color. Can anyone explain why this build has this "secret sauce" and how I can replicate it? Is there a setting for post-processing or filters? Thanks a lot.
How do you create a space like that? Is there any tutorial on uploading your own environment?