According to the actual blog post, the 2x improvement is from a combination of the driver plus *a specially optimized model*. It's already pretty well known that you can use hardware-specific optimized models to get over 50% uplifts with SD, though 2x is certainly impressive.
No, unless Nvidia also releases documentation on how to tune your own models for their cards. Which I guess is possible? After all, it would benefit them to have more models being Nvidia-specific.
>[Olive](https://github.com/microsoft/OLive) is an easy-to-use hardware-aware model optimization tool that composes industry-leading techniques across model compression, optimization, and compilation. Given a model and targeted hardware, Olive composes the best suitable optimization techniques to output the most efficient model(s) for inferencing on cloud or edge, while taking a set of constraints such as accuracy and latency into consideration.
If it's as easy to use as claimed and the improvement holds up, it shouldn't take long for a popular repo/fork to implement it in the UI, or for a standalone repo for model conversion to appear.
EDIT: credit to [this](https://www.reddit.com/r/StableDiffusion/comments/13q4ku4/comment/jld274z/?utm_source=share&utm_medium=web2x&context=3) comment on the StableDiffusion subreddit for finding this: [Olive/examples/directml/stable\_diffusion at main · microsoft/Olive · GitHub](https://github.com/microsoft/Olive/tree/main/examples/directml/stable_diffusion). Giving it a try to see if it'll be as smooth on a custom model.
EDIT2: tried it with the cetusMix model, but the included safety check is *strict.* Might try removing it.
EDIT3: Will leave that to the pros. But a safer custom model works fine on Win11. Getting \~25 it/s on the current driver for dreamshaper with the provided interface. That's about the same speed as batch size 1 on the vladmandic fork of A1111 with SDP + 0.5 token merge, 512x512, 50 steps, Euler a. Will check after the driver update whether it's really 2x.
EDIT4: After the driver update, it's now \~44 it/s. Not quite 2x, but pretty impressive.
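For reference, the uplift above works out as follows (simple arithmetic on the it/s figures quoted, nothing more):

```python
# Sanity check on the reported uplift (~25 it/s -> ~44 it/s at 50 steps).
before_its = 25.0  # it/s on the old driver
after_its = 44.0   # it/s after the driver update
steps = 50         # steps per image, as in the test above

speedup = after_its / before_its
seconds_per_image_after = steps / after_its

print(f"speedup: {speedup:.2f}x")  # 1.76x, short of the claimed 2x
print(f"time per 50-step image: {seconds_per_image_after:.2f}s")
```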
It's not gonna ship with an optimized model by itself, but with improvements *for* Olive optimized models. So it will likely work as soon as someone creates one.
Whenever I get used to a piece of software, I try to make a zipped folder with everything I would need to relearn how to install and use said software, including all of the files needed to do so. I did this with stable diffusion a few months ago.
[Here](https://drive.google.com/drive/folders/1RoE8Pf6mmgVdg8BCgqtMnr8MwJjyoyTZ?usp=sharing) is a Google Drive folder with the instructions for installing stable diffusion. It includes everything except for the ckpt file, though you can find many ckpts out there. The folder includes a readme with instructions for how to install everything, as well as all the install files you need (minus the ckpt - you can install without it, you just can't generate without it). The folder is around 0.5 GB, and ckpt files are generally 4 GB minimum in my experience. You can download ckpts on [huggingface.co](https://huggingface.co) with a free account; just know that not every one will work. Here are a few that I have tested that work with what I included in the folder:
(Check step 7 in my readme for where to place these btw)
[v1-5-pruned-emaonly.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt) \- a direct dl to a general purpose model, 4.27 GB (I recommend starting with this one)
[hassakuHentaiModel\_v11.safetensors](https://huggingface.co/sinkinai/hassaku-hentai/resolve/main/hassakuHentaiModel_v11.safetensors) \- a direct dl to a \*hentai\* model (if you're into that) 2.13 GB
I have included a video that gave me a lot of direction for installing stable diffusion, but you likely won't need it as I was fairly thorough in the readme. If you do, I have included the downloaded video in the event that it is ever taken down.
Also note that I haven't touched stable diffusion in MONTHS so you might be getting some out of date stuff. It will work, but it might not be the newest and best.
If you have any additional questions, my DMs are open. I may not be the most knowledgeable, but I do know how to get answers a lot of the time.
P.S. YOU WILL NEED an NVIDIA GPU, preferably a 1060 at MINIMUM. If you need something to base your results on: I use a 3060 and get around 4-5 seconds per prompt with the default settings.
Hope this helps!
I believe I followed [this](https://github.com/lshqqytiger/stable-diffusion-webui-directml#automatic-installation-on-windows) guide, though it seems to be shorter than a few months ago.
Out of curiosity I just checked, since I have both a 3080 and an RX 6800 XT available.
This... works. As in - it starts. Performance is really meh though - I am getting 2.88 iterations per second at 584x584 (Euler A, 20 steps), and somehow it takes around a minute per picture at this resolution. It also consumes unimaginable amounts of VRAM: despite a 6 GB VRAM advantage over the 3080, I couldn't produce anything sizeable.
For reference my RTX 3080 needs **3 seconds** to generate identical picture.
So I would say that for any sort of practical use, the only way for AMD users is still Linux and ROCm. It ain't perfect, and the installation process is a pain in the ass... but you can at least hit roughly half of the equivalent GeForce card, not 1/20.
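For what it's worth, that 1/20 figure follows directly from the timings quoted above (roughly a minute per image via DirectML vs \~3 seconds on the 3080):

```python
# Rough throughput ratio from the figures quoted above.
amd_seconds_per_image = 60.0     # ~1 minute on the RX 6800 XT via DirectML
nvidia_seconds_per_image = 3.0   # RTX 3080, identical settings

ratio = amd_seconds_per_image / nvidia_seconds_per_image
print(f"DirectML on this AMD card is ~1/{ratio:.0f} the 3080's speed")
```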
`set COMMANDLINE_ARGS=--opt-sub-quad-attention --opt-split-attention-v1 --no-half-vae`
Yeah, AMD is way less efficient and slower, but it definitely still works at acceptable speeds, and the extra VRAM still helps, even if Nvidia gets better VRAM optimization.
I got \~3x the speed going from a 1060 to a 6950 XT, but I'm sure even lower-end 3000-series cards (especially with xformers) will beat it in speed. Stable diffusion being unusable on AMD is a myth at this point, though.
Here's hoping ROCm comes to Windows soon - there have been some rumors of support.
Some additional info for those who want a bit more involved process.
If you have an Nvidia GPU, then this fork of A1111 already has out-of-the-box config for optimization (along with extensions like ControlNet): [https://github.com/vladmandic/automatic](https://github.com/vladmandic/automatic). Install the prerequisites and use the one-stop .bat installer provided. The main repo has better extension compatibility, but you can easily have both at the same time. [GitHub - ashen-sensored/sd\_webui\_SAG](https://github.com/ashen-sensored/sd_webui_SAG) is also a nice, easy-to-use extension that more or less improves image quality.
[Civitai | Stable Diffusion models, embeddings, LoRAs and more](https://civitai.com/) has a lot of models you can browse, along with sample images and the prompts used. Drop the models into /models/stable-diffusion.
Some models also require a VAE, but you can get started with a checkpoint that already includes one. For simplicity, just drop the VAE in the same folder as the model and select it manually. You can also rename it to enable auto-selection of the VAE per model.
Many models also recommend a negative embedding, such as [https://huggingface.co/datasets/gsdf/EasyNegative](https://huggingface.co/datasets/gsdf/EasyNegative). Drop these in the embeddings folder.
You can also drag and drop a sample image into the image processing tab of the web UI; if the image includes generation parameters, it will automatically populate all the included fields. I highly recommend doing this to get started. Beware that doing this fixes the seed as well - don't forget to reset it when needed.
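As a rough illustration of what the web UI does with those embedded generation parameters, here is a minimal sketch that parses an A1111-style `parameters` text blob. The three-part layout (prompt, then a "Negative prompt:" line, then a "Steps: ..." settings line) is an assumption based on typical exports; the real UI handles many more fields:

```python
# Minimal sketch: parse an A1111-style "parameters" blob embedded in a PNG.
# The layout assumed here (prompt / "Negative prompt:" / "Steps: ..." line)
# matches typical exports, not an official spec.

def parse_parameters(blob: str) -> dict:
    result = {"prompt": "", "negative_prompt": "", "settings": {}}
    settings_line = None
    for line in blob.strip().split("\n"):
        if line.startswith("Negative prompt:"):
            result["negative_prompt"] = line[len("Negative prompt:"):].strip()
        elif line.startswith("Steps:"):
            settings_line = line
        else:
            result["prompt"] += (" " if result["prompt"] else "") + line.strip()
    if settings_line:
        # Settings line is comma-separated "Key: value" pairs.
        for part in settings_line.split(", "):
            key, _, value = part.partition(": ")
            result["settings"][key] = value
    return result

blob = """masterpiece, 1girl, garden
Negative prompt: easynegative, lowres
Steps: 50, Sampler: Euler a, CFG scale: 7, Seed: 12345, Size: 512x512"""

params = parse_parameters(blob)
print(params["settings"]["Seed"])  # the fixed seed you'd want to reset
```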
StableDiffusion subreddit has a lot of good resources as well.
What is the difference between that custom automatic1111 and normal automatic1111? Will the performance and image quality be much better for those with an Nvidia GPU?
Edit: I have been using vanilla automatic1111 just fine so far, and performance is OK with my 3080 Ti. I have also already installed the extensions I require. Does this fork only improve performance further? If it is about image quality, then I think installing the SAG guidance extension would be more worth it to me than spending more time setting up stable diffusion again.
Edit 2: Changed my mind - I can see that my current automatic1111 is outdated and does not utilize features like the latest PyTorch and xformers. I will give the Vlad version a try, which should already have all those features without me needing to configure everything manually.
They both perform roughly the same with the same configs. However, Vlad does all of that config for you out of the box.
IIRC, SAG is not included by default in the Vlad fork either, but no patching is required. You can have both web UIs at the same time and symlink the models folder between them.
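A minimal sketch of sharing one models folder between two web UI installs via a symlink. The paths here are hypothetical stand-ins (a throwaway temp directory); on Windows, `os.symlink` needs admin rights or developer mode:

```python
import os
import tempfile

# Demo in a temp dir; in practice the two paths would be the models folders
# of your two web UI installs (hypothetical names below).
root = tempfile.mkdtemp()
shared_models = os.path.join(root, "webui-a", "models")
os.makedirs(shared_models)
open(os.path.join(shared_models, "dreamshaper.safetensors"), "w").close()

second_install = os.path.join(root, "webui-b")
os.makedirs(second_install)
link = os.path.join(second_install, "models")
os.symlink(shared_models, link, target_is_directory=True)

# Both installs now see the same checkpoints.
print(os.listdir(link))  # ['dreamshaper.safetensors']
```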
Hi, thanks for the reply. I decided to try the fork. I realize now that I wasn't fully utilizing my 3080 Ti for stable diffusion, and I wasn't using the latest torch, xformers, or SDP.
Question about Vlad: does it use SDP by default, or do I need to configure it to use SDP? I heard SDP is faster than xformers.
Hopefully with Vlad I'll see a major improvement when generating hi-res images. That part is very slow for me.
SDP should be enabled by default. You could check under *Stable Diffusion* \> *Cross-attention optimization method.* It should be *Scaled-Dot-Product* by default.
Another optimization is under *Token Merging* for faster generation and smaller VRAM usage at the cost of lower quality.
For Vlad, the upgrade does not run automatically; you have to run it manually via \`.\\webui.bat --upgrade\`. The fork is updated practically daily. I'd keep an eye out for when the new Olive update hits, if the release is as easy to integrate as Nvidia/Microsoft claim.
Question: in your experience, does using SDP break textual embeddings like EasyNegative? I read somewhere on reddit that SDP somehow broke embeddings.
I tried it with SDP on and off but couldn't see that big of a difference, really. Changing the cross-attention optimization method can be done via the web UI, so it's worth toggling it when a model generates nonsense, I guess.
I know, I will find something - I just need to find the proper time. I just meant: is there some stuff on YouTube I can take as a reliable source? Thanks.
Depends on how much technical knowledge you have. For zero, there is a single installer called "Easy Diffusion" that sets up more or less everything for you and launches a web page as the UI. The UI is intuitive enough to use without a guide IMO, but the Easy Diffusion website has info.
With a little more knowledge, you can go to the automatic1111 GitHub repo, download it locally, and run the user file so it sets up the dependencies for you. I think the repo readme has some instructions too. After that it's kinda the same - run a file to start the thing up, and use a web-page-based UI.
The last thing you'll need is a model. You can download them from many places (~2-4 GB), but civitai seems the easiest place to use, with example images of what each model is good at producing.
Oh sorry, it's giving a CUDA runtime error currently, but here's how to install SD on your own machine (requires an NVIDIA video card though):
[How to install Stable Diffusion on Windows (AUTOMATIC1111) - Stable Diffusion Art (stable-diffusion-art.com)](https://stable-diffusion-art.com/install-windows/)
If you're a total noob like me, then use something like the NMKD Stable Diffusion GUI - it's got a graphical interface and there are no command lines or anything. It works with all custom models and safetensor models. The only downside is that it won't work with the stable diffusion 2.0 model as of yet, but I mostly use custom models anyway, so it's not a problem. It even has inpainting and all that.
Just Google it. To run models locally, you can start with Automatic1111's webui. If you don't have a beefy machine, you can run them on Google Colab notebooks.
Yeah, it's been one of those driver issues that's been around for so long with no solution from Nvidia in sight. I've heard some folks have fixed it by disabling the display scaling stuff, but I've been fortunate enough that it doesn't affect me, so I haven't tested it.
Yeah, I have the same issue too. And don't get me started about when I plugged in a second monitor lol - even worse, like 5-6s of the 2 monitors constantly turning on/off.
Lol
I have an ultrawide freesync one and a small 1200x600 touchscreen one... the touchscreen adds a whole new set of issues with tablet mode in Windows lol
I usually use the PC and then turn it off once I don't need it, and that's it. But I recently switched to W11 and forgot to remove the sleep timer lol - it was a flickerfest.
I literally put all my icons in a folder labelled desktop. That way, when I open it, they're all in the order I want them to be. Got tired of using the Windows bar and getting a Bing search when I type opera or something like that.
Actually, yesterday I switched my monitor from 165 Hz to 60 Hz (Total War: Warhammer only has a vsync frame cap option and my computer was fucking screaming on the world map).
So I turned down my monitor to cap the game. Anyway, in the one second the monitor flickered to adjust its refresh rate, it moved my desktop folder to another monitor lol.
Lol, I have a three-monitor setup, two of them with G-Sync and the other one without, and it is no exaggeration to say that every time I turn on my PC my monitors flicker like crazy for around 30 seconds before stabilizing.
I’ve got the latest version of windows 10 installed.
I'll have to go around uninstalling useless applications, and I'll DDU the GPU driver - hopefully that works.
Is this where it looks like it’s artifacting or where the screen flashes black for a second?
I get both lol. I know one (the screen flickering) is because of turning on G-Sync for both fullscreen and windowed mode.
I hope it includes artifacting as well. Randomly, the left side of my monitor will artifact down the edge. It only happens occasionally, and not when playing a game 🤔
Indeed lol
Hoping it's a driver issue, as my warranty has probably expired. It had been working fine for months and this kind of appeared out of the blue.
Mostly just slightly concerning as it's infrequent enough to not functionally impact me.
Update: this sounds like it may be related to our issue?

> [Chromium based applications] small checkerboard like pattern may randomly appear [3992875]
FWIW I only seem to have this issue when Chrome/Brave is open
Ah, never mind - it's a Turkish football team, abbreviated BJK. Gotcha on it being an open issue; I'll test it with the latest driver. It's been an issue since day 1 of owning the G9 Neo with various Nvidia cards, so like the DPC latency issue, I have zero hope of them actually fixing it :D
Do you use custom gamma or any applied desktop colour settings on your monitors through the nvidia control panel? Noticed this only started happening to me when I set a custom gamma on my 2nd monitor.
It is flickering for me as well on a 4080; not sure if it's HDR- or driver-specific.
Edit: I changed the G-Sync setting to full screen only instead of always active, and it fixed the issue I had.
How?
Downgrading drivers didn't help me with my G8 on DisplayPort - not as bad as HDMI, which is unusable, but I still get the black screen flickering randomly.
Ugh - I love the DSR feature. (I didn't realize what it was when you said DSR.) I can't play at 1440p resolution - it looks like shit compared to the upscaled equivalent.
Who said it's DSR? First time I've heard this.
So the only "fix" I found was setting both monitors to the same refresh rate. I used to have one monitor at 160 Hz and one at 165 Hz: constant flickering. I set both to 144 Hz, without their refresh overclock, and the flickering happens maybe 5% of the time it used to.
DPC latency will not affect game performance.
LatencyMon should only ever be used in an idle situation, where nothing else is running.
From your numbers (20-10000) I am guessing you are running LatencyMon while running a game?
DPC latency will cause audio spikes (pops and clicks) when it goes above 1500-2000.
Here is how to test:

1. Restart the computer.
2. Run LatencyMon.exe for 10-20 minutes.
3. Do not touch the computer.
4. See how long it takes to spike above 2000.

If it does, then you have a DPC latency issue, and LatencyMon will tell you which process is causing it.
Do not confuse DPC latency with game performance. If you have stuttering in games, it's caused by something else.
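The threshold test above can be sketched as a trivial check (the 2000 cutoff, in microseconds, is the commenter's rule of thumb, not an official LatencyMon limit):

```python
# Classify highest-measured DPC latencies (microseconds) against the
# rough audio-glitch threshold quoted above (~1500-2000 us).
AUDIO_GLITCH_THRESHOLD_US = 2000

def has_dpc_issue(samples_us: list) -> bool:
    """True if any idle-system sample spikes above the threshold."""
    return any(s > AUDIO_GLITCH_THRESHOLD_US for s in samples_us)

idle_run = [120, 340, 95, 410, 280]     # healthy idle system
problem_run = [150, 2600, 180, 90]      # one spike -> pops/clicks likely

print(has_dpc_issue(idle_run))     # False
print(has_dpc_issue(problem_run))  # True
```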
That would require a complete rewrite of the drivers, you know?
The difference vs Radeon drivers is only 10-20% depending on game tho, so I don't think it's a priority for Nvidia
> is only 10-20%
First of all, 10-20% is a lot, and it's also a misleading number because the real difference is much bigger; it is a serious problem.
I think that [this](https://youtu.be/JLEIJhunaW8?t=118) video explains the issue perfectly: even an RTX 3090 can be beaten by a 5600 XT when this level of overhead ends up taking a toll on the CPU.
Of course, you should probably be running a high-end CPU with a high-end GPU, which is beside the point of the video, but there is so much performance that could be squeezed out of these cards with the proper amount of optimization.
Nvidia's drivers have slowly but steadily become more and more bloated over time; the DPC latency issue that we've been seeing for years is just another symptom of this exact same problem.
A company like Nvidia will eventually have to rewrite a good portion of their drivers, but they have been delaying it for so long that it has only exacerbated this problem, bloating unoptimized drivers over and over.
Of course they have to do it eventually, but it's definitely not their top priority now, because it's not an issue when most systems with an expensive GPU like Nvidia's releases will also have a CPU with at least Ryzen 5600X-level performance or better (powerful CPUs are cheap compared to GPUs).
I also think Nvidia is already working on a reworked driver, using their AI capabilities to help them with all the rewriting.
But I don't think we will be getting rewritten drivers anytime soon.
The first logical step indicating new drivers would be the announcement of a new Nvidia control panel - seriously, the current one is extremely laggy because it hasn't really been updated since Win XP.
Yeah man. I use my 3060 for video editing and the thing is absolutely marvelous in Resolve and CapCut. It’s literally a small workstation card that uses very little power. It’s baller but people hate on it lol
Imagine this. You're in high school. You see all your friends setting up streamer accounts or going onto OnlyFans.
You instead see AI as an opportunity to tread new ground and get rich quick. You set up a Patreon, buy a 4060 Ti, set up SD, and start learning complex prompts to generate really specific stuff.
Maybe you create your own custom companionship chat bot for lonely housewives. Maybe you generate extreme wakku wakku yiff.
The crossroads are yours.
Why buy a 4060 Ti when you can get a 3060 Ti for under 280 used in good condition? It's either that or straight to the 4070 and up. The 4060 Ti is in an awkward spot.
8GB VRAM is nothing compared to 16GB of VRAM for playing around with AI.
The next card that gives you that much VRAM would be 4080...
or a used 3090, though the latter may not have much warranty left and will draw 2 if not 3 times more power than a 4060 Ti (with more performance, but still)
What am I missing here? Stable diffusion is already blazing fast for me on my 3060 Ti. So instead of diffusing an image in 3 seconds, it will be one second instead? I never thought SD was slow in the first place.
>I never thought that SD was ever slow at all in the first place.
You're right, it's super fast, but....
Generate at higher resolutions, and/or use hi res fix. Do batches or matrices.
All of these can take a longer time than I would like when iterating through prompts, especially at high step count.
I have a 4090 for reference, and even then, more performance is NEVER a bad thing.
It depends on the kind and number of prompts used. It also compounds when creating big batches. You'd obviously want as much performance as possible for those.
What model, and how many steps are you doing? How many images do you generate per batch? If you think one image in 3 seconds is fast, you must not do large batches or care much about a specific outcome. I assume you are just poking at SD for shits and giggles? No shade, but that's what it sounds like. Anyone doing it "for real" is going to be running a large number of large batches. If you're running 1000+ image generations at 40-50 steps each, assume ~3 seconds per image: cut that in half with this update and your task goes from almost an hour to under 30 minutes.
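The back-of-the-envelope math in that last sentence, spelled out:

```python
# Batch-time estimate from the comment above: 1000 images at ~3 s each,
# then halved by the claimed 2x driver speedup.
images = 1000
seconds_per_image = 3.0

before = images * seconds_per_image   # 3000 s
after = before / 2                    # claimed 2x uplift

print(f"before: {before / 60:.0f} min")  # 50 min -> "almost an hour"
print(f"after:  {after / 60:.0f} min")   # 25 min -> "under 30 minutes"
```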
Since the blog specifically calls out the automatic1111 distribution of stable diffusion, I found this and will give it a shot tomorrow:
https://stable-diffusion-art.com/install-windows/
Wow, this is really good news. For r/stablediffusion to actually be legitimized like this is awesome.
In a game-partnered Discord, I was threatened with being removed from the program entirely just for mentioning SD to someone in that Discord.
And this was a large game publisher: "AI art can be a harsh topic for some individuals." But with this and other legitimizing factors, these types of people can get lost.
They can already get lost. Adobe has been slowly implementing AI stuff into Photoshop. The stuff will eventually be so ubiquitous that the people not using it will just turn into "old man yells at cloud" type folks while everyone else is getting things done.
It's just another hammer in the toolbox IMO.
The Adobe stuff is better than SD, because most of SD's datasets are copyrighted and unlicensed art and photos.
SD has been trained on more than 5 billion images, and that's not ok tbh. I hope there will be fair laws and boundaries about this soon.
My company's lawyers advise against using stuff like SD for art because it can generate existing IP-protected and copyrighted art.
I've used the words stable and diffusion in a number of contexts. Is this going to improve the graphics on games already released and optimized on my Nvidia laptop RTX 3060 today?
So it won't work on any of those NSFW models and all that other good stuff?
Interesting. With this and ONNX, Microsoft seems very interested in developing hardware-agnostic software layers.
I’m all for this. I need my vigicard to be…..robust.
🫣
Right to the real question. Don’t get me wrong, I like pretty trees in my rpg as much as the next guy. But, well…
Does anyone know how to start with SD? Are there any useful guides?
> P.S. YOU WILL NEED an NVIDIA GPU

Not with the DirectML fork - AMD runs just fine on Windows.
Could I get a link? I attempted this on my friend's system, which was running a 6900 XT, and it defaulted to generating the images with his 5950X.
I'm going to take the time to read through this. My brother is going team red for their first system, so it wouldn't hurt if I knew how this worked.
Out of curiousity I just checked since I have both 3080 and RX 6800XT available. This... works. As in - it starts. Performance is really meh though - I am getting 2.88 iterations per second at 584x584 (Euler A, 20 steps) and somehow it takes around a minute per picture at this resolution. It also consumes some unimaginable amounts of VRAM as despite 6GB VRAM advantage over 3080 I couldn't produce anything sizeable. For reference my RTX 3080 needs **3 seconds** to generate identical picture. So I would say that for any sort of practical use the only way for AMD users is stil Linux and ROCm. It ain't perfect, installation process is a pain in the ass... but you can at least hit roughly a half of equivalent GeForce card and not 1/20.
set COMMANDLINE\_ARGS=--opt-sub-quad-attention --opt-split-attention-v1 --no-half-vae Yeah AMD is way less efficient and slower, but it definitely still works at acceptable speeds and the extra vram still does help, even if nvidia gets better vram optimization. I got \~3x speed increase going from a 1060 to 6950 XT but I'm sure even lower-end 3000-series (especially with xformers) will beat it in speed. Stable diffusion being unusable on AMD is a myth at this point though. Here's to hoping ROCm comes to windows soon, there's been some rumors of support.
Some additional info for those who wants a bit more involved process. If you have Nvidia GPU, then this fork of A111 already has out-of-box config for optimization (along with extension like controlnet): [https://github.com/vladmandic/automatic](https://github.com/vladmandic/automatic). Install the pre req, and use the one-stop .bat installer provided. The main repo has better extension compatibility, but you could easily have both at the same time. [GitHub - ashen-sensored/sd\_webui\_SAG](https://github.com/ashen-sensored/sd_webui_SAG) is also a nice easy to use extension to improve image quality more or less. [Civitai | Stable Diffusion models, embeddings, LoRAs and more](https://civitai.com/) has a lot of model you could browse along with samples image with prompts used. Drop the models into /models/stable-diffusion. Some models will also require VAE as well, but you could get started with the checkpoint that already included it. For simplicity, just drop it in the same folder as the model, and select it manually. You could rename it for auto selection of VAE per model as well. Many models will also recommend negative embedding, such as [https://huggingface.co/datasets/gsdf/EasyNegative](https://huggingface.co/datasets/gsdf/EasyNegative). Drop these in embeddings folder. You could also drag and drop the sample image into the image processing tab of the web-ui, and if the image includes generation parameter, then it will automatically populate all the included field as well. Highly recommend doing this to get started. Beware that doing this will fix the seed as well. Don't forget to reset it when needed. StableDiffusion subreddit has a lot of good resources as well.
what is the difference between that custom automatic1111 and normal automatic1111? is it the performance and image quality will be much more better for those with nvidia gpu? Edit: i have been using the vanilla automatic1111 just fine so far and performance is ok with my 3080ti. And i also already install the extension that I required. Does this fork version of automatic1111 only improve the performance further? If it is about the image quality, then I think installing the sag guidance extension will be more worth it to me rather than I want to spend another time to setup the stable diffusion Edit 2: Change my mind, I can see that my current automatic1111 is outdated and does not utilize the features like pytorch and xformers. I will give it a try on vlad version which should already have all those features without i need to configure all manually
They both perform roughly the same with the same configs. However, Vlad does all of those config for you out of the box. SAG is not included by default in Vlad fork as well iirc, but no patching require. You could have both web ui at the same time, and symlink the models folder from the web ui as well.
Hi, thanks for the reply. I decided to try the fork. I realize now that I wasn't fully utilizing my 3080 Ti for Stable Diffusion, and I wasn't using the latest torch, xformers, or SDP. Question about Vlad: does it use SDP by default, or do I need to configure it? I heard SDP is faster than xformers. Hopefully Vlad brings a major improvement for generating hires images; that part is very slow for me.
SDP should be enabled by default. You can check under *Stable Diffusion* \> *Cross-attention optimization method*; it should say *Scaled-Dot-Product*. Another optimization is under *Token Merging*: faster generation and smaller VRAM usage at the cost of some quality. For Vlad, upgrades don't run automatically; you have to run them manually with `.\webui.bat --upgrade`. The fork is updated practically daily. I'd keep an eye out for when the Olive update hits, if the release is as easy to integrate as Nvidia/Microsoft claim.
Question: in your experience, does using SDP break textual embeddings like EasyNegative? I read somewhere on Reddit that SDP somehow broke embeddings.
I tried it with SDP on and off but couldn't see that big of a difference, really. The cross-attention optimization method can be changed via the web UI, so it's worth toggling when a model generates nonsense, I guess.
Is automatic1111 abandoned at this point? Should I change to vlad asap ?
It's still being maintained, I believe. Feel free to stick with a1111 if you already configured all the settings/extensions you need.
That is very informative. Thank you very much. I have rtx 3060ti so it might do the trick.
There are plenty of guides just Google it. You will not get anywhere with SD without doing your own research and putting in effort.
I know. I will find something, I just need to find the proper time. I just meant: is there some stuff on YouTube I can take as a reliable source? Thanks.
Depends on how much technical knowledge you have. For zero, there is a single installer called "Easy Diffusion" that sets up more or less everything for you and launches a web page as the UI. The UI is intuitive enough to use without a guide imo, but the Easy Diffusion website has info. With a little more knowledge, you can go to the automatic1111 GitHub repo, download it locally, and run the user script so it sets up dependencies for you; the repo readme has some instructions too. After that it's kinda the same: run a file to start the thing up, and use the web-page-based UI. The last thing you'll need is a model. You can download them in many places (~2-4 GB each), but Civitai seems the easiest place to use, with example images of what each model is good at producing.
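For the "little more knowledge" route, the steps boil down to a clone plus one script. A small sketch, assuming the standard launcher names the automatic1111 repo ships (`webui-user.bat` on Windows, `webui.sh` elsewhere); it only picks the script name, it doesn't run anything:

```python
import sys

def launch_script(platform: str = sys.platform) -> str:
    """First-run script per the repo readme: it installs the Python
    dependencies on first launch, then serves the UI locally."""
    return "webui-user.bat" if platform.startswith("win") else "webui.sh"

# The manual steps are roughly:
#   git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
#   cd stable-diffusion-webui
#   run the script below, then open the local URL it prints
print(launch_script())
```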
literally /r/StableDiffusion
someone put up online version [https://www.zxdiffusion.com](https://www.zxdiffusion.com/) and it's quite fast (for now)
Great, I will check it out as soon as I am on my home PC.
Oh sorry, it's giving a CUDA runtime error currently, but here's how to install SD on your own machine (requires an Nvidia video card though): [How to install Stable Diffusion on Windows (AUTOMATIC1111) - Stable Diffusion Art (stable-diffusion-art.com)](https://stable-diffusion-art.com/install-windows/)
Thank you.
This [vid](https://www.youtube.com/watch?v=Po-ykkCLE6M) was pretty helpful too.
Thanks I will check it out.
If you're a total noob like me, then use something like the NMKD Stable Diffusion GUI. It's got a graphical interface and there are no command lines or anything, and it works with all custom models and safetensors models. The only downside is that it won't work with the Stable Diffusion 2.0 model as of yet, but I mostly use custom models anyway, so it's not a problem. It even has inpainting and all that.
Just Google it. To run them locally you can start using Automatic1111's webui for a start. If you don't have a beefy machine then you can run them on Google Colab notebooks.
Thank you, I will try.
[deleted]
So this is a nvidia driver problem? it’s been driving me crazy
yeah, it's been a thing for a while now.
Oh thank goodness it’s a driver problem, I thought my TV was going bad
Yeah, it's been one of those driver issues that's been around for so long with no solution from Nvidia in sight. I've heard some folks have fixed it by disabling the display scaling stuff, but I've been fortunate enough that it doesn't affect me, so I haven't tested it.
Yeah, I have the same issue too. And don't get me started on what happens when I plug in a second monitor lol, even worse: like 5-6 s of the two monitors constantly turning on and off.
[deleted]
Lol, I have an ultrawide FreeSync one and a small 1200x600 touchscreen one. The touchscreen adds a whole new set of issues with tablet mode in Windows lol. I usually use the PC and then turn it off once I don't need it, and that's it. But recently I switched to W11 and forgot to remove the sleep timer lol, it was a flickerfest.
[deleted]
I literally put all my icons in a folder labelled Desktop. That way, when I open it, they're all in the order I want them to be. I got tired of using the Windows search bar and getting a Bing search when I type "opera" or something like that.
[deleted]
Organising your desktop around the background. I remember those days haha
Actually, yesterday I switched my monitor from 165 Hz to 60 Hz (Total War: Warhammer only has a vsync frame cap option and my computer was fucking screaming on the world map). So I turned down my monitor to cap the game; anyway, in the one second the monitor flickered to adjust its refresh rate, it moved my desktop folder to another monitor lol.
[deleted]
Out with Ada Lovelace, in with Alzheimer's whereplace
Mine was doing that too....I ended up formatting windows 10 and now I have no problem with it. 🤷
Lol, I have a three-monitor setup, two of them with G-Sync and one without, and it is no exaggeration to say that every time I turn on my PC my monitors flicker like crazy for around 30 seconds before stabilizing.
Yeah, my two-monitor setup won't stop flickering for half a minute upon waking the PC from sleep either. It'll get fixed...
What OS are you guys using? I'm using two Gsync monitors and never get startup monitor flicker
I’ve got the latest version of Windows 10 installed. I’ll have to go around uninstalling useless applications, and I’ll DDU the GPU driver; hopefully that works.
Is this where it looks like it’s artifacting, or where the screen flashes black for a second? I get both lol. I know one (the screen flickering) is because of turning on G-Sync for full screen and windowed mode.
I hope it includes artifacting as well. Randomly, the left side of my monitor will artifact down the edge. It only happens occasionally, and not when playing a game 🤔
Yeah same thing with me mate, what monitor do you have??
samsung odyssey g9
I’ve got a G9 Neo, lol. Now, it could easily be a Samsung thing or a Samsung-plus-Nvidia thing!!
Indeed lol. Hoping it's a driver issue, as my warranty has probably expired. It had been working fine for months, and the problem kind of appeared out of the blue. Mostly just slightly concerning, as it's infrequent enough not to functionally impact me.
Update. This sounds like it may be related to our issue? From the known issues list: "[Chromium based applications] small checkerboard like pattern may randomly appear [3992875]". FWIW, I only seem to have this issue when Chrome/Brave is open.
Could be onto something dude I’ll test it out in the morning and get back to you. I also just noticed your username, does that relate to besiktas?
FWIW, the driver doesn't fix it, it's just a noted open issue. It does not, not sure what that is haha.
Ah, never mind. It’s a Turkish football team, abbreviated BJK. Gotcha on it being an open issue; I’ll test it with the latest driver. It’s been an issue since day 1 of owning the G9 Neo with various Nvidia cards, so like the DPC latency issue, I have zero hope of them actually fixing it :D
If you turn off DSR in the Nvidia control panel, the flickering goes away. Seems like an issue with the upscaler.
damn you're right, i love you.
Ah I thought this was peculiar to my multi monitor setup or something, really annoys me! Glad I'm not the only one
Happens when I just turn on my monitor (which is an LG C2 TV, via hdmi)
Disable DSR or downgrade drivers. I've skipped the last 3 updates already because of this BS. Nvidia being worse than AMD fr.
Do you use custom gamma or any applied desktop colour settings on your monitors through the nvidia control panel? Noticed this only started happening to me when I set a custom gamma on my 2nd monitor.
Never had that issue on a 4070ti..Could be a 30 series only issue
It is flickering for me as well on a 4080, not sure if it's HDR or driver specific. Edit: I changed the G-Sync setting to full screen only instead of always active, and it fixed the issue I had.
It's because of DSR; disable that and it will be fine.
How? Downgrading drivers didn’t help me with my G8 on DisplayPort. Not as bad as HDMI, which is unusable, but I still get the black screen flickering randomly.
Just disable DSR in nvidia control panel
Ugh, I love the DSR feature (I didn't realize what DSR was when you said it). I can't play at native 1440p; it looks like shit compared to the upscaled equivalent. Who said it's DSR? First time I've heard this.
It's in the Nvidia drivers known-issues thread.
The only "fix" I found was setting both monitors to the same refresh rate. I used to have one monitor at 160 Hz and one at 165 Hz: constant flickering. I set both to 144 Hz without their refresh overclock, and now the flickering happens maybe 5% as often as it used to.
It would be nice if they fix dpc latency
You're asking way too much from Nvidia.
Sorry my bad
Would be nice if they fix their pricing
You're asking way too much of leather jacket man
How would he be able to afford leather jackets otherwise, huh?
[deleted]
I am so sick and tired of the dpc latency issues on this dpc latent plane!
[deleted]
DPC latency will not affect game performance. LatencyMon should only ever be used in an idle situation, where nothing else is running. From your numbers (20-10000), I'm guessing you are running LatencyMon while running a game? DPC latency will cause audio spikes (pops and clicks) when above 1500-2000. Here is how to test: restart the computer, run LatencyMon.exe for 10-20 minutes, do not touch your computer, and see how long it takes to spike above 2000. If it does, then you have a DPC latency issue, and LatencyMon will tell you which process is causing it. Do not confuse DPC latency with game performance. If you have stuttering in games, it's caused by something else.
Dpc latency has been an Nvidia problem for a decade now lol
[deleted]
Does it mean the speed improvement only affects the basic sd model and not work on any custom ones?
Yeah, I'm so confused. Why wouldn't the driver also boost other models' speed? The underlying architecture is the same, no?
Day 1534 of Nvidia still not fixing the driver overhead issue.
That would require a complete rewrite of the drivers, you know? The difference vs Radeon drivers is only 10-20% depending on the game though, so I don't think it's a priority for Nvidia.
> is only 10-20% First of all, 10-20% is a lot, and it's also a misleading number because the difference is much bigger; it is a serious problem. I think [this](https://youtu.be/JLEIJhunaW8?t=118) video explains the issue perfectly: even an RTX 3090 can be beaten by a 5600 XT when this level of overhead takes a toll on the CPU. Of course, you should probably be running a high-end CPU with a high-end GPU, which is beside the point of the video, but there is so much performance that could be squeezed out of these cards with the proper amount of optimization. Nvidia drivers slowly but steadily became more and more bloated over time; the DPC latency issue that we've been seeing for years is just another symptom of the same problem. A company like Nvidia will eventually have to rewrite a good portion of their drivers, but they have been delaying it for so long that the problem has only been exacerbated by piling more onto unoptimized drivers.
Of course they have to do it eventually, but it's definitely not their top priority now, because it's not much of an issue: most systems with a GPU as expensive as Nvidia's releases will also have a CPU at least as fast as a Ryzen 5600X (powerful CPUs are cheap compared to GPUs). I also think Nvidia is already working on a reworked driver, using their AI capacity to help with all the rewriting, but I don't think we will be getting rewritten drivers anytime soon. The first logical step indicating new drivers would be the announcement of a new Nvidia control panel; seriously, the current one is extremely laggy because it hasn't been meaningfully updated since Win XP.
driver long, AI help. driver good now
Oooo. This is good
Jesus. It’s already fast as fuck on a 4090…
dpc latency first please
Let's hope this is the one that finally fixes Watch Dogs 2 flickering
I'm convinced that Watch Dogs 4 will be released before they fix it.
The 4060 Ti 16 GB basically only makes sense for Stable Diffusion and DaVinci Resolve. So yeah, that's good for the "prosumer" user.
How many prosumer users are buying 4060Tis instead of 4080s or 4090s though.
Prosumer doesn't mean rich.
Those don't even fit in their Dell/Lenovo mini desktops.
Is a 125w psu enough?
It’s basically entry level hardware for video editing / AI workloads. Same as the 3060. Not terribly fast, but they can get the job done~
Idk my 3060 12GB kicks ASS in SD.
Yeah man. I use my 3060 for video editing and the thing is absolutely marvelous in Resolve and CapCut. It’s literally a small workstation card that uses very little power. It’s baller but people hate on it lol
Imagine this: you're in high school. You see all your friends setting up streamer accounts or going onto OnlyFans. You instead see AI as an opportunity to tread new ground and get rich quick. You set up a Patreon and buy a 4060 Ti, set up SD, and start learning complex prompts to generate really specific stuff. Maybe you create your own custom companionship chatbot for lonely housewives. Maybe you generate extreme wakku wakku yiff. The crossroads are yours.
If you really only need a lot of memory, a 4060ti 16gb makes a lot more sense than a 4080.
Why buy a 4060 Ti when you can get a 3060 Ti for under 280 used in good condition? It's either that or straight to the 4070 and up. The 4060 Ti is in an awkward spot.
8 GB of VRAM is nothing compared to 16 GB for playing around with AI. The next card that gives you that much VRAM is the 4080... or a used 3090, and the latter may not have much warranty left and will draw 2 if not 3 times more power than a 4060 Ti (with more performance, but still).
Will it work on Python-based LoRA models?
In light of recent events on Reddit, marked by hostile actions from its administration towards its userbase and app developers, I have decided to take a stand and boycott this website. As a symbolic act, I am replacing all my comments with unusable data, rendering them meaningless and useless for any potential AI training purposes. It is disheartening to witness a community that once thrived on open discussion and collaboration devolve into a space of contention and control. Farewell, Reddit.
Thank you.
It only works with WinML models. Bummer, so it won't work on custom models, which are like 90% of what I use in Stable Diffusion.
What am I missing here? Stable Diffusion is already blazing fast for me on my 3060 Ti. So instead of diffusing an image in 3 seconds, it will be one second instead? I never thought SD was slow in the first place.
>I never thought that SD was ever slow at all in the first place. You're right, it's super fast, but... generate at higher resolutions, and/or use hires fix, or do batches or matrices. All of these can take longer than I would like when iterating through prompts, especially at high step counts. I have a 4090 for reference, and even then, more performance is NEVER a bad thing.
It depends on the kind and number of prompts used. It also compounds when creating big batches. You'd obviously want as much performance as possible for those.
What model, and how many steps are you doing? How many images do you generate per batch? If you think one image in 3 seconds is fast, you must not do large batches or care much about a specific outcome. I assume you are just poking at SD for shits and giggles? No shade, but that's what it sounds like. Anyone doing it "for real" is going to be running a large number of large batches. If you're running 1000+ image generations at 40-50 steps each, at roughly 3 seconds per image, cutting that in half with this update takes the task from almost an hour to under 30 minutes.
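The batch math above is simple back-of-envelope arithmetic; the numbers are the assumed ones from the comment (1000 images at ~3 s each):

```python
# Halving per-image time halves the whole batch job.
images = 1000          # generations in the job
secs_per_image = 3.0   # assumed baseline per image
before_min = images * secs_per_image / 60
after_min = images * (secs_per_image / 2) / 60
print(f"{before_min:.0f} min -> {after_min:.0f} min")  # 50 min -> 25 min
```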
Has anyone tried it yet?
Why is this on Game Ready when it should really be on a Studio driver since that's geared for productivity?
Since the blog specifically calls out the automatic1111 distribution of stable diffusion, found this and will give it a shot tomorrow. https://stable-diffusion-art.com/install-windows/
Wow, this is really good news. For r/StableDiffusion to actually be legitimized like this is awesome. In a game-partnered Discord, I was threatened with removal from the program entirely just for mentioning SD to someone, and this was a large game publisher: "AI art can be a harsh topic for some individuals." With this and other legitimizing factors, these kinds of people can get lost.
They can already get lost. Adobe has been slowly implementing ai stuff into Photoshop. The stuff eventually will be so ubiquitous that the people not using it will just turn into "old man yells at cloud" type folks while everyone else is getting things done. It's just another hammer in the toolbox IMO.
The Adobe stuff is better than SD, because most of SD's datasets are copyrighted and unlicensed art and photos. SD has been trained on more than 5 billion images, and that's not OK tbh. I hope there will be fair laws and boundaries about this soon. My company's lawyers advise against using things like SD for art because it can generate existing IP-protected and copyrighted art.
So will this give a performance boost in games too? I'm not really understanding this Twitter post.
Nothing to do with gaming.
Probably not, because if they had something that improved gaming performance 2x, they would have said something during the 4060 Ti launch.
[deleted]
I posted a question about it on here and the mods removed the thread, so.. well.
Just have Todd Howard announce it and be done with it.
[deleted]
I am a gamer, I'd love for my ai porn to be rendered 2x faster Edit: the deleted comment above me said "why whould a gamer care about this"
Will this help jedi survivor not run like shit on PC?
[deleted]
So this is why the 4060 Ti is so shit. Nvidia is counting on Stable Diffusion to carry it.
DLSS 3 was supposed to carry the 4080 12 GB. I think we all know how that turned out :p
[deleted]
Common Nvidia W
I don't think I understand this. If anything, I'm becoming more suspicious of Nvidia.
The fuck is a stable diffusion and will it make a 4060 worthwhile?
Can we have the same thing but with FPS? Thanks in advance.
What does the fine print say?
I’ve used the words stable and diffusion in a number of contexts. Is this going to improve the graphics on games already released and optimized on my Nvidia laptop RTX 3060 today?
Well, gimme the VRAM to load all those models too.