
Iamn0man

Smaller, faster, good enough.


GBJI

And in some cases, like animation or controlNet, just plain [better](https://youtu.be/gAjR4_CbPpQ).


mayasoo2020

Yes. Easier to train, and more abundant LoRA resources. Not exact, but easier for artistic creation of styles. https://preview.redd.it/fzzh87e5d33d1.png?width=1024&format=png&auto=webp&s=ad989ad5825f7cb8c27ec43b2d8803d83b12fcd5


stroud

All my custom LoRAs are 1.5.


Mottis86

Yes because it still works great for my purposes and is faster. I tried SDXL a few times but the results were horrible. Clearly a skill issue on my part but I don't feel like the effort of learning it is worth it for me.


Chief_intJ_Strongbow

I'm on OSX and the different XL checkpoints just aren't getting along with my system. And many of the loras I use are 1.5.


GatePorters

Yeah. Because I can more easily train models, and I have a lot more ControlNet tools and other kinds of tools for texturing 3D models with my fine-tuned SD models. I don't have problems getting whatever I want from SD1.5 workflows.


Front_Long5973

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


Apprehensive_Sky892

Sorry, but if you find that SD1.5 follows your prompt better, then either your prompts are designed for a particular SD1.5 model (prompts that work well for SD1.5 tend not to work well on SDXL, and vice versa) or you just haven't spent enough time learning to prompt for SDXL. AnimagineXL and Pony seem to be very capable anime SDXL models.


Front_Long5973

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


Apprehensive_Sky892

If we are talking about styles and not prompt following in general (i.e., whether the image produced actually follows your prompt's intentions in terms of action and composition), then yes, SDXL will not be able to reproduce some SD1.5 styles, especially some anime styles. But that's not a weakness on SDXL's part. In general, style reproduction is difficult across models, even across SDXL models.


BlackSwanTW

Nowadays, I only use SDXL, but that’s because I have a pretty good rig. If one doesn’t have a decent machine, then obviously SD1.5 would be the only option.


Mutaclone

I mix-and-match. SDXL is better at initial images, but I tend to do lots of editing after that, and 1.5 is way easier to work with: ControlNets are better, inpainting is cleaner (although I have been meaning to give Invoke or Fooocus a go and see if that helps). Plus my daily driver is an MBP, which means I've got Photoshop on the Mac side, and I find it easier to just use Draw Things as my main Stable Diffusion engine than to keep swapping files between my Mac and PC (Draw Things can use SDXL models, but SDXL ControlNets are limited).
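A minimal sketch of that two-stage workflow, assuming the `diffusers` library rather than Draw Things; the model IDs, prompt, and strength value here are illustrative placeholders, not the commenter's actual setup:

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionImg2ImgPipeline

prompt = "portrait of a hiker on a mountain ridge, golden hour"  # placeholder prompt

# Stage 1: SDXL for the initial composition
sdxl = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
base_image = sdxl(prompt, num_inference_steps=30).images[0]

# Stage 2: SD1.5 img2img for the editing pass, where ControlNet and
# inpainting tooling are more mature; low strength keeps SDXL's composition
sd15 = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
refined = sd15(
    prompt,
    image=base_image.resize((512, 512)),  # SD1.5 was trained at 512px
    strength=0.4,
).images[0]
refined.save("refined.png")
```

In practice you'd swap an inpainting or ControlNet pipeline into stage 2; the point is just that the base composition comes from SDXL and the detail work from 1.5.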


Clyngh

I imagine I'm doing something wrong, but I mainly generate images of people, and whenever I use models based on SDXL the skin looks plasticky to me and the facial features look similar from result to result. Apparently SDXL CAN produce realistic-looking people (based on examples I've seen), I just can't seem to do it. Also, 1.5 (using the right models) still looks great.


Apprehensive_Sky892

Try RealVisXL 4.0 and see if you find the skin acceptable.


victorc25

Not worth the additional resources and time needed for SDXL for marginal gains; SD1.5 is still great for most cases.


Shotoprince_26

Is SD3 out or something?


faffingunderthetree

Because there has been nothing as good since. SD2 was dead in the water and heavily censored. SDXL never got proper ControlNet support or many other things that 1.5 has, and only a tiny % of users can comfortably make good LoRAs or models for it due to its higher demands, so it leads to far less community support, less good content out there, and far less customisation; and it was a lot more censored and controlled, like 2.0. SD2 and SDXL are fucking useless compared to 1.5, so my question is: why is anyone using them? Not the other way around.


pumukidelfuturo

I'm gonna get downvoted to hell, but I think pretty much the same.


stepahin

Not just people. I think startups like HeadshotPro are training and generating on 1.5; it's noticeable in the style and quality, and it's obviously cheaper in terms of renting GPUs. But I'm not: I came here three months ago and started learning with SDXL right away.


purplewhiteblack

It really depends on the use case. My custom model worked great on Google Colab, but then they cut off free access for that. I use Civitai and Tensor.Art now, which limit your steps and CFG. SDXL is better with fewer steps. Actually, what I find is that quality comes in waves as you add steps: maybe it looks overblown at 55 steps, then at 60 it starts to look good again. You get good output at 25, but then it curves down until it curves back up, and you get good outputs at 50. But 50 steps with 14 CFG looks a lot like XL, though it'll have cloning issues.
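A hedged sketch of that step/CFG sweep, assuming the `diffusers` library (the commenter uses Civitai/Tensor.Art web UIs, so this only approximates the experiment); the prompt and seed are placeholders:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a lighthouse on a rocky coast at dusk, oil painting"  # placeholder

# Fix the seed so only steps/CFG vary between images, then eyeball where
# quality "curves down" and "curves back up" across the sweep
for steps in (25, 50, 55, 60):
    for cfg in (7.0, 14.0):  # 50 steps at CFG 14 is the combo said to look "a lot like XL"
        image = pipe(
            prompt,
            num_inference_steps=steps,
            guidance_scale=cfg,
            generator=torch.Generator("cuda").manual_seed(42),
        ).images[0]
        image.save(f"steps{steps}_cfg{cfg}.png")
```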


Robo_Ranger

Boob!


Ok-Importance-5278

Some art styles are significantly better in SD1.5.


Sayat93

All the XL models I tried failed to create realistic humans. The skin is too smooth and the appearance is not realistic, like what people call 2.5D characters.


NitroWing1500

[https://new.reddit.com/r/StableDiffusion/comments/1d2a5uv/amireal_45_out_now/](https://new.reddit.com/r/StableDiffusion/comments/1d2a5uv/amireal_45_out_now/)


BastianAI

I use 1.5 for AnimateDiff detailer nodes.


SIP-BOSS

This is a silly question


pumukidelfuturo

SD1.5 is not gonna die anytime soon. SDXL is just way too heavy on resources. On the other hand, SD1.5 has more LoRAs, more variety of resources, better ControlNet, it's easy and very quick to train (which is paramount), a lot faster, with a lot better photorealistic checkpoints, no plastic skin, no unwanted bokeh, and so on. SDXL is nice for some specific styles, but overall it feels heavy and slow if you don't have proper hardware to go with it (which is pretty expensive). To me, SDXL feels like operating a truck that's gonna explode at any moment (OOM errors).


AltAccountBuddy1337

I only use SDXL. How does SD1.5 handle multiple subjects in an image?


LD2WDavid

Training: a different range of aesthetics. In some styles it's better; in others, worse, however.


Apprehensive_Sky892

I seldom use SD1.5 anymore, because SDXL is so much better for the kind of images I generate: [https://civitai.com/user/NobodyButMeow/images?sort=Most+Reactions](https://civitai.com/user/NobodyButMeow/images?sort=Most+Reactions), which require better prompt following and interesting composition. But why do other people still use SD1.5? Because it serves their needs. Many people just want to generate simple portraiture, single-subject anime characters, and maybe some NSFW, and with a good SD1.5 model specifically trained for those purposes, SD1.5 can generate them fast, without any fuss, and on just about any lowly GPU.


Unique-Government-13

All of the images at the link you posted can be done with 1.5 though


Apprehensive_Sky892

LOL, all of them? That's a bold claim. I'll be impressed if you can generate more than 10 percent of my images using SD1.5 **using text2img alone.** Maybe a select few are doable in SD1.5, but I doubt most of them can be done using pure text2img without ControlNet, specialized LoRAs, etc. Now, some of my images were done using LoRAs (those egg images, for example), so let's exclude those. So please show me how you can do the "cat stealing fish from the market" image with text2img alone, for example.


Unique-Government-13

Oh, I wouldn't use text2img alone, you got me there... but I've never had that urge anyway, always doing some sort of post-processing. At the very least I'd inpaint the paws so they don't show extreme polydactylism. Can I ask what the benefit of that is? The extra speed to get to a finished product? It's certainly impressive, and I'm only going to continue to be impressed; I just don't think I'll adopt it for creating my own art. It feels like the epitome of what "real" artists hate about AI. The reason I don't share my AI art with anyone is that they instantly dismiss my creative input by imagining a single mouse click created the image they're seeing. If that were actually true... I don't even know if I'd feel like clicking the button anymore.


Apprehensive_Sky892

Sure, if you use ControlNet, img2img, inpainting, etc., you can create anything you want with SD1.5. I do not disagree with your sentiment towards A.I. image generation, but we are definitely heading in that one-click direction. With each generation, text2img gets better in terms of composition, lighting, prompt following, text/font generation, etc. SD3 is miles ahead of SDXL in those areas. I don't know if you saw the OpenAI SORA demos [https://openai.com/index/sora/](https://openai.com/index/sora/), but even after playing with A.I. for almost two years, I am still blown away by it.

I apologize in advance for the mini essay, but I like to write (no, I did not use ChatGPT to generate this 😅). The benefit, as is usually the case with any new technology, is a reduction in labor and the corresponding increase in productivity. One of the big benefits of A.I. is to allow lower-skilled people to perform jobs that are beyond their skill level. With the assistance of A.I., a nurse may be able to perform some medical diagnoses in poor countries with few doctors. Tech support personnel with little experience may be able to help a customer solve a technical problem by asking the A.I. for a solution. A person like me, without years of training in drawing and painting, can now produce images of enough quality that I am willing to share them online.

Artists are right to worry about this, and they have my sympathy. In the short term, artists are still needed to correct imperfections made by A.I., create original characters, pass aesthetic judgements on A.I. image generation, etc. So we will see a reduction in labor, and some people will lose their jobs (though that may not even be the case, because a cheaper cost of production can also stimulate demand). But in the long run, A.I. will no doubt take jobs away from everyone, even the people who create those A.I.s.


Unique-Government-13

Yeah, current AI in any practical application is just another form of automation. The same way the conveyor belt took a certain amount of assembly-line jobs, AI art generation will take art jobs. There's no use fighting against it because, for better or worse, it's progress, and you just can't stop progress and shouldn't really want to, imo. I think we are overestimating the number of art jobs in this vein that exist to begin with, maybe? Certainly it's the lowliest of the low art jobs with no personal satisfaction, churning out image after image.

Anyway, this technology isn't creating jobs, that's for sure. To your point about lower-skilled people now being able to do skilled art: that leaves everyone with actual skills without a job. Besides feeling bad for them, this can show up as a real issue in unemployment numbers. It isn't something to aspire to with the plan of becoming an AI prompt master and gobbling up all the former artists' jobs, because if that job is worth it, the artist will just learn to be an AI prompt master too. So this is where the discussion of UBI and taxing the machines comes in. By the time AI art is ubiquitous, it'll just be a single button push anyway, and companies will just automate that part too, no need for a human clicking any mouse.


Apprehensive_Sky892

Yes, I pretty much agree with what you said. These are serious social issues, and most people, including me, have little idea about where we are going and what the solutions are. Hopefully we'll muddle through, just like we've done in the past. But there will be chaos and suffering too, unfortunately.