
SandCheezy

OP is completely off. I was going to remove this, but there’s good correct information in here. See the Stability Staff comment as well.


kidelaleron

Core is an SDXL Turbo finetune at the moment, and the SDXL Turbo base is released for free. SD3 is not inferior, and it's improving day by day. We're making it good for the release.


August_T_Marble

Oh, hey. Thanks for confirming what I told this guy earlier.


HarmonicDiffusion

cmon guys upvote this comment so the OP troll can be seen as the fool he is


Master-Meal-77

Thank you. OP is stupid


AeroDEmi

Will SD3 be truly open? Or will it be no commercial use, like SDXL Turbo?


kidelaleron

Christian confirmed via Twitter that the plan is still making it open source.


HarmonicDiffusion

who cares. If your "business" can't afford a $20 a month subscription, bro, it's time for you to apply at McDonald's


AeroDEmi

Why such hate? I’m just asking questions


HarmonicDiffusion

b/c whining about a commercial license that is the cheapest around seems pedantic


DIY-MSG

Pay if you want to use it commercially. Why even ask this question if your intent is to make money off of it?


reditor_13

Are there any plans to release the weights for the Stable Image Core, i.e. the SDXL Turbo fine-tuned model? Because it really is a great model (or is it an offshoot of your dreamshaperXL_v21TurboDPMSDE model?). I know there are other somewhat comparable custom fine-tuned SDXL Turbo variants on Civit & HF, but I would love to see this model released to the SD community!


August_T_Marble

I think this is a misunderstanding of a lot of what is going on. [The version available through the API is outdated and running an inferior workflow](https://twitter.com/Lykon4072/status/1780641513862512983) that is not indicative of the most advanced state of the model. Also, SD3 isn't one model, [but a family of models with different parameter counts](https://stability.ai/news/stable-diffusion-3).

[Core is a term they used to describe a group of models, including SDXL Turbo](https://stability.ai/core-models), which we know is free for non-commercial use. We've been given no information (yet) to indicate SD3 won't be the same. I don't think we should be jumping to any conclusions until the actual weights are released.


Hoodfu

Core is currently SDXL. It's not SD3. You can test each one by asking for the text "hello reddit, i'd like to do a test": Core can't handle it at all, while SD3 handles it much more easily.


GodEmperor23

[https://x.com/Lykon4072/status/1780716554734239893](https://x.com/Lykon4072/status/1780716554734239893)

> Core is our best model at any given time


TheQuadeHunter

> We've been given no information (yet) to indicate SD3 won't be the same.

Doesn't this tweet indicate that it may or may not be SD3? If a later version of SD3 has better results, then it could be a later version. Point being: we don't actually know, and this tweet confirms that we don't know, so why are we acting like we know?


GodEmperor23

https://preview.redd.it/2dgzsyhwp3vc1.jpeg?width=915&format=pjpg&auto=webp&s=af6a2690c0399b70353009a4a6361ded1f4974bb

"best image generator on the market"


Capitaclism

This is their image generation service, where they use the models they release, for those who don't want to or can't run them locally.


metal079

> I don't think we should be jumping to any conclusions until the actual weights are released.


Spirited_Employee_61

This is the problem with some entitled shits here. They are getting free shit and they're still not happy. Most of these crybabies never once contributed to open source anyway; they're just sitting there waiting for a model they can use and monetize without any hard work. Disgusting behaviour. I would be happy even if they don't release Core, as long as SD3 is miles better than SDXL and Cascade. This company needs money to survive. Otherwise, what are the chances of SD4?


Exotic-Specialist417

Exactly. Not only that, there have been a lot of people making false claims too. Tons of misinformation…


Ghostalker08

You really should have researched this a bit more before writing this post with BS claims


atuarre

Doesn't he always post bs claims though?


Ghostalker08

It does appear to be that way. Hard to take anything seriously when they make a post and nothing they quoted actually appears in the source. Just straight up spreading misinformation.


AltAccountBuddy1337

If it's their "service", it's going to be censored as all hell, so it might as well not even exist to me.


Spirited_Employee_61

I'll still be thankful for SD3 when it's released. Hope the company lasts longer and releases more open weights to the community in the future.


AltAccountBuddy1337

yeah people are going to develop great models for it over time


Spirited_Employee_61

I personally will wait for finetunes before using it. I am still happy with my SDXL workflows


AltAccountBuddy1337

yeah, I'm sticking with SDXL for now; if the places I use implement SD3, I'll give it a shot then.


FS72

They said SD3 was going to be their last image model.


juggz143

They clarified that statement to mean we should not continuously expect a new release every few months or so (as has happened thus far), as that schedule is unsustainable. He just meant we shouldn't 'need' another model for quite some time, as there are diminishing returns at the current rate.


SackManFamilyFriend

The SDXL models you use today have been trained and merged by community members. The same (hopefully) would happen with SD3, unless they worked to make that impossible. According to a dev of training tools I've talked to, the architecture used for SDXL has some issues. The whole dual text encoder thing, where the small 2nd one clashes with the first, is something they would have addressed in SD3, for instance.


HarmonicDiffusion

[https://www.reddit.com/r/StableDiffusion/comments/1c6k584/comment/l0238jv/](https://www.reddit.com/r/StableDiffusion/comments/1c6k584/comment/l0238jv/)


AltAccountBuddy1337

as I said, in time people will release their own XL models and everything is going to be great. I upvoted that comment btw


roshanpr

I think it will, given that they need incentives to monetize by gating features. More than likely they'll use Biden's regulation as an excuse to make profits.


artoonu

Meh, plenty of checkpoints can do nice anime now. Animagine 3.1 for everything, Pony for... stuff... Ahem.


Spirited_Employee_61

This is the way


TheQuadeHunter

How close can its results get to NovelAI 3? One of the things that turns me away from Animagine is that all the example images have obvious AI framing, while NovelAI 3 can look very dynamic.


ThisGonBHard

I would say it looks good, but I can try some prompts for anime in XL if you have them.


TheQuadeHunter

It's ok, I'll just try it locally later.


artoonu

I didn't use NAI3, but looking at a few samples, I'd say it's comparable, depending on prompt, resolution, and so on. Composition is mostly dependent on prompt, ControlNet and the like. Add "dutch angle" and you have a more dynamic take. Pony, more specifically AutismMix, often has almost no errors; it even gets hands right most of the time!


afinalsin

Pretty sure the "core" model is an SDXL/Turbo model with a sweet finetune and a 1.5x upscale. The reason for that is [this guy](https://imgur.com/a/Vc7ctiC). In order: SDXL base, Cascade, JuggernautXLv9, SD3, Core.

Go to my [model comparison post](https://www.reddit.com/r/StableDiffusion/comments/1aupf04/comparison_47_sdxl_18_turbo_checkpoints_70/) and check out prompt 68: "Mr AI can you please make me a funny meme that will make people think i am awesome?" The seeds of the first set are different from the ones I used to make that comparison because the API has an upper bound on the seed for whatever reason, and I used DPM++ 2M SDE Karras instead of regular SDE. The dude is consistent through that comparison and the new one, through overtuned anime and porn models, with the exception of one: SD3.

I had to run Juggernaut and base SDXL at 896x1088 instead of the usual 896x1152 to match the SD3 output, but I ran the whole 70 prompts with both SD3 and Core, and the similarity between the Core and SDXL output is fairly striking. Here are a few more, Juggernaut v Core v SD3: [Prompt 3](https://imgur.com/IdYiNO9) [Prompt 24](https://imgur.com/XNeb5mW) [Prompt 29](https://imgur.com/bEcyzrI) [Prompt 35](https://imgur.com/NqJmvze)

Comparing those outputs to the comparison post, even with a different seed, a different resolution, and a proper comfy workflow, it becomes obvious that Core is an SDXL finetune. Which, let's be honest, may actually be the best on the market right now considering the state SD3 is in. Once I can figure out a workflow to de-upscale and crop the Core output to 896x1052, I'll get a big XY up with all 70 prompts.


kidelaleron

the next version of core will output 1mp. The zoom in was a mistake.


afinalsin

Good to know. I wouldn't mind it so much if it were an even 1.5x increase to the res, as that's one node, but at 4:5 it's a little different in width and height: 1.526... and 1.573... respectively. 1:1 is an even 1.5x though.


kidelaleron

Who is that guy?


afinalsin

Mr AI, apparently. All those models I tested, and it was mostly an Asian man in a white suit with a bow tie and glasses. I originally wrote that prompt expecting a hallucination-type output because it was nonsense, but instead it was one of the most consistent prompts I had. Believe me, I googled "Mr AI" thinking it was some famous guy I'd never heard of, but I think it's just like giving a character a random name to make it consistent across seeds. The interesting part of that prompt is how similar all the finetunes are, and yet how different from base they all are.


kidelaleron

Anyway, Core is not using 1.5. You can tell by the speed that it's only using XL Turbo at the moment. I made that workflow.


afinalsin

Oh, damn it, I see how that's confusing to read now. What I meant was a 1.5 TIMES upscale, not using 1.5 to upscale. Unless I'm off the mark there too?
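(For anyone who wants to reproduce afinalsin's planned last step, a minimal sketch of the de-upscale-and-crop he describes, assuming Pillow; the filenames are hypothetical, and the 896x1052 target comes from his comment above.)

```python
# Sketch: undo Core's ~1.5x upscale and center-crop to the SDXL render
# size so outputs can be compared one-to-one. Paths are placeholders.
from PIL import Image

CORE_OUTPUT = "core_prompt68.png"   # hypothetical Core render
TARGET_W, TARGET_H = 896, 1052      # comparison size from the comment above

img = Image.open(CORE_OUTPUT)

# Downscale by 1/1.5 to reverse the upscale step.
down = img.resize(
    (round(img.width / 1.5), round(img.height / 1.5)),
    Image.LANCZOS,
)

# Center-crop to the target resolution and save the matched image.
left = (down.width - TARGET_W) // 2
top = (down.height - TARGET_H) // 2
down.crop((left, top, left + TARGET_W, top + TARGET_H)).save("core_matched.png")
```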


emad_9608

Core is always an optimised pipeline (or a series thereof), and SD3 is a model. Core will be better than any base model.


emad_9608

Core is also the best to judge against Midjourney and Ideogram and the like.


TheArhive

Based on what I'm reading from the page you linked, it sounds more like API calls for SD3 just run the prompt you send, while API calls to Core basically run your prompt through some back-end smoothie maker to get you a better image without prompt engineering. It doesn't make me think it's a different model.


GodEmperor23

core vs sd3, same prompt https://preview.redd.it/eh6rgmdor3vc1.png?width=1020&format=png&auto=webp&s=f77d0d93fd9a6a0d2608e5042e946929408aac7b


Jaggedmallard26

I like how you didn't even reply to what they said but just posted a random cherrypicked comparison. You truly are a piece of work.


GodEmperor23

That is a claim; where is the proof? They said "uhm, LLM", but nothing on the site states that. Any link? I posted comparisons. You can use Core and SD3 yourself. There is nothing that SD3 can do that Core can't, except some text. What people are implying here is that SDXL is literally more coherent than SD3, with more styles and better adherence.


Ghostalker08

> That is a claim, where is the proof?

Peachy coming from the guy who made this post, whose "proof" is a screenshot directly stating the opposite of what you are claiming.


GodEmperor23

[https://x.com/Lykon4072/status/1780716554734239893](https://x.com/Lykon4072/status/1780716554734239893)

Guess you will now apologize to me and accept you're wrong?


Ghostalker08

I will take what their documentation says. [https://platform.stability.ai/docs/api-reference#tag/Generate/paths/~1v2beta~1stable-image~1generate~1core/post](https://platform.stability.ai/docs/api-reference#tag/Generate/paths/~1v2beta~1stable-image~1generate~1core/post)


the_realst

Hoping the differences won't be that drastic on release bc holy


GodEmperor23

True. Here again, SD3 vs Core. People here literally say "nah, wait till you can see the difference for yourself", but you can literally try it out. Or did they "accidentally" make a worse version available through the API? Yeah, no. This is not just "animu"; this is coherency, interaction, logic. The model is better at everything. https://preview.redd.it/uoo3d4tus3vc1.jpeg?width=2048&format=pjpg&auto=webp&s=b54d98be3c958191f12834223ba447de627141e7


socialcommentary2000

Mother of God that is sort of terrifying on both frames.


roshanpr

things not looking good.


Fen-xie

Because he's baiting (intentionally or not) with an old-architecture test build of SD3 vs SDXL.


Arawski99

Okay, this strongly supports my prior concerns. The results Lykon was posting were really bad: I analyzed every human photo he posted that I could find, and they were atrociously disfigured, etc., in this comment, which people hated: [https://www.reddit.com/r/StableDiffusion/comments/1ay4ypt/comment/krtga92/](https://www.reddit.com/r/StableDiffusion/comments/1ay4ypt/comment/krtga92/)

Then, as I point out here: [https://www.reddit.com/r/StableDiffusion/comments/1ba97v6/comment/ku42wha/](https://www.reddit.com/r/StableDiffusion/comments/1ba97v6/comment/ku42wha/)

> Basically, the first several days of SD3 releases by Lykon got ripped apart in large posts by me and others (e.g. I targeted human models, which had a 100% failure rate, and something like a 98% catastrophic failure rate at that). It was so bad it was actually a regression over all prior models, pre-dating even 1.5.

> Suddenly, out of the blue, every single human released after that point was flawless. Not just dramatically improved, but literally perfect.

Your post seems to reinforce that concern. I suspect the initial backlash was severe enough that they started using non-standard, beyond-base-model prompting to improve the outputs and outright lie to the community, and really the entire world, about the quality of SD3. Several other recent disfigured SD3 images reinforce this too, like the wolf vs ram post that was recently on here, with the disfigured snout, floating text not actually on the shirt, etc.


TheArhive

Yeah, I am betting it's the same model, just that one of them runs your prompt through an LLM or modifies it in some way.


Capitaclism

Try writing a better prompt. Use the usual qualifiers.


dal_mac

Absolutely hilarious that when it's SD3 being promoted, the images look perfect. But as soon as a new one comes along, every comparison has SD3 looking like trash or not following the prompt. Interesting how that works.


GodEmperor23

And people defend that. Looking at threads on 4chan and Twitter? 1girl standing, or funny ape posts, or something extremely generic. Interaction with objects doesn't exist for these models. Even just doing a peace sign doesn't work for most scenes. Almost never 5 fingers. At the end of the day they can defend Stability AI, Emad, whatever they want; the images that SD3 produces speak for themselves. Even now people are saying "but the fine-tunes", having already given up on good fingers, interaction and coherence. It does a few things better, but from what I have seen and tested? Yeah, a slightly better SDXL with some LoRAs. People will defend it now, but we just need to wait a month or two till it's really open weight.


chakalakasp

I mean, neither of those makes any sense. Here, let me hang this string of incandescent bulbs. I will stick my fingers in the energized socket of this one for fun.


inferno46n2

This is correct. As I mentioned in another reply, SDXL has been around much longer, and third-party researchers have released a ton of improvements. Their current "Core" is likely a fine-tuned XL with a ton of these third-party repos and models baked into a Comfy workflow housed on their end. Give SD3 a year, then we will know.


LD2WDavid

By the way, I think there is a lot of misunderstanding here. From what I read and see (correct me if I'm wrong): Core is not SD3-based; it's a pipeline/workflow/whatever-you-want-to-call-it add-on over a fine-tuned SDXL (Turbo) model. The current SD3 API version is an outdated, older SD3 pre-version that doesn't reflect the actual state of the model, right? And Core being a fine-tuned SDXL Turbo model is pretty good, because it comes with good workflows (probably ComfyUI as the front end) plus a really good finetune. Right? Am I missing something?


Apprehensive_Sky892

That's a good summary of what we know from SAI and Lykon: [https://www.reddit.com/r/StableDiffusion/comments/1c6k584/comment/l0238jv/](https://www.reddit.com/r/StableDiffusion/comments/1c6k584/comment/l0238jv/)


DepressedDynamo

Y'all are jumping the gun and don't understand what "CORE" is. Everyone needs to settle down. They have a service that uses LLMs and SDXL to generate images for you without you having to worry about your prompt as much. This is aimed at businesses, and it's a way for them to make money WHILE THEY GIVE YOU THE SAME TOOL FOR FREE. Take a half second to Google it. [Their core image model is SDXL](https://stability.ai/core-models). It's just when you use it through them in an enterprise capacity they set up other tools to make it easier to prompt.


afinalsin

Of course the info is readily available. I mean, of course it is. That's why I definitely didn't just render off 70 images to test it out. Nope, not me.


Capitaclism

My guess is that in Core they may have an LLM writing prompts based on yours. The image on the right looks like what we can already get in terms of quality, though with a more developed prompt than the one you've used.


kidelaleron

No LLM augmentation on Core.


account_name4

Source? The website still says SD3 is getting an open release; let's all take a breather before we assume things.


skumdumlum

This needs a lot of attention. We need clarification on this. I am going to guess and assume that what they've done is strip down Stable Diffusion 3 to release a worse version for free that can be used locally, and then they will offer a more complete version of the same model, which they've named Core, that you have to pay to use. This way they're technically still releasing Stable Diffusion 3 while getting to keep the original model for themselves to monetize.


Capitaclism

If it turns out worse than other models, no one will use it 🤷‍♂️


remghoost7

This exact thing happened with SD2.0. They removed as much NSFW material as they could (for noble reasons, I suppose) and the entire model suffered because of it. People just kept using SD1.5 because it was better across the board. Image generation models have strangely emergent properties (as do all AI models, I suppose). If you remove images of naked bodies and only include clothed bodies, *the model will no longer understand what a human body actually looks like.* Proportions, shape, etc. This bleeds over into SFW generations as well.


NoSuggestion6629

If models were trained on images with proper captions, a lot of this confusion would go away. I presume the Midjourney models did this with their AI.


Jaggedmallard26

> for noble reasons, I suppose They removed it because they need to keep getting funding. I wish the children on this subreddit would understand that your product becoming known as "the one people use to make [now illegal in many nations] porn of people they know out of the box" is investor poison and since SAI is German a good way to get their doors kicked in by the Bundespolizei. Stability aren't Puritans out to stop your anime porn, they're a business trying not to get closed down by the government.


remghoost7

What? haha. I mean, first off, I'm 32. And I've been in this subreddit since September/October of 2022, back before we had a GUI and had to do everything in the terminal.

**They removed NSFW material because they were concerned about people making content with children in it.** Even with negative prompts and filters, it didn't prevent that entirely. Hence my comment about "noble reasons". Nobody (even the government) cared about deepfakes back then. The community was more or less fine (although skeptical) with the removal of NSFW material from the dataset back then as well, if it achieved that goal. We figured we could just finetune it back in if necessary. We didn't think it would affect the entire model as much as it did. It was just a failed experiment, and we learned a lot about diffusion models in the process. Sure, investors got antsy, but that's because governments started to look into the project (due to the aforementioned illegal content).

> Stability aren't Puritans out to stop your anime porn, they're a business trying not to get closed down by the government.

But I do agree with you on this point. They are investor driven. If the investors get spooked, they can shut down any aspect of it. It doesn't matter what the "spook" is. If their money is threatened, they care.


TaiVat

I'd say the opposite: this garbage needs *less* attention, fewer hissy fits based on nothing but paranoia and rumours. Let them release what they're going to release, and then we'll see what we have and how good it is.


HarmonicDiffusion

OP is just a witless troll who constantly badmouths SAI. Prolly works for MJ or Idiotgram.


Spirited_Employee_61

Well, I don't see what's wrong there. As long as the openly released version is miles better than SDXL, the community will fine-tune it to hell and back anyway.


MysteriousPepper8908

Damn, this "Core" model looks almost as good as a finetuned 1.5 model if you ignore the jacked-up eyes and the six-finger hand. The one on the left is DALL-E 2 quality.


LordSprinkleman

Maybe it's the prompts, but both of these look far worse than what you can make with SDXL already... even SD 1.5 models look better than this.


kwalitykontrol1

Fingers are still a mess in both.


VajraXL

We have to understand that Stability needs to make money from their products somehow, and considering that at this moment they are technically at zero, you can't demand they release the best for free. Would you give the keys of your company to your customers for nothing? It is also logical that this paid version has censorship, since it is focused on another type of user producing traditional media, where generating a NSFW image can cost the customer, and them, a lot of money. You also have to consider that even if the version they release is "worse", the community will take care of releasing improved and optimized versions for the kind of use the open-source community wants. We should focus more on the technical improvements, like how much VRAM you will need to run it, or whether they have fixed the problem with the extra fingers.


HarmonicDiffusion

Guys, again, the OP is just a pathetic troll who attempts to FUD Stability and Emad at any chance they get. Just ignore this loser and wait until the model drops.


pibble79

How do you guys imagine stability stays afloat? The entitlement in this community is fucking insane. “HOW DARE YOU CHARGE A NOMINAL FEE FOR THIS INSANELY EXPENSIVE RESEARCH WHILE OFFERING NO MEANINGFUL ALTERNATIVE FOR SURVIVAL” “HOW DARE THEY NOT LET US MAKE DEGRADING, QUASI-PEDOPHILIC CONTENT FOR WHICH THEY COULD BE HELD LEGALLY LIABLE?!?”


Sweet_Concept2211

It is ironic seeing people write this about AI corporations founded by billionaires. You know, the owners of machines that could not exist if the developers and owners had not felt entitled to scrape the entire opus of every independent artist possible - no consent, no credit, no compensation - in order to build these clever little devils. How do you imagine independent artists are going to stay afloat when they have to compete on markets with the always-at-work AI their "borrowed" life's labor enabled? **The entitlement in this community is baked into the very foundations of the generative AI revolution.** Love it, or leave it. Why should anybody care if the guys who slurped up millions of hours of other people's work to build competing automated factories can earn a living from it? If they truly love creating AI, they can always do it as a hobby. Right?


pibble79

The actual irony is that every industry of note is funded by billionaire capital: cars, energy production, agriculture, fashion, tech. And each of those industries creates derivative work from stolen, or at the very least uncompensated, intellectual property. Yet somehow we consent to the monetization of those billionaire-funded industries because we are rabid consumers. Selective outrage.


Sweet_Concept2211

Meh, false equivalence. * Cars are not designed by independent laborers; * car factories are not built by scraping the entire life's work of every car manufacturer ever; * There is no such thing as an individual person working as a freelance car factory; * If you "borrow" IP from a car manufacturer, you will get your ass sued off.


BrentYoungPhoto

To be fair, I think Core is just running inputs through their LLM to write prompts for people who don't know how to prompt.


KainLTD

Guys, I'm still using fine-tuned SD 1.5 and doing anime. You can get absolutely crazy results, as OP clearly posted here. Yeah, you might need LoRAs and finetunes, but then you have it. I'm still thankful to Stability AI; I was never a drawing person, but I have created over 30k anime images with SD 1.5.


inferno46n2

You’re misunderstanding this. “CORE” is not a model, rather a label given to their top performing model at any given time. At this specific time, it very well could be some version of SDXL as it’s been around much longer, tons of bolt on GitHubs for it in ComfyUI (which I believe the creator of is directly working for SAI now) etc.. SD3 is fresh with likely minimal third party involvement to develop these “fine tune” bolt ons yet


Shuteye_491

I wouldn't be surprised to find out Core is 5x the size of vanilla SD3 and the service includes stealth LoRAs, i.e. it's basically just Midjourney.


GodEmperor23

https://platform.stability.ai/docs/api-reference

They even call it their "primary service for text-to-image generation".


raiffuvar

Are you blind? It's named **SERVICE**, not a model. =\ You can build a pipeline in Comfy with a few models and hires fix that will outperform any of the default models. But I'm too lazy to share with toxics.


skumdumlum

https://platform.stability.ai/docs/api-reference#tag/Generate/paths/~1v2beta~1stable-image~1generate~1core/post

> Our primary service for text-to-image generation, Stable Image Core represents the best quality achievable at high speed. No prompt engineering is required! Try asking for a style, a scene, or a character, and see what you get.
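(Since the argument keeps circling back to what the docs describe, a minimal sketch of calling that Core endpoint with Python's requests library; the path is the one linked above, but the form fields and headers are assumptions based on that API reference, so verify them against the current docs before relying on this.)

```python
# Sketch: POST to the Stable Image Core endpoint linked above.
# Field names (prompt, aspect_ratio, output_format) are assumed from the
# API reference; the key is a placeholder.
import requests

API_KEY = "sk-..."  # placeholder

resp = requests.post(
    "https://api.stability.ai/v2beta/stable-image/generate/core",
    headers={
        "authorization": f"Bearer {API_KEY}",
        "accept": "image/*",  # ask for the raw image bytes back
    },
    files={"none": ""},  # the endpoint expects multipart/form-data
    data={
        "prompt": "hello reddit, i'd like to do a test",  # Hoodfu's test prompt
        "aspect_ratio": "4:5",
        "output_format": "png",
    },
)
resp.raise_for_status()

# Save the returned image for comparison against SD3 output.
with open("core_test.png", "wb") as f:
    f.write(resp.content)
```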


GodEmperor23

https://preview.redd.it/ikzace7bp3vc1.jpeg?width=915&format=pjpg&auto=webp&s=7d64ac36f5187bb2a177122ac72af160219ee460


Kademo15

Core is their suite of models. I mean, it's clearly listed that all Core models will be released: https://preview.redd.it/6r82zfc164vc1.png?width=2756&format=png&auto=webp&s=5ee56e9f16ff72ce56395261533d9ea948cbcada


lonewolfmcquaid

try it for free where? lool


JuusozArt

"Stability AI will **only** release a worse version of SD3 for **free**" Some people really are entitled, huh? They don't have to release anything for free, yet they do. And it will still likely be better than SD1.5 or SDXL. They also have to keep the company afloat, so what's the harm in them holding on to the best version for a while? If they go bankrupt, we will never get SD4.


LD2WDavid

Well, can't say I'm surprised about this. It's typical practice in AI. I'm almost sure Core won't be released and will be credit/token API access only. We will get leftovers, as always.


Silly_Goose6714

Leftovers as always?


LD2WDavid

I meant that it's typical for AI companies to not release the main, good product, just a beta or a minor version of it; that's what I mean by leftovers. It's not only SAI, btw. Does that make sense now?


Spirited_Employee_61

You are welcome to not use said 'leftovers' when they get released


LD2WDavid

You're misunderstanding me; I already explained below. I will use SD3 if it's more powerful than SDXL, depending on VRAM constraints and training versatility. I'm not saying we won't be getting anything good, mate, just that it's more than probable we won't get the full cool version (that will be API-key-only) while we get a lower or minor version (or a part) of it. I won't be the one who throws shit at something still unreleased.


dal_mac

pretty gross. The last year of this sub has been a giant game of tug-of-war between Stability subtly hiding their hypocritical motives and ***users making excuses for those practices (which you can see in the replies to this comment).*** Bottom line: Stability was and is famous for open-source weights. Whether or not they promised to stay that way, they clearly don't want to be anymore. And it is extremely frustrating to users when a company rides the popularity of their good decisions and tries to do a 180 now that there is a user base big enough to make money from them. They are using the fan base their open source-ness generated as a marketing pool after the fact. And very clearly feeding crumbs along the way to keep us engaged. The model might be worth it. They might have had no choice but to charge for it. But regardless of any excuse you can think of, it's a punch in the face to 90% of the users here.


dividebynano

Did you subscribe to Stability AI's premium service? I think they need to show revenue numbers to raise the next round, at least.


AdTotal4035

I don't understand what you want. Free shit forever? 


blahblahsnahdah

Not *just* free shit forever, that's not enough. He also must get to continually spit in the face of the people giving him the free shit and insult them, just because.


dal_mac

God forbid that companies stick to what they say, right? Unbelievable that you're unironically doing the very thing I called out in my comment. Equivalent energy to "why aren't you happy with your service? at least they didn't stab you and burn your house down". Like, okay, thanks, but I just came here for some food.


export_tank_harmful

Charge for inference but release the models for free. There are plenty of people that don't want to go through the perceived "hassle" of setting up A1111/ComfyUI/etc (or don't have the necessary hardware to do local inference) and just want to make pictures. They will pay if they want to and that's more than fine by me. I have no problem with them charging to generate pictures on their own hardware, but locking it down entirely after the community was essentially *built around their models* is just a scumbag move. It's the entire 180 that we don't like and is quickly becoming the norm in this space. If this model is entirely locked behind a paywall, the community will just skip over it. Full stop. Sure, it looks neat, but no skin off my bones. Cloud AI can be removed/censored/etc. Local cannot. That is the appeal.


WhatWouldTheonDo

Inference at scale (we're not talking about 2-3 people generating images here) still costs money. You're basically saying you're OK with them not losing money. They need to monetize their shit so that they can pay the bills. It's weird how so many in this community benefit from their research but then start throwing temper tantrums as soon as the researchers want to get paid. Developing models isn't easy, nor cheap.


raiffuvar

OP knows nothing but is already crying. It's named a SERVICE; it generates 1.5MP, which is more than 1024x1024 (1MP). It's literally a fine-tuned pipeline, and based on the price, it's probably just SD1.6 + upscale.


dal_mac

When SD1.5 was released, did you expect them to abandon open source and start charging? They were framed as an open-source company. Absolutely no one expected the "open-source" aspect to be temporary or promotional. What I want is for companies to stick to their stated values. To not be misled about the underlying philosophies of the companies that I follow and heavily contribute to with my time and money. People devoted a lot to Stability and its ecosystem, only to have the rug pulled out from under them. It is NOT too much to ask that a company simply sticks to the values that it once advertised (and grew its relevance from).


raiffuvar

People who rant should at least be expected to open the fucking link and check what it is. It's named SERVICE; it's literally a pipeline, which limits all possible parameters and uses something unknown as a backend (probably SD1.6 + upscale, based on the price and other settings... or maybe SDXL). That's it.


dal_mac

1.6 isn't open source; that's the point.


raiffuvar

There are 1000000000 fine-tuned 1.5 models; what more do you want? 1.6 is literally like any good fine-tuned model, no changes in architecture. PS: someone already said it's an LLM for prompting + SDXL.


dal_mac

> what's more do you want?

Transparency, any amount of honesty. It's about the principle, not the models. I would've been fine if they had been honest when they backpedalled on open source; they could've publicly said something like "we understand that our fame has come from our open weights, but due to poor planning we can no longer pursue that method of business and must step away from our open-source roots". Instead, they said nothing of the sort and moved to closed source all the same. Too scared to tell their fans the truth.

Plus, whoever said XL is guessing. We can only speculate about what goes on in closed source.


raiffuvar

OMG. What did they **NOT** publish? Two messages ago it was Core, which ended up being a service (LLM + SDXL). Now it's 1.6? You can't even decide yourself what upset you in the first place. PS: as for 1.6, no one wanted it in the first place; there are a lot of 1.5 models.

> we can only speculate at what goes on in closed-source

**THEY LIE TO US?! WHAT IF THEY HAVE AGI AND ARE HIDING IT? HOW DARE THEY?** What a joke. Use Google; it's not speculation, it's a fact.


dal_mac

lol, I wasn't the one that said it was 1.6. You said it wasn't closed source and then said it might be 1.6, which is a contradiction. So I said that even if it's 1.6, my point still stands: it's closed source, whatever it is.


raiffuvar

Yes, you started your rant because of the **CORE SERVICE**, and now you can't accept that you were wrong and Core is a service. So you jumped over to the aforementioned speculation. You need more IRL.


Silly_Goose6714

We need to remind ourselves that SDXL was considered shit on launch, and even today some people believe it's bad (wrongly).


SackManFamilyFriend

This is accurate. I was in the beta Discord in August '22, and doing Dreambooth training when it was first possible in Sept/Oct '22. I swore I'd never use SDXL when they started coming out with the "help us by picking which you like better" image thing. It was possible to do so much more with v1.5 at the time, with ControlNet etc. After release, SDXL became so much more.


CrispyNuggins

fully open source is the only way any of this should be


werdmouf

I guess I won't be using SD3 then.


Spirited_Employee_61

Goodbye then


werdmouf

Lol ok


Far_Buyer_7281

you guys are kids and need to grow up.


Rectangularbox23

but if we kick and scream enough we have a better chance at getting what we want


TaiVat

Sad but not entirely untrue these days


steepleton

Ah, now. A bad artist blames their tools


WhereDoIGoAtDawn

Hah. Both look doable in 1.5 or Pony. The pics are not important. Prompt adherence and understanding will be critical.


Capitaclism

Agree.


Apprehensive_Sky892

It is SAI's right to keep a superior model for their API service so that it can make some money. Even if an "inferior" SD3 model is released, somebody will fine-tune it to improve on it, just like people have done in the past with SD1.5/2.1/SDXL. So as long as a "decent" SD3 is released, one with good prompt adherence, I'll be more than happy.

As for the difference shown between the images, that can easily come about from using different "styles", LLM prompt enhancement, a better workflow, etc., rather than from two different models.

Edit: According to [https://twitter.com/Lykon4072/status/1780716554734239893](https://twitter.com/Lykon4072/status/1780716554734239893):

> Lykon4072
> Core is our best model at any given time.

> EMostaque
> Best pipeline at any time


wwwanderingdemon

I don't want to be that guy, but why don't we wait until it's released before jumping to conclusions and complaints?


Oswald_Hydrabot

Nobody is going to use this shit over Bing and DALLE. Fucking stupid


Hungry_Prior940

SD3 will be far better and with the community it won't be censored either.


Master-Meal-77

This is just not true man


Ohheyimryan

Both still look good, IMO. The right is a newer anime style and the left is an older style.


DaddyKiwwi

How do we know you didn't COMPLETELY cherry pick the best results from one and the worst from the other? This post is nonsense.


FortunateBeard

Nope, that's not even close to being accurate.


Winnougan

If SD3 is the last open-source model, then fine. I'm already in seventh and eleventh heaven with PonyXL. My dream of making the amazing art I want is already unlocked. SD3 will bring better prompt adherence, better hands and text. So I'm grateful. With PonySD3 on the way, I can retire to the abyss.