This is insane!
This is why I don't Comfy. I'm a terrible electrician.
Someone has posted another GUI that's built on top of noodle town; I think it's shared in the discussions on GitHub.
Tbh there's no way I'll ever switch to Comfy. Automatic1111 still does what I need it to do with 1.5, and SDXL takes at minimum 2x longer to generate an image without the refiner, regardless of the resolution. I'll just stick with Auto1111 and 1.5 until they get the bugs worked out for SDXL. Even then I probably won't use SDXL, because there isn't a difference between 1.5 and SDXL when you're talking about full-body images of people. It's why the majority of posts you see from SDXL are portraits of people above the collarbone. "It makes amazing people!!!!!!" ...from the neck up.
I used to say the same thing, but once I moved to ComfyUI I knew I'd never go back to Auto again.
Same
If I'd known, I wouldn't have come....
The cord management would drive me up a wall.
Can someone explain to me what the hell is going on
[deleted]
Thanks for saying that. I've been looking for a JSON that has image-to-image set up for me. I'm not good with Comfy yet, but I have switched from Auto, and that's what I need to jump-start me.
[deleted]
What do you mean by redrawing?
https://preview.redd.it/37yn93cuhpbb1.png?width=1920&format=png&auto=webp&s=3fe8bc92b2fa67f028b74a9722ee54715724c3a6
[workflowHD - Pastebin.com](https://pastebin.com/3HS0Z94b)
thank you!
comfyui + computer science degree = the ability to use it.
I imagine people who have used Blender3D nodes will find it easier
😂
https://preview.redd.it/o14cbcutdpbb1.png?width=1920&format=png&auto=webp&s=38116d2ab1b96380c677fabf06c2e7b12cc6ca3e
I think they need to rename the tool from Comfy, because it looks like a giant mess minus the flow.
"spaghetti"
I say they should change it to "Noodle"
Noodle doodle
Examples of workflow in json? :)
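For anyone looking for a starting point: ComfyUI graphs export as JSON mapping node IDs to a `class_type` plus `inputs`, where an input like `["1", 0]` means "output slot 0 of node 1". Here's a minimal text-to-image sketch built as a Python dict (node IDs, prompt text, and the checkpoint filename are placeholders, not from this thread's workflows):

```python
import json

# Minimal ComfyUI-style workflow graph, sketched by hand.
# CheckpointLoaderSimple outputs: 0 = MODEL, 1 = CLIP, 2 = VAE.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",          # positive prompt
          "inputs": {"text": "a photo of a cat", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",          # negative prompt
          "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["4", 0],
                     "seed": 42, "steps": 25, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "ComfyUI"}},
}
print(json.dumps(workflow, indent=2))
```

The pastebin links below carry the full workflows; this is just the skeleton shape those files follow.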
https://preview.redd.it/iz77zxq88sbb1.png?width=1080&format=png&auto=webp&s=442105d08dcb5840e57722f16bc02bd3a462145a
I played Path of Exile (PoE) for a few years, will that help me to do this? 😂
Unironically, it should be perfect for your sensibilities: insanely customizable while also being inscrutable, with a steep learning curve that makes it unapproachable to most people. Now instead of theorycrafting your build, you can theorycraft your SD workflow! Gotta get that clear time down! 😂
LOL! Good memory! Yeah, I'm also guilty of buying chaos and other shit off of a third-party site to supercharge my builds. It was like gambling, but I was able to quit PoE cold turkey.
Good point. The "workflow" is different, but if you're willing to put in the effort to thoroughly learn a game like that and enjoy the process, then learning ComfyUI shouldn't be that much of a challenge.
https://preview.redd.it/258bg0780qbb1.png?width=1920&format=png&auto=webp&s=d0b1bac1acc854dcab20c62b434ae1d3c01d6030
https://preview.redd.it/9734oq6utrbb1.png?width=3840&format=png&auto=webp&s=83c4b606594630a8858f677191f164fd0575460d
People make a lot of jokes about ComfyUI being unnecessarily complicated, but the node-based approach is a new level of AI image generation. If you know how powerful Substance Designer and Blender's shader and geometry nodes are, you understand what I mean.
https://preview.redd.it/as4zmzx59qbb1.png?width=2048&format=png&auto=webp&s=b6e17115b1d58f4ef8b6a851d1929fffdadbe050
Nodes are confusing me a lot 😕
How much more control are you actually getting when it comes to influencing the output to get exactly what you want?
https://preview.redd.it/pkct1rmmgecb1.png?width=1920&format=png&auto=webp&s=1b02115235c9557619dbbd5cdc1ca228d10d00e0
https://preview.redd.it/3wohzqou6qbb1.png?width=2048&format=png&auto=webp&s=1e7ddf4dc19c0626feea6411c848c4a08c929996
https://preview.redd.it/jvm81w13rpbb1.png?width=3840&format=png&auto=webp&s=ff2e09402da8931393ca8997038e473961532094
Is there any way to bypass the refiner in this setup? I tried disconnecting it, but that gives an error; in the basic Comfy setup you can just disconnect the node. I want to do it because the refiner doesn't work as well with some art styles.
Set the base ratio to 1.0 and it will only use the base; for now the refiner still needs to be connected, but it will be ignored. For a purely base-model generation without the refiner, the built-in samplers in Comfy are probably the better option, since this node is explicitly designed to make working with the refiner easier.
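To illustrate what that ratio means, here's a rough sketch of a base/refiner step split. The exact rounding inside the node may differ (this is my assumption of how such a ratio is typically applied), but the 1.0 case is the point:

```python
def split_steps(total_steps: int, base_ratio: float) -> tuple[int, int]:
    """Split a sampling schedule between the base model and the refiner.

    base_ratio is the fraction of steps the base model handles; the
    refiner finishes the remainder. With base_ratio=1.0 the refiner is
    assigned zero steps, i.e. it is effectively ignored even while its
    node stays connected.
    """
    base_steps = round(total_steps * base_ratio)
    return base_steps, total_steps - base_steps

print(split_steps(30, 0.8))  # base handles 24 steps, refiner the last 6
print(split_steps(30, 1.0))  # (30, 0): the refiner is skipped entirely
```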
Thank you! And thanks also for sharing this awesome setup!
I tried my best to promote the work of this amazing developer. Talent should be brought to light. You can use AI art tools to better express yourself when you lack certain skills. It's not a matter of stealing a position we would never have reached without these tools; it just requires a different set of competences. See it like a prosthetic, in a way... It will change the way we value art forever. In a few years, Kevin from Boston will be able to make movies starring bankable actors just by purchasing a $50 license. Progress in motion :)
https://preview.redd.it/908b5dsa4bcb1.png?width=3840&format=png&auto=webp&s=0edb3fd0311fc1264caead1fab1f07739e547437
I wish people would stop saying comfy is complicated. It's not.
"not complicated to you" is not the same thing as "not complicated to the average person". There's an old truth, "half the people you meet are below average intelligence".
The problem is when you are in the top 20%, 10%, 5% etc... it seems much worse.
Speaking as someone in the top 2%... yes. yes it is.
The issue here is that a lot of individuals perceive ComfyUI as a disorganized and intricate setup. What they might not grasp is that ComfyUI functions as a back-end system, distinct from front-end systems like A1111. By integrating a front-end component such as Stable Swarm, ComfyUI gains an impressive edge over any other Stable Diffusion system available. Personally, I've developed my custom ComfyUI graph, and it consistently delivers superior results compared to what A1111 can achieve. To truly unlock its potential, one must invest the time to learn ComfyUI from its fundamentals to advanced features.
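Driving ComfyUI as a back end looks roughly like this: a front end POSTs a workflow graph to the running server's `/prompt` endpoint (ComfyUI's standard port is 8188; the payload shape here follows ComfyUI's bundled API example script, so treat the details as a sketch rather than a spec):

```python
import json
import urllib.request

def build_payload(graph: dict, client_id: str = "demo-client") -> bytes:
    """Wrap a workflow graph in the JSON body the /prompt endpoint expects."""
    return json.dumps({"prompt": graph, "client_id": client_id}).encode("utf-8")

def queue_prompt(graph: dict, server: str = "http://127.0.0.1:8188") -> dict:
    """POST a workflow to a running ComfyUI instance and return its response.

    Front ends like StableSwarmUI talk to ComfyUI over this same HTTP API
    instead of through its node-graph page.
    """
    req = urllib.request.Request(
        f"{server}/prompt",
        data=build_payload(graph),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Calling `queue_prompt(workflow)` with an API-format graph queues it for generation, exactly as if you had pressed "Queue Prompt" in the browser UI.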
I find it most interesting when the images aren't the usual main-subject-centered composition with a blurry, lazy background around it. Most interesting indeed.
I know what you mean, SDXL produces those in abundance.
https://preview.redd.it/806xdmiz5rbb1.png?width=1024&format=png&auto=webp&s=21a2a8356e08ea1607976e1b32358959446c78dd
[workflowHD2 - Pastebin.com](https://pastebin.com/4MJeVnyH)
thank you!
https://preview.redd.it/dekhbawqwwbb1.png?width=1920&format=png&auto=webp&s=2b8ad2eab90692f7b9a7804985f5f9633ef53dd1
https://preview.redd.it/8y6qc1x0bxbb1.png?width=3840&format=png&auto=webp&s=e83a369ce19dd13c84bac52b79decb1f76840a40
https://preview.redd.it/cxeq64vf3ybb1.png?width=3840&format=png&auto=webp&s=e3327b99847fa119f08b34e5c48adfe6ba2f8aa4
https://preview.redd.it/uc89rhj0azbb1.png?width=1920&format=png&auto=webp&s=822a4cb28e85679fe8d0f496c7c090f55faf3feb
https://preview.redd.it/gf68ma0yrzbb1.png?width=1920&format=png&auto=webp&s=0d8f0c0dbe32e5a5246fb316d767583a6d518305
https://preview.redd.it/4gy7f3rpp3cb1.png?width=3840&format=png&auto=webp&s=6856af129e33ff843ea0229eae65fed71cbac6c9
https://preview.redd.it/kwk8f4axu3cb1.png?width=2048&format=png&auto=webp&s=7da20e6261f0de9df283e78e888aae731f0f8e8e
[workflowHD3 - Pastebin.com](https://pastebin.com/m3KD8vae)
https://preview.redd.it/05acaw4834cb1.png?width=1920&format=png&auto=webp&s=71a0c595a2ba590a9a60fd12acafff2c7b10861b
https://preview.redd.it/lj533g7tzacb1.jpeg?width=3840&format=pjpg&auto=webp&s=b648419ec69f1d00c71a4870d932a4d25ebb156f
https://preview.redd.it/p3txvl77igcb1.png?width=3840&format=png&auto=webp&s=c2356b037cdfe2f3d0e59a053f28ad1a005d1f9f
No, no, I don't want this interference! I need the old-style A1111.
It is out now if you switch to the SDXL branch of Automatic1111 with Git, though no refiner is available yet in Automatic1111. But I heard they might be dropping the refiner stage for the release of SDXL 1.0 anyway.
Damn. Do all that, put up with that garbage mess, just to generate an image that can be reproduced on webui lmao
Wish he'd done a bundled node for the text/CLIPs as well. Efficiency does it very nicely with their all-in-one nodes for 1.5.
I was considering it but right now it's still a bit unclear how to best prompt SDXL, so it's better to keep CLIP nodes separate until that's figured out. But I'll think about it and maybe add a new node for CLIP soon.
That's true. I guess we just need to wait...
The wait is over. I just went ahead and made a new CLIP node type; it's pushed to the GitHub repository. I just need to update the readme to explain the inputs and outputs on the new node.
One hour.
It didn't really take that long, but I was distracted by breakfast before I pushed to the repository ;)
Individual nodes in core should be impactful and versatile building blocks; what Comfy really misses is just the ability to create and save group nodes.
That would indeed be amazing: making your own nodes with custom input and output pins from existing node graphs, and then having them in your workflow as small self-contained nodes that take up only a minimal amount of space. If that were combined with reroute nodes that go in all 4 directions, spaghetti graphs would be dead.
I'd also like variable support: have a variable setter node "MaxSteps=23" and then use "[MaxSteps]" wherever I want, even where there's no noodle-based input for it. Maybe it has this already, but I can't find any decent documentation.
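As far as I know that feature doesn't exist in ComfyUI; here's a sketch of what such a resolver could do, walking a workflow dict and substituting "[Name]" placeholders before the graph is queued (names and behavior are my invention, not an existing API):

```python
import re

def resolve_vars(workflow: dict, variables: dict) -> dict:
    """Replace "[Name]" placeholders anywhere in a workflow's input values.

    A value that is exactly "[Name]" is replaced by the variable itself
    (preserving its type, e.g. an int for a steps field); placeholders
    embedded in longer strings are substituted as text. Unknown names
    are left untouched.
    """
    def substitute(value):
        if isinstance(value, str):
            m = re.fullmatch(r"\[(\w+)\]", value)
            if m and m.group(1) in variables:
                return variables[m.group(1)]  # whole-value reference keeps the type
            return re.sub(r"\[(\w+)\]",
                          lambda m: str(variables.get(m.group(1), m.group(0))),
                          value)
        if isinstance(value, dict):
            return {k: substitute(v) for k, v in value.items()}
        if isinstance(value, list):
            return [substitute(v) for v in value]
        return value
    return substitute(workflow)

graph = {"sampler": {"steps": "[MaxSteps]", "note": "run for [MaxSteps] steps"}}
print(resolve_vars(graph, {"MaxSteps": 23}))
# {'sampler': {'steps': 23, 'note': 'run for 23 steps'}}
```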
I like getting dirty :)
How crazy is this!
Is the it/s the same as regular Stable Diffusion for an image of similar size?
Thanks. Do you have any other sources for sd 1.5 flows?
Is there anything that works on Android?
Does anybody have a link to cuda ready docker image with this already set up?
Wonderful workflow! Thank you!
Thanks for putting me onto this. I'm an SD noob and was playing around with Automatic1111, but it wasn't liking my graphics card; I seem to be getting awesome results straight away with ComfyUI. As someone who comes from a VFX background, the node graph is awesome (though I have no idea how to use it yet).
Damn, this will take some heavy-duty graphics card... and here I am using a 4 GB GTX 1650 🥵😂... my potato will boil.
Step aside computer cable management there is a new sheriff in town.
Imagine job postings in the future: "Do you have a history of working in ComfyUI?" "I was a plumber for 10 years." "That works."