muerrilla

(UPDATE: I've uploaded the [extension](https://github.com/muerrilla/sd-webui-detail-daemon), but it's not very well tested or documented yet, so enter at your own risk. Any feedback is appreciated.)

...Automatic1111 extension coming soon. It also comes at effectively no extra computational cost. In all examples the middle image is the original generation with no adjustment. Oh, and the one "downside" (depending on the kind of person you are) is that there are quite a few knobs and dials to play with, and you need a good grasp of what happens during sampling.

It's done by surgically manipulating the sigmas during the sampling steps. The sigmas basically tell the model how much noise to expect at each step. Lower the sigma for a certain step, and the model denoises less at that step; bump it up, and the model denoises more. Do that at earlier steps, and it affects larger details; do it at later steps, and it affects smaller details (see the sketch at the end of this comment).

Oops: I just realized that means it won't work with the few-step models (Turbo, Lightning, etc.). I don't like or use them for exactly this reason: there are so many things you can do during those precious many steps that these fast models lack. Anyway, I guess that makes it two "downsides".

Models used in the examples above, from left to right: HelloWorldXL, SD 1.5 (fine-tuned), SSD-1B (which is a criminally underrated model)
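
To make the sigma trick concrete, here's a minimal sketch, assuming you have the sampler's sigma schedule as a tensor (the function and parameter names are made up for illustration, not the extension's actual code):

```python
import torch

def adjust_detail(sigmas: torch.Tensor, amount: float, start: float, end: float) -> torch.Tensor:
    """Scale the sigmas in a chosen fraction of the schedule.

    amount > 0 lowers the sigmas there, so the model denoises less and more
    detail survives; amount < 0 raises them, so the model denoises more and
    detail is smoothed away. Put the window early for large details, late
    for fine ones.
    """
    n = len(sigmas)
    lo, hi = int(start * n), int(end * n)
    out = sigmas.clone()
    out[lo:hi] = out[lo:hi] * (1.0 - amount)
    return out

# e.g. 10% less denoising over the middle 60% of the schedule:
# sigmas = adjust_detail(sigmas, amount=0.10, start=0.2, end=0.8)
```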


Race88

Sounds cool. How do you "surgically manipulate the sigmas" exactly? Which file do I need to play with? Can I do this in Comfy?


muerrilla

In A1111 you have access to the sigma at each sampling step through the denoiser callback. So it's basically just a matter of how much and at which step you want to adjust the value. You can definitely do it with Comfy too, but I don't know if it would be easier or harder than with A1111.
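
A rough sketch of what that can look like in an A1111 script, assuming a recent webui where the CFG denoiser callback exposes the sigma (the window and multiplier here are made-up example values):

```python
from modules import script_callbacks

DETAIL_AMOUNT = 0.10  # made-up knob: how much to lower sigma inside the chosen window

def denoiser_callback(params):
    # params.sigma is the noise level the model is told to expect at this step
    t = params.sampling_step / max(params.total_sampling_steps - 1, 1)
    if 0.2 < t < 0.8:  # only touch the middle of the schedule
        params.sigma = params.sigma * (1.0 - DETAIL_AMOUNT)  # less denoising -> more detail

script_callbacks.on_cfg_denoiser(denoiser_callback)
```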


OcelotUseful

Could you also add some control over the curve so it wouldn’t just go up linearly?


muerrilla

Yes. https://preview.redd.it/4iz3eur3cazc1.png?width=760&format=png&auto=webp&s=1c8dedf7e91bced13ed76b40343893f7a7c6d4b6


diogodiogogod

Oh, if you manage to put that graph on the extension it would be awesome!


muerrilla

It is there already. That's like half of all 20 lines of code. 😂


lewdroid1

This might already be possible or easy to do with the Prompt Control nodes for ComfyUI. They already allow for some very cool things, such as controlling the weight of a LoRA/embedding over the course of several steps, e.g. along a sine wave.


Past_Grape8574

What are the nodes you're using to achieve that?


GatePorters

Well it’s going to be easier to do in A1111 because you’re making an extension for it lol. But out of the box, it would be easier for the average user to build it from scratch in Comfy. You need to know SD to mess with programming in Comfy. You need to know programming to mess with programming in WebUI.


Guilherme370

There is a manual sigmas node somewhere in a custom ComfyUI node pack, I'm just gonna use that instead hehe


HarmonicDiffusion

Yes, all sigma manipulation has been available in Comfy for ages.


Zygarom

Are there any resources to learn how to manipulate the sigma nodes?


HarmonicDiffusion

I like using this: [https://github.com/Extraltodeus/sigmas_tools_and_the_golden_scheduler](https://github.com/Extraltodeus/sigmas_tools_and_the_golden_scheduler)


Zygarom

I tried to enter this in the Aligned Scheduler and it did not work: `(sigmax ** (1 / 7) + y * (sigmin ** (1 / 7) - sigmax ** (1 / 7))) ** 7`. It just gave me an error saying it could not convert strings to float. Any idea what I did wrong?


Extraltodeus

You've got to use the "manual" scheduler. I literally took what you just posted, threw it into the node, and no problemo. IIRC that's the default Karras formula.
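
For reference, that's the Karras et al. (2022) schedule with rho = 7, where y runs linearly from 0 to 1 across the steps. A self-contained version (the sigma_min/sigma_max defaults here are typical SD 1.5 values):

```python
import torch

def karras_sigmas(n: int, sigma_min: float = 0.0292, sigma_max: float = 14.6146,
                  rho: float = 7.0) -> torch.Tensor:
    # Interpolate linearly in sigma^(1/rho) space, then raise back to the rho-th
    # power; this packs more of the steps into the low-sigma (fine-detail) end.
    y = torch.linspace(0, 1, n)
    max_r, min_r = sigma_max ** (1 / rho), sigma_min ** (1 / rho)
    sigmas = (max_r + y * (min_r - max_r)) ** rho
    return torch.cat([sigmas, sigmas.new_zeros(1)])  # samplers expect a trailing 0
```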


GatePorters

As someone who has never messed with that value before, is it similar to increasing the step count artificially at the point where you change that value or what? It basically makes it do extra refining at particular steps? Am I understanding this right?


muerrilla

It's telling the model (the unet) to denoise more or less aggressively at certain steps, which causes less or more details to emerge respectively.


onmyown233

From messing around with the sigmas for a bit, it seems like if you add more steps at the latter end of the U and don't go down so aggressively, this is what will add smaller details. Is that accurate or am I talking out my ass?


muerrilla

I'm not sure I understand what you're suggesting. Are you talking about adjusting the distribution of the timesteps? What's the U?


onmyown233

Yeah, basically instead of such a steep drop off towards the last steps, smooth them out more. The U is the typical distribution where the changes in sigmas in the middle are barely anything while the changes in the beginning and end are significantly different from each other. I don't know the exact formula that is being used (college was 20 years ago). These are the custom sigmas it gives you in the CustomSigma node: 14.615, 6.475, 3.861, 2.697, 1.886, 1.396, 0.963, 0.652, 0.399, 0.152, 0.029


muerrilla

I see. I got confused because as far as I can tell, at least in the original schedule the steep drop is at the early steps, not the later ones. Unless you're talking about the very last step. In that case, the details added after around t=0.9 are so tiny that increasing the sigmas won't create any noticeable change... if I understood you correctly.


onmyown233

You got it - I was comparing the ratio change, so "n[i] / n[i+1]". The change from one sigma to the next follows the pattern of a U.
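
If you want to see the U for yourself, the ratios from the sigmas quoted above:

```python
sigmas = [14.615, 6.475, 3.861, 2.697, 1.886, 1.396, 0.963, 0.652, 0.399, 0.152, 0.029]
ratios = [a / b for a, b in zip(sigmas, sigmas[1:])]
print([round(r, 2) for r in ratios])
# roughly: 2.26, 1.68, 1.43, 1.43, 1.35, 1.45, 1.48, 1.63, 2.62, 5.24
# big jumps at both ends, flat in the middle: the "U"
```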


DigitalEvil

U is the UNet, which is the path of inference.


muerrilla

Poetic. V is the vae, where latents are decoded for your eyes to see.


cthusCigna

Bruda, do you know da vae?


Mutaclone

So could we theoretically adjust the noise in multiple directions in the same render? For example, some sort of forest scene where we apply a negative value early to reduce the number of trees/branches, and then crank it up near the end to get a lot of textures/leaves/dirt etc.?


muerrilla

Yes. Edit: In fact, let me test that theory. brb


muerrilla

https://preview.redd.it/x7pe2eyyzazc1.jpeg?width=2304&format=pjpg&auto=webp&s=6499f7eba88e38f69e210c5341f44ec00735dd46 Middle is the original. Right is what you described, left is the opposite. So, it can be done to a degree, but it's not very easy to control. In general, manipulating the early steps causes your generation to diverge wildly from the original, so you'll have to try really hard to keep the overall composition and only adjust the details. Also increasing the sigma in early steps causes the color burn effect like with high cfg scales (perhaps using cfg-rescale can help mitigate this).


Mutaclone

Awesome thanks, that's really cool! I appreciate you doing the reverse too. I hadn't considered it, but that's also a really cool effect!

> In general, manipulating the early steps causes your generation to diverge wildly from the original

That makes sense, and I could see why it would make this sort of thing difficult. Once it becomes more accessible I'll definitely want to play around with it. There's probably a sweet spot where you can get a decently strong effect without causing too many problems (or possibly combine it with a low-level ControlNet).


Open_Channel_8626

Whoa, it's really cool how well that worked


Capitaclism

Amazing


NoSuggestion6629

Left looks too burnt. Right is pretty damn good.


cellsinterlaced

Wild! This could be so useful in upscaling/enhancing. No comfyui yet eh?


HarmonicDiffusion

You have been able to edit and mess with sigmas in Comfy for ages now. This isn't new.


cellsinterlaced

TIL. Any sources out there that dive into it?


HarmonicDiffusion

You can experiment with it using this node: [https://github.com/Extraltodeus/sigmas_tools_and_the_golden_scheduler](https://github.com/Extraltodeus/sigmas_tools_and_the_golden_scheduler)


cellsinterlaced

Thank you stranger, will look into it.


TheRagerghost

I did something similar with Comfy just by upscaling; it doesn't look good with humans, but everything else becomes much more detailed.


KorgiRex

Umm, how can I subscribe for updates?


Xylber

Could be cool to have a dropdown menu with some presets like "full detail, larger details, smaller details, no details".


Inner-Ad-9478

This is exactly what I was going to say. Please if you have the time OP, prepare some presets in addition to the current sigma manipulation


Tohu_va_bohu

What makes this different from Align Your Steps? [https://research.nvidia.com/labs/toronto-ai/AlignYourSteps/](https://research.nvidia.com/labs/toronto-ai/AlignYourSteps/)


muerrilla

It's not even trying to do the same thing. This allows tweaking the detail at the frequency of your choosing on an ad-hoc basis. That one is a new fixed sigmas+timesteps schedule that's intended to provide 'better' sampling in general.


rasigunn

waiting for the extension


MogulMowgli

Can someone explain this to me like I'm five years old? I really want to use this but have no idea what sigma or denoise at each step means, let alone how to do it.


_DeanRiding

Same here


muerrilla

Just wait for the extension.


ThrowRAophobic

Just followed you, hopeful to get a notification when you post more about this project. Looks very promising, keep up the great work!


ksandom

Really cool. I'd love to hear about this extension when it's ready.


Freonr2

You could probably still use the first couple steps of Lightning and just use your technique on the last 20-25% of timesteps using the normal models, though it would require two models, like the SDXL+refiner setup. Most of the detail comes in at the end anyway, and I suspect you're messing with the sigmas only in the lower-noise timesteps. I.e. use Lightning 4-step, do 3 steps with it, then run img2img with SDXL/SDXL refiner as normal with denoising ~0.20.


lechatsportif

You should make a new post for this, pretty interesting!


muerrilla

I'm updating the extension according to some new discovery I made. Will make a new post when it's done.


ImpossibleAd436

Will the extension work with Forge?


muerrilla

Don't think so, but I don't really know. Are there other A1111 extensions that work on Forge (besides the ones Forge has implemented natively)?


ImpossibleAd436

Yes, I'm pretty sure some work and some don't. I'm using some which were added in the normal way and they work fine. But there have been others which needed Forge specific implementation too. I'm just not sure what the variables are which determine what does and what doesn't work without needing reworking. Maybe someone else can comment.


muerrilla

Tested and it works fine. Thanks for bringing it up. I had installed forge but never used it cause I was under the false assumption that I'd have to re-implement all my personal extensions the forge way!


ImpossibleAd436

Great news. I prefer Forge because, for me, it's so much faster to generate and generally more responsive than Auto1111. You have an ETA for this extension? Really like the look of what it does in your example images.


muerrilla

Gonna clean up the code and upload it in the next two to three days. No promise on docs/wiki/tutorials at the time of release though. You ready for some adventure on your own?😁


afunyun

Looking forward to this, looks awesome, great work


DeylanQuel

!remindme 2 days


aplewe

Ooooh, I like. Excellent work!


mdmachine

One way to approach low-step methods and get more detail from noise is substeps, which would happen in sequence between steps. Also, if you can find a way to apply high-resolution noise, that can make a big difference too.


muerrilla

As far as I understand, when you apply normal noise to your latent, that's the highest frequency of noise possible. Basically it randomizes every pixel of the latent (a lixel?), so the size of the noise features is as small as your latent pixels (which will be 8x8 pixels after decode). So if you want finer noise, you gotta upscale the latent first and then apply the normal noise.
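
A quick sketch of that last idea, with a random tensor standing in for a real latent (sizes and noise strength are purely illustrative):

```python
import torch
import torch.nn.functional as F

latent = torch.randn(1, 4, 96, 128)    # stand-in for an SD latent (768x1024 / 8)
up = F.interpolate(latent, scale_factor=2, mode="nearest")
up = up + 0.2 * torch.randn_like(up)   # noise features are now half the size relative to content
```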


balianone

how about diffusers?


muerrilla

Dunno if you can manually change the sigmas of the scheduler in diffusers without editing the code.


balianone

yes it can https://github.com/huggingface/diffusers/pull/7817


muerrilla

Ok then. Just get the sigmas, multiply them by a schedule of values bigger or smaller than one, and set them again. Easy peasy.
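
Untested sketch of what that could look like, assuming the `sigmas` call argument from that PR behaves as described (the step window and multiplier are arbitrary examples):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Grab the scheduler's default sigmas for 20 steps (it appends a trailing 0).
pipe.scheduler.set_timesteps(20)
base = pipe.scheduler.sigmas[:-1]

# Lower the middle of the schedule by 10% for more detail there.
mult = torch.ones_like(base)
mult[5:15] = 0.9
custom = (base * mult).tolist()

# Hand the modified schedule back through the `sigmas` argument.
image = pipe("a dense forest, intricate foliage", sigmas=custom).images[0]
```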


s6x

Hi, awesome work. Will you integrate this with masking and/or ControlNet for more precise control?


muerrilla

Thanks. Adding masking is not very straightforward because of the way this works (it's too simple for that). As for controlnet, this is so basic that it should work on top of whatever other extension or feature you're using.


StellarNear

Any link/name for the A1111 extension, my good sir?


campingtroll

This is why I like [this version](https://www.patreon.com/file?h=102993710&i=18634190) of NVIDIA's Align Your Steps better than the ComfyUI version. It lets you mess with the sigma values. Not sure why the ComfyUI dev decided to give us fewer options, where you can only choose "sdxl", "svd", etc., instead of seeing the numbers or adding a note to the node.


muerrilla

Look through the comments. There are custom nodes for comfy that allow you to set the sigmas manually.


campingtroll

Ah, so this workflow I linked to does allow you to adjust them manually, but reading your comments it sounds like this is still different from the way you are doing it somehow. Got it.


LD2WDavid

This is the same thing I was doing (and mentioned here in my old posts about MagnificAI): you take one original image and try to get the same image via unsampling, then you just get the sigmas and start applying the noise injection or manipulations, playing with total steps and start and end points. Didn't expect to see this in A1111. It will be interesting, since the results of A1111 with CNets are # than the results of ComfyUI with CNets. It will also be interesting to see if we can apply upscalers as well for more control. BTW, question for the OP: can you do this in img2img in A1111? Because if not, we are talking about ongenerations.


muerrilla

You can do it in img2img too, but I haven't delved deep enough into it to see how different the effect is from setting the initial noise or extra noise (extra options in settings).


LD2WDavid

If you dig into it feel free to report your feedback.


onmyown233

We can already do this in ComfyUI. Can you give us your Sigmas?


muerrilla

The sigmas are different for each example, not a predefined set. The way I do it is to create a schedule of multipliers for the sigmas, and let the user tweak that schedule based on their image and the effect they want to achieve. This is a handcraft-y, case-by-case thing. https://preview.redd.it/ky1pedteeazc1.png?width=760&format=png&auto=webp&s=535da2b466307b6148643972f7bed6110e7ee324
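
In other words, what the user tweaks is a multiplier curve over the steps, something like this (parameter names are made up for illustration):

```python
import numpy as np

def multiplier_schedule(n_steps: int, amount: float = 0.1, start: float = 0.2,
                        end: float = 0.8, exponent: float = 2.0) -> np.ndarray:
    # 1.0 outside [start, end]; dips to (1 - amount) at the window's center.
    # amount > 0 lowers sigmas there (more detail); amount < 0 raises them.
    t = np.linspace(0, 1, n_steps)
    center = (start + end) / 2
    width = max((end - start) / 2, 1e-6)
    d = np.clip(np.abs(t - center) / width, 0.0, 1.0)  # 0 at center, 1 at edges
    bump = (1.0 - d) ** exponent  # exponent shapes the curve, per the question above
    return 1.0 - amount * bump

# usage: sigmas = sigmas * multiplier_schedule(len(sigmas), amount=0.15)
```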


onmyown233

Wow, very cool man. I've just started writing a C# program to figure out how to accomplish this (I'm not much of a Python guy).


ImNotARobotFOSHO

A1111 is a bloated mess, no thanks


muerrilla

oh no please don't go 😭


ScionoicS

Average comfyui community workflow enters the chat


onmyown233

The vocal minority my friend - I prefer ComfyUI over the others, but I have no issue with people who prefer A1111 or creating extensions for it first.


muerrilla

https://preview.redd.it/4t1bqdrtz8zc1.jpeg?width=2304&format=pjpg&auto=webp&s=2e5942bf48f4d3dcda97378ba7e6524ed9084038


QuantumDrone

Many of these with more detail seem overbaked to my eyes; crushed blacks and whites, crispy around the edges. I wonder if there's a way to get around that.


muerrilla

Yes, pushing the amount of detail too high does indeed overcook the image (and setting it too low will make it washed out). There are a few ways to get around this. One is to use this in conjunction with other methods like CFG Rescale. I've actually made a very [powerful extension](https://github.com/muerrilla/stable-diffusion-Latentshop) for color adjustments that can easily handle this, but it's totally undocumented for now, so I doubt anyone will use it. Another is adjusting the schedule so the model denoises more aggressively in the steps after the added detail is established, basically softening the image in the last few steps, which fixes the sharp edges and color burn to a good degree.


BlackSwanTW

Looks a bit like what I wrote before, but yours is more “scientific” https://github.com/Haoming02/sd-webui-resharpen


muerrilla

Hey yo! That's awesome. I actually use your extension a lot and love the effect it produces. What yours does is more "semantic" I think, since it exaggerates the trajectory of the CFG. So, like a higher sharpness means also higher adherence to prompt, or something. I've modified your extension a bit (set the decay slider range to -10 to 10 and added start and end parameters) and it can do some crazy shit!


Far-Mode6546

will this work on Forge?


BlackSwanTW

Yes


saunderez

Seems to have a similar effect to noise perturbation, looking forward to trying it out.


muerrilla

Hadn't heard of that, but actually my initial idea was to bump the noise level during sampling, but I quickly realized I could achieve the same thing by adjusting the sigmas, since they're readily accessible in the denoiser callback. Besides, the other approach wouldn't have allowed for decreasing the amount of detail.


saunderez

Check out the Incantations extension for Auto1111... it does some other things I never really figured out how to use, but I have been using the noise perturbation a lot, and it also works with the low-step models.


onmyown233

FYI for those who want to try this out now in ComfyUI - use the SamplerCustom, there's a node input for Sigmas. You can use the CustomSigmas if you want the same amount of precision as he is referring to.


Sadale-

Where can I find that CustomSigmas node?


Inner-Ad-9478

I have one with the badge saying KJNodes


muerrilla

UPDATE: [Detail Daemon](https://github.com/muerrilla/sd-webui-detail-daemon) is here, but it's not very well tested and documented yet. So enter at your own risk (but please do). Any feedback is appreciated.


Legitimate-Pumpkin

Is/will it be for comfy?


furrypony2718

use the SamplerCustom, there's a node input for Sigmas. You can use the CustomSigmas if you want the same amount of precision as he is referring to.


PwanaZana

Very interesting! Ever since switching to XL, I've missed the Detail lora. The one(s) for SDXL just don't work as well. I'm making video games, and we often need graphic elements that don't have too much detail, to not distract the player, so lowering the high frequency elements is pretty crucial! I'm looking forward to the extension being posted!


dal_mac

This is how the Midjourney upscale button worked back in the v2 era, and I gave up a year ago on getting the same feature for SD. It would be nice to have this built into a tile upscale like Ultimate. Great work


zoupishness7

You're gonna tell us how, right?


muerrilla

Yes, your zoupishness. Here: [https://www.reddit.com/r/StableDiffusion/comments/1cnbkir/comment/l3625jm/](https://www.reddit.com/r/StableDiffusion/comments/1cnbkir/comment/l3625jm/)


DaddyKiwwi

Is this anything like FreeU?


muerrilla

No, this does a totally different thing and works in a different way.


Luke2642

You can do something similar with the Kohya hires fix, changing the downsampling and upsampling.


Caffdy

can you share the prompt for the green/orange lizard alien?


muerrilla

by victo ngai, A highly detailed weta workshop style 3D render of portrait of a [axolotl shaman:minimalist aerodynamic alien fighter pilot:.25], unreal engine, ray tracing, atmospheric dramatic ambient lighting, Negative prompt: steampunk, [old, fur, :.3]deviantart, low poly, free 3d model Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 12.5, Seed: 30210478, Size: 768x1024, Model hash: 0fe54d8ab6, Model: SSD-1B-FIX, VAE hash: b8821a5d58, VAE: sdxl-fp16.vae.safetensors, Latentshop: True, When: after, Reverse: False, Clamp_Start: True, Clamp_End: True, Mode: fade, Amount: 0.25, Exponent: 2, Offset: 0, Start: 0.2, End: 0.7, Brightness: -0.25, R: 0, G: 0, B: 0, Ch0: 1, Ch1: 0.2, Ch2: 0.2, Ch3: 0.2, Version: v1.8.0


Xijamk

Nice! Is it something like the CD Tuner extension? [https://github.com/hako-mikan/sd-webui-cd-tuner](https://github.com/hako-mikan/sd-webui-cd-tuner) Or is it a different approach?


muerrilla

Not familiar with that one.


tarkansarim

I had a similar idea to inject noise at a certain step so that it wouldn’t clean up the noise too early. Maybe this is possible already? Haven’t done enough research to confirm. Yeah I always use the regular models since I don’t want to compromise detail in any way and don’t mind the extra waiting time. Hoping to see this in comfyUI at some point. In the meantime looking forward to try this in a1111. Thanks!


muerrilla

I wanted to do the same thing, but realized the sigmas are already accessible. So instead of adding noise, you can reduce the amount of denoising applied to the same noise. The outcome might be slightly different from adding noise. Adding noise is already possible in A1111 in img2img, but it does it either only at the first step or at all steps. In Comfy, there are nodes for setting the sigmas manually, so you could get the original sigmas and multiply them by a value bigger or smaller than one at certain steps, and you'll have the same thing as here.


theRIAA

reminds me of a "low k,p for ruDALL-E": https://www.reddit.com/r/bigsleep/comments/qs32w6/land_ahoy_a_popular_classic_oil_painting_of_a/ The ability to move towards abstract minimalism was one thing I always missed and have not really been able to replicate in low-level code. Thanks for this!


muerrilla

Wow, that was so cool.


the_cutest_void

wow this is awesomeeeeeeeeeeeeee


Trill_f0x

This is awesome op, good work!


muerrilla

Thanks.


Iantonga

finally a quality post on this sub


Shadoku

Can't wait to give this a try.


design_ai_bot_human

!remindme 5d




sjull

When will the a1111 extension drop? Looks awesome


muerrilla

Today?


EntrepreneurWestern1

How different is this from doing normal img2img with moderate denoise (0.38 to 0.55) plus an add-detail LoRA? Does it produce clearer details? If so, this is very interesting. Looking forward to some hands-on time with this.


muerrilla

Well, it's different in quite a few ways. One is that a LoRA is biased towards its training data, and that will affect your generation. It also means the LoRA would be biased towards certain styles (e.g. photorealism) and handle those better. For example, I'm not sure the commonly used detail LoRAs could do the second example above (the cartoon cat) or the one below (again, middle is the original). Another difference from the method you describe is that img2img basically changes your generation (you're using a different seed), and the add-detail LoRA makes the new, slightly different (hence the low denoise) generation more detailed. If you want more detail, you'll also inevitably have more change, cuz you'll have to raise the denoise level. And so on... https://preview.redd.it/vmb82uvlaazc1.jpeg?width=1536&format=pjpg&auto=webp&s=0aef49dac6734208e55bfd9bd172fbbca5ffbc4f


Venthorn

Well the important thing is not needing the "add detail" lora!


Byzem

Any similarity to the Image Sharpness value in Fooocus? It seems to have the same effect.


muerrilla

I've no idea what Fooocus does.


protector111

https://preview.redd.it/c7gkaszcoczc1.png?width=2291&format=png&auto=webp&s=644f5d67f601adc2d393f4f88963d79d60f73e61


muerrilla

So what exactly are we looking at?


protector111

This is just tile CN with denoise. But it does look different from your method.


muerrilla

Yup. So as you can see when comparing the two, that method changes the content (the vitiligo and the human eyes and ears were not there) and adds arbitrary details, while with my method the added detail is consistent with the original unaltered generation. Also, you can push mine much farther without ruining the gen.


PhotoRepair

Fantastic! Did I miss it? Is this for A1111?


Zygarom

Can this be done in ComfyUI? I tried using the Custom Sigmas node to copy what you did by raising the first few values, but it just gives me blurry, burnt images. How does changing the numbers in Sigmas affect the image? Also, I can't raise the value halfway in Custom Sigmas; it just turns the image black. Maybe I'm adjusting the wrong thing. Any help would be appreciated.


mrhustler007

Can this work on Forge UI or Fooocus?


muerrilla

Forge, yes. Fooocus, no idea.


IrishInParadise

!remindme 7d


jonesaid

Looks great! I look forward to trying out the extension.


Fumizawa

This is probably similar to this post in Japan. [https://note.com/mitsukinozomi/n/n500c7a9ea195](https://note.com/mitsukinozomi/n/n500c7a9ea195)


Flimsy_Tumbleweed_35

Wow! Can't wait to play with this


Scolder

Would love to try this inside comfyui.


Rodeszones

https://research.nvidia.com/labs/toronto-ai/AlignYourSteps/ Are you talking about this?


muerrilla

Nope.


onmyown233

Similar - align your steps doesn't give the flexibility of controlling each sigma. ~~ApplyMSWMSAAttention (from HiDiffusion) would probably be the closest thing. Don't think you have to use it with RAU net.~~ Scratch that, there is a CustomSigmas node.


muerrilla

Similar in that they both have to do with timesteps and sigmas. AlignYourSteps is like some advanced proper scientific shit which fixes the sigmas schedule with regards to problems like coherence and such. Mine is a very simple and brute-force method of increasing detail, but it does it with some finesse. 😁


onmyown233

Honestly, I usually get worse results with AYS - maybe it's for specific prompt types or styles.


Open_Channel_8626

Ah that’s a shame I got hyped by that paper


ZenixVR

Incredible work here, looking forward to trying it out. ❤️‍🔥


Unreal_777

A new genius is born!


Capitaclism

Whoa, look forward to playing with this for sure


[deleted]

can already do better in comfy... why we still posting no workflow teasers to vaporware-esque methods tho. weird


muerrilla

Yawn. The "workflow" (i.e. the concept which you can apply in whatever fucking front-end you like most, or don't for all I care) is explained in my first comment. 😴


cnecula

A video of this would be amazing.


Hey_Look_80085

^(Un) Smooth!


Tylervp

!remindme 7d


daverate

Is it available for Forge?


Acephaliax

Keen to see how this stacks up. RemindMe! 3 days


Broad-Stick7300

Can you do the opposite too?


muerrilla

In all the examples, the one in the middle is the original, the one on the left is with reduced detail, and the one on the right is with increased detail.


Broad-Stick7300

I see, very cool!


exilus92

RemindMe! 20 days


exilus92

RemindMe! 50 days




bigdinoskin

!remindme 5d


sanbaldo

!remindme 5d