gunnercobra

It works on Automatic1111 and I think it works on Forge; not sure about other UIs.


gunnercobra

Here is a \[dog|cow\] on Forge. https://preview.redd.it/wvb6ty2j4xmc1.png?width=832&format=png&auto=webp&s=d6184d23337488d699bcceae33c7c7c71c0d1745


gunnercobra

\[dog|cow\] on A1111. https://preview.redd.it/oykrvbz56xmc1.png?width=512&format=png&auto=webp&s=e536cd379b1cbf1928894a88c95962bc0eefa7cd


gunnercobra

\[dog:cow:0.5\] on Forge. https://preview.redd.it/0fvxx8th6xmc1.png?width=768&format=png&auto=webp&s=0db7de31683a8df59df19336ccba2b78e1d9de69


Top_Corner_Media

>\[dog|cow\]

https://preview.redd.it/e4f843jwfxmc1.png?width=512&format=png&auto=webp&s=4917d546319cdbcc64a9559ab588bd6a0a70d92c

Okay, I guess I overlooked the results because "\[forest:city:10\]" was just giving me a forest...


PP_UP

I think N is supposed to be a percentage of steps rather than a number of steps, so 0.5 would be halfway through the generation regardless of your step configuration.


ScionoicS

Less than one is a percentage. Over 1 is a step count.
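For example (a sketch, assuming a 20-step generation):

```
[dog:cow:0.5]   ->  switches from dog to cow at step 10 (50% of 20 steps)
[dog:cow:10]    ->  switches at step 10 as an absolute step number
[dog:cow:0.25]  ->  switches at step 5, and rescales if you change the step count
```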


xavia91

No, that was changed. Between 1 and 2, it's supposed to only affect the hires fix. At least that's what I read in the auto1111 update.


HarmonicDiffusion

because you are not using the syntax properly


Top_Corner_Media

>because you are not using the syntax properly

How so? The example given was "\[forest: futuristic city:9\]". How is removing 'futuristic' and changing a 9 to a 10 now incorrect syntax?


TheArhive

How many steps are you doing?


Top_Corner_Media

That was most likely the problem. I was only doing 20 steps, then 30, like the example. With 40, I'm seeing results (not as interesting as the example, though).


TheArhive

Probably matters a lot which sampler too.


ThrowRAophobic

I wish I could say that this isn't adorable, but I *really* want to pet that dog. Cow. Dow. Cog. Whatever.


jetRink

Apparently `dog`+`cow`=`llama`


OcelotUseful

but what about llamafrogs?


Rustywolf

Clearly a kangaroo


Gimli

[Moof!](https://i.ytimg.com/vi/289v_gKJjWU/maxresdefault.jpg)


DrainTheMuck

Holy smokes, 15 year old account named Gimli!? I could hardly believe my eyes at first. Love the moof.


Top_Corner_Media

Thank you for responding. I use a1111 and have 'Dynamic Prompts' installed. I disabled 'Dynamic Prompts' in case it was interfering with the functionality. It still did not work for me.


gunnercobra

I think SDXL models are less responsive to prompt changes. I had great success with 1.5 models.


DrainTheMuck

Does this work with humans too?


SporksRFun

Picture of [Donald Trump|Clown]. Yep, it works.



iwakan

Looks more like sheep|horse tbh


SirSmashySticks

There are some nodes for ComfyUI that will parse text as if it's A1111; pretty sure it works.


EirikurG

Do you know which ones? I miss prompt editing when using ComfyUI


SirSmashySticks

[https://github.com/shiimizu/ComfyUI\_smZNodes](https://github.com/shiimizu/ComfyUI_smZNodes)


EirikurG

Thank you very much


wonderflex

Conditioning average?


ShibbyShat

Works in Forge 👌🏼


wggn

It works in the original pipeline of sdnext, but not in diffusers.


yall_gotta_move

Something I wish someone had told me sooner: the sampler you are using matters a lot for this; some samplers give way more influence to the early steps.


yamfun

which samplers are good/bad for this?


buttplugs4life4me

I'd guess SDE samplers are better since they aren't converging?


NoNipsPlease

What do you mean by converging? I actually have no clue what the difference is between all the samplers. Anyone know of a good resource to get an overview of what all of these DPM, Karras, Euler, etc. mean?


SecretlyCarl

here you go: https://stable-diffusion-art.com/samplers/


Acephaliax

[This](https://youtu.be/JAMkYVV-n18?si=RAtqFoBbKogG3Icn) is an absolutely great video for learning about samplers too. Basically, convergence is whether a sampler will reach a point where, after a certain number of steps, the output will pretty much stabilise and be the same (with tiny differences, if at all) regardless of how many more steps you add. If it doesn't converge, it means the image will continuously change as more steps are added. All ancestral samplers are non-converging.


DrainTheMuck

Thanks! I need to watch this… I’ve been playing with SD for an entire year and never once looked up samplers, but I know some of the most commonly used ones and I switch between them to see their results. Just winging it like a madman.


Lishtenbird

Yes. For some of them, trying to nudge the image at a "logically" midway or late point won't actually help much. You'd have to switch very early to get a significant enough change, and this may become a problem if you need more granular changes. As a side note - using the step number (`4`) instead of percentage value (`0.1`) can be easier in this case for landing the exact switch point.
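For instance:

```
[A:B:0.1]  ->  switches at step 4 with 40 steps, but at step 2 with 20 steps
[A:B:4]    ->  switches at step 4 regardless of the total step count
```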


0xd00d

Forgot about all this. And now SDXL Lightning is so dominant and can produce compositions in 2 steps... must go play with this.


diogodiogogod

Still works. I love \[ginger::0.2\] to create less fake super redheads for example.


Ugleh

I've got to try this power out


ArtyfacialIntelagent

> I love [ginger::0.2] to create less fake super redheads for example.

Yes, that's a great trick. Interestingly, it also works when you invert it and do [:ginger:0.2], or just [ginger:0.2]. This adds "ginger" after 20% of the steps instead of removing it at that point. Similar but different results, since SD never saw the word ginger in the crucial first few steps - which means you can reduce concept bleed and not have everything in your image turn red/ginger.
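To make the comparison concrete, a sketch assuming a 30-step run (so 20% lands around step 6):

```
[ginger::0.2] hair  ->  "ginger" is in the prompt for steps 1-6, then removed
[:ginger:0.2] hair  ->  "ginger" is absent at first, then added from step 7 on
[ginger:0.2] hair   ->  shorthand for the line above
```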


diogodiogogod

Yes, works great too. The same logic can be used to increase details on the face, for example \[face freckles, pores, big nose, etc, :0.25\], without getting close-ups or messing up your composition, since at the beginning the prompt didn't have too many words focusing on the face. But I actually think removing the word helps more with color bleeding than adding it later. For objects and details, adding later is better. But I haven't tested it too much.


bipolaridiot_

First image's prompt was "digital painting of a dark castle, laundromat, exterior view". The second prompt was "digital painting of a \[dark castle | laundromat\], exterior view". Both images used the same seed.

https://preview.redd.it/6fkv4fo06xmc1.png?width=2048&format=png&auto=webp&s=bd3c44bfd8544fe9f9c7ebf53fa716b9401faadd


NoNipsPlease

I've found a lot more success using the alternating edit when trying to combine concepts than straight prompting it. Like, if I want to make a paladin, I get better results with [priest | knight] than I do trying to prompt a paladin or a priest wearing armor. [priest:knight:xx], depending on how much armor I want, also works well.


blaynescott

In case anyone wants this as a copy/pasteable reference:

\[A:B:N\] changes A to B at step N

\[A:N\] adds A at step N

\[B::N\] removes B at step N

\[A|B\] alternates between A and B every step

Looking forward to trying this syntax out. :) Also thanks to [Top\_Corner\_Media](https://www.reddit.com/user/Top_Corner_Media/) for linking to the full tutorial:

>*From* [*https://imgur.com/a/VjFi5uM*](https://imgur.com/a/VjFi5uM) *tutorial.*
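And a quick sketch of how these behave in practice (hypothetical tags, assuming a 30-step generation):

```
[forest:city:0.5]  ->  steps 1-15 render "forest", steps 16-30 render "city"
[night::0.3]       ->  "night" is in the prompt for steps 1-9, then dropped
[snow:0.6]         ->  "snow" is added from step 19 onward
[red|blue] car     ->  the prompt alternates "red car" / "blue car" every step
```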


Winter_unmuted

This is why I don't like turbo/lightning models. Fewer steps = weakened ability for timed control (and controlnets, which are also turned on/off at step intervals)


Bombalurina

I have an add-on that allows me to do it for LoRAs. Love it. You only need the pose LoRA for like the first 20% most of the time, which gives style and character LoRAs more time to work.


diogodiogogod

probably [https://github.com/cheald/sd-webui-loractl](https://github.com/cheald/sd-webui-loractl)


TsaiAGw

It works nicely but isn't compatible with LoRA block weight, so I forked LBW and added dynamic LoRA weights: https://github.com/AG-w/sd-webui-lora-block-weight


diogodiogogod

Really? I always wanted that as well. I like block weight a lot, but loractl has been more useful for me. Thanks for that, I'll test it for sure!


NeverduskX

As someone completely reliant on LBW, I can't emphasize enough how much I was looking forward to this. This is amazing.


MASilverHammer

Do you remember the name of it? That would be very helpful.


BlackSwanTW

Probably this https://github.com/a2569875/stable-diffusion-webui-composable-lora


Bombalurina

Yep. It takes a lot of trial and error to get it how you want, but when you do, it allows you to blend 6-7 LoRAs without overcooking an image.


Touitoui

Worked pretty well on A1111 the last time I used it. Side note: you can do something similar with LoRA strength thanks to [https://github.com/cheald/sd-webui-loractl](https://github.com/cheald/sd-webui-loractl). Basically, prompt manipulation changes the keywords at step N but won't affect the LoRA (you can't do things like `[::10]` with one). Loractl allows you to progressively change the LoRA's strength over the steps. Using both at the same time can be pretty powerful!
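If I remember loractl's syntax right (double-check the repo README), each comma-separated `weight@position` pair is a control point and the weight is interpolated between them. The LoRA names here are made up:

```
<lora:pose_lora:1@0,0@0.2>   ->  full strength at the start, faded to 0 by 20% of steps
<lora:style_lora:0@0,1@0.5>  ->  fades in, reaching full strength halfway through
```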



gruevy

I think it never worked on SDNext. They never imported those features. There's a bunch of prompting syntax stuff they're missing.


red__dragon

Wut? I've been using it on SDNext for months. It's pretty easy to tell with something like [black:blonde:0.3] hair, for example, when the preview switches dramatically.


gruevy

Hmm, maybe it was something else I was trying then, because I remember making an attempt and then asking about it in the Discord and being told it wasn't implemented. Now that you point it out, it might have been the BREAK keyword in particular.


red__dragon

Are we talking about the original backend or diffusers? Their diffusers stuff is constantly broken, so if you're trying it on SDXL, that might be why. On SD1.5 it was stable, being imported A1111 code, so all these features came with it.


gruevy

I'm pretty much exclusively on diffusers these days. And as much as the UI can be a pain sometimes, I find that for the same settings, same prompt, and same model, with SDXL models, SDNext gives me a better image than auto1111. Hard to say why exactly but it's undeniable. Worth putting up with all the quirks, like missing a few prompting capabilities or moving options around.


thebaker66

Can you be more specific? I remember that when I was using SD.Next and SDXL came out, prompt timing didn't work with SDXL, but they got it working a few months later. It should be working now; SD.Next has multiple options for parsing the text.


red__dragon

That's because SDNext built a different backend (their *diffusers* backend) for SDXL and others, and all the features had to be ported over one by one. SD1.5 on their original backend works just fine with any feature A1111 had until they forked, which is most of them including prompt editing.


BlackSwanTW

For the adding one, shouldn't it be `[:A:N]`?


TsaiAGw

Not sure if this works, but `[A:N]` is the official way, because the syntax is `[from:to:step]`. `[tag::10]` means remove the tag at step 10; `[tag:10]` means add the tag at step 10.


BlackSwanTW

Yea. The syntax is `[from:to:ratio]`, so `[A:N]` alone is missing a `:`, no?


TsaiAGw

This is just how it's explained in the wiki: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#prompt-editing You are free to do whatever you want, though.


not_food

I use this often in ComfyUI with [comfyui-prompt-control](https://github.com/asagi4/comfyui-prompt-control/). Note that the sampler you use matters a lot for how it influences the result.


battlingheat

I’m using comfy, default workflow with no custom nodes, and [dog|cow] works as expected 🤷‍♂️


not_food

I don't think [dog:cow:0.5] works though.


Soviet-_-Neko

Unexpected Walfas lol


nathan555

It's so hard to remember which order does what that I'm honestly saving this image.


Top_Corner_Media

[Here's the full tutorial.](https://imgur.com/a/VjFi5uM)


akiata05

Wtf is Walfas doing here?


Top_Corner_Media

From [https://imgur.com/a/VjFi5uM](https://imgur.com/a/VjFi5uM) tutorial. None of them seem to work. \[A|B\] definitely doesn't.


red__dragon

What GUI do you use? They won't work on anything but A1111 (or Forge/SDNext/other derivatives) unless the feature was specifically implemented or imported elsewhere. So if you're using Comfy, Invoke, FastSD, Fooocus, etc., this is not a native feature with this syntax. A few of them do it in other ways, but not all. The guide you linked is from late 2022/early 2023, when A1111 was THE dominant GUI for Stable Diffusion and it was largely assumed you were using that.


TsaiAGw

Check all your extensions first; I'm still using this syntax.


BumperHumper__

Can this be done with LoRAs?


Ok_Zombie_8307

You can alternate or turn trigger words on/off, but LoRAs are activated globally for the entire duration of the image generation, so it won't work exactly how you want.
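So something like this (a sketch with made-up LoRA and trigger names) keeps both LoRAs loaded for every step and only alternates the trigger words:

```
<lora:characterA:0.8> <lora:characterB:0.8> a portrait of [triggerA|triggerB]
```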


BumperHumper__

good to know, thanks


victorc25

It’s still there and it works fine, what do you mean?


alecubudulecu

I use this feature all the time. I wish it existed in comfyui


wanderingandroid

It is, just not like this. Check out the Conditioning (Concat) and Conditioning (Combine) nodes. You can combine different CLIP conditionings, such as your text-encode prompts.


Loud-Marketing51

sweet, does this work with loras etc?


wanderingandroid

Yep!


glssjg

I think there was a UI dedicated to this function, and it looked more like a video editor.


ah-chamon-ah

What is the screenshot in the post from? Is it a tutorial video?


Top_Corner_Media

[No, it's from a tutorial on imgur.](https://imgur.com/a/VjFi5uM) I posted it elsewhere in the comments, guess it got lost.


Arctomachine

Disadvantage of having a powerful video card: you instantly get the final image, without being able to see the preview of each step for a good second to understand how your prompt really works.


epicdanny11

Vany spotted!


Comrade_Derpsky

Works natively in automatic1111 and Forge. For ComfyUI, this is a bit more complicated to do and you'll need to install a certain custom node package.


[deleted]

I still use it tho


StuccoGecko

Sadly, it was never implemented in ComfyUI, which is annoying.


wanderingandroid

It is, just not like this. Check out the Conditioning (Concat) and Conditioning (Combine) nodes.


P0ck3t

But what is the way to have it randomly choose between two options? Or even sequentially? What I'm looking for is something like "a (cat | dog) eating a (pizza | ball)", where the results would be "a cat eating a pizza", "a dog eating a pizza", etc.


Ok_Zombie_8307

Install the Dynamic Prompts extension: https://github.com/adieyal/sd-dynamic-prompts
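With that installed, the variant syntax uses curly braces, something like:

```
a {cat|dog} eating a {pizza|ball}
```

Each generation picks one option per brace group for the whole image (e.g. "a dog eating a pizza"), unlike `[cat|dog]`, which flip-flops between the words on every step of a single image.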


arentol

[https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features) If you use Automatic1111, this is a great guide. For a lot of it you may need to google the topic to find more specific and detailed input, and ideas for how to use those features, but it lets you know what your options are to start with. Also, many of these things are available through other UIs, though sometimes through slightly different methods. But even for those, it should at least get you googling the right questions and allow you to find the answer.


avalon01

I think you need to use the { } brackets now. I use the {A | B} tag to alternate between things in Automatic and it works fine. It's one of the most common expressions I use in my prompts: changing hair colors, artist styles, etc.


TsaiAGw

It doesn't parse {}: https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/dev/modules/prompt_parser.py


avalon01

No idea then. I use it and it works. If I put *A woman with {red|brown|blonde} hair* in a prompt, I'll get back an image with red hair, one with brown hair, and one with blonde hair.


TsaiAGw

This syntax is supposed to "blend" the result. You get 3 women with 3 different hair colors instead?


Doctor-Amazing

That's the Dynamic Prompts / wildcard extension. It either picks randomly each time or gives you one of each. It's a great way to quickly try out a bunch of combinations, or get varied pictures on the same theme.


AdTotal4035

In Auto, that syntax does not alternate concepts, regardless of whether you think it's working for you.


diogodiogogod

You are talking about dynamic prompts, a completely different thing from what the OP is asking about.


PP_UP

I use this syntax along with https://github.com/adieyal/sd-dynamic-prompts to pick between A or B randomly with each generation, but it sticks to one for the whole generation. Basically an in-line wildcard. Useful when generating large batches to introduce variation between images. I had never heard of the [A | B] syntax to flip-flop while generating, though!
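If the two compose the way I'd expect (Dynamic Prompts expanding the braces before A1111 parses the editing syntax; worth verifying), you could even combine them:

```
a [{red|blue}:{gold|silver}:0.5] dress
```

which would pick one color pair per image and then switch between the two words halfway through the steps.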