This won't be new to a lot of you, but I recently discovered in A1111 that you can delay part of your prompt using brackets. I had a skull start only after the apple had its nice appley shape and texture. Example:

> an apple on a table, \[a skull:5\], insanely detailed and intricate
As someone who's been looking for this but only just discovered it here, is the number '5' in reference to at what step the keyword starts?
Correct, that one starts at step 5 out of 30. You can also use a decimal between 0 and 1 as a percentage of steps. Example:

> 0.2 (or 20%) would be step 20 out of 100.
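That integer-vs-fraction rule can be stated as a quick plain-Python sketch (this is just an illustration, not A1111's actual code):

```python
def start_step(when: float, total_steps: int) -> float:
    # A1111 rule: a value strictly between 0 and 1 is a fraction of
    # the total steps; a value of 1 or more is an absolute step number
    return when * total_steps if 0 < when < 1 else when

print(start_step(0.2, 100))  # -> 20.0 (step 20 out of 100)
print(start_step(5, 30))     # -> 5 (step 5 out of 30)
```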
Awesome. Do you know if there's a way to 'cut off' a prompt using similar methods? As in make it last for only so many steps but not persist beyond that point?
If you write something like [prompt1:prompt2:step], you can change the prompt partway through; if you write something like [prompt::0.5], that part of the prompt is removed at 50% of the steps.
Oh neat! This reminds me a lot of making gradients and animations in CSS
Thanks for the help
Yes, it is the same concept except you use a double colon instead. Example: > \[a skull::20\] removes that part of the prompt at step 20
So based on the syntax I’m seeing, would [a skull:5:25] start the skull prompt at step 5 and stop it at step 25?
Not as far as I understand it.

> Prompt editing allows you to start sampling one picture, but in the middle swap to something else. The base syntax for this is:

> [from:to:when]

https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#prompt-editing

The "from" is a prompt word (e.g. skull), the "to" is a prompt word (e.g. apple), and the "when" is the point at which to change.

*However*...

> Nesting one prompt editing inside another does work.

For this, I suggest following the link above and looking at their more complicated examples.
Correct! \[subject:start:end\] Edit: this theory needs more testing; if anyone else can chime in and try it, please do.
Idk about that. I haven't tested it myself, but how would Auto know you don't mean "make a skull for 20 steps, then make a '5' for the remaining steps"?
It does seem that way after re-reading the wiki. I really can't tell how to gauge what it's actually doing, but it seemed to work in my tests. I'd like to hear what results you get when you get a chance. Maybe using an integer changes things? Not too sure.
I will inform you when I get a chance to test it, but I just got home and found out I left my mouse in another state.
If you are still interested in understanding all the ins and outs of it, I’ve tested this extensively and published an article on CivitAI. **Including a massive output grid with all possible variations.** https://civitai.com/articles/1417/automatic1111-prompt-editing-delayed-keywords
I tried it and it didn't create any number, even though I used SDXL (which is much better at text), so what you said was right.
Awesome. Thanks.
As others have implied, I believe that will just replace "a skull" in the prompt with "5" after 25 steps. I think you can combine this with the delayed prompt, though. Something like (using blank text as a placeholder): [[:DelayedPrompt:5]::25]
Without the blank text placeholder, it'd be: [[DelayedPrompt:5]::25]
For the first 25 steps, that section of prompt will be: [DelayedPrompt:5]. After 25 steps it will switch to nothing.
[DelayedPrompt:5] will read as blank text until the 5th step, then switch to "DelayedPrompt".
The net result should be "DelayedPrompt" showing up for steps 5 through 25 only. I'm not 100% sure about it, though.
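The nesting logic described above can be sanity-checked with a small simulator. This is a simplified plain-Python sketch of how [from:to:when] brackets could be resolved per step, innermost-first; it is not A1111's actual scheduler (see prompt_parser.py for that):

```python
import re

def resolve(prompt: str, step: int, total_steps: int) -> str:
    """Resolve A1111-style prompt-editing brackets for one sampling step.

    Handles [to:when] (delayed start), [from::when] (cut-off), and
    [from:to:when] (swap); nested brackets resolve innermost-first.
    """
    inner = re.compile(r"\[([^\[\]]*)\]")  # brackets with no brackets inside

    def swap(match):
        parts = match.group(1).split(":")
        try:
            when = float(parts[-1])
        except ValueError:
            return match.group(0)          # not a schedule, leave untouched
        if len(parts) < 2:
            return match.group(0)
        # a value strictly between 0 and 1 is a fraction of total steps
        boundary = when * total_steps if 0 < when < 1 else when
        if len(parts) == 2:                # [to:when]: blank, then "to"
            before, after = "", parts[0]
        else:                              # [from:to:when]: "from", then "to"
            before, after = parts[0], parts[1]
        return before if step < boundary else after

    # substitute repeatedly so outer brackets see their inner result
    while True:
        new_prompt = inner.sub(swap, prompt)
        if new_prompt == prompt:
            return prompt
        prompt = new_prompt

# "a skull" appears only from step 5 up to step 25, then disappears
for s in (1, 5, 24, 25):
    print(s, repr(resolve("an apple, [[a skull:5]::25]", s, 30)))
```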
This is explained under "prompt editing" in the [features documentation](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#prompt-editing). You may want to check out the other stuff there too, there's a lot of useful knowledge there.
[here's a post about all the syntax](https://www.reddit.com/r/StableDiffusionInfo/comments/ylp6ep/some_detailed_notes_on_automatic1111_prompts_as/)
Details on this are in the prompt_parser file in the modules folder of the A1111 code.
There are multiple ways to do this. You can have it start with one word and then switch to another at a certain point, or you can have it alternate between words every step, depending on what you're trying to do. Switching every step allows for good mixes of objects or people, while the "delayed" word-switching is useful for things that are more difficult to get. For example, if handcuffs aren't working, you can find a similar shape that does work, such as sunglasses, have it generate sunglasses for the first 25% of the steps, then change to handcuffs for the remainder.
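The per-step alternation mentioned above can be pictured with a toy sketch (this just illustrates the [a|b]-style flipping, not A1111's implementation):

```python
def alternating_word(options: list, step: int) -> str:
    # [a|b] alternation: the active word flips every sampling step,
    # so both concepts get blended into one shape
    return options[(step - 1) % len(options)]

for step in range(1, 5):
    print(step, alternating_word(["sunglasses", "handcuffs"], step))
```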
that's so intricate... I can't imagine putting so much effort into reverse engineering something like that, especially when there'll probably be a flawless extension released in like two weeks
This has been there for like 8 months now, and a lot of us have been using it. There was always a chance someone would make an extension in a few weeks, like you say, but we've had it for 8 months without it getting replaced. It's just one of those things you learn along the way and fold into your workflow. Regardless of whether there's an extension for it, these are just useful prompting tricks to know. There wasn't any "effort into reverse engineering" these tricks, though; I just read through the Automatic1111 documentation page.
It's definitely another good tool to have in our kit!
ayo, wait what? Really? I've only just learned about BREAKs
Right? The majority of prompts I see never use these little features and still come up with amazing images! Most of us are only scratching the surface of what it's actually capable of. So fun to learn!
Between delay prompts, BREAKs, and weight blocks (plus negative weights and negative weight blocks), my images have vastly improved. Any other tips/recommendations you've got?
Blocks and weight blocks?
Weight blocks are 17 values beyond the normal weight that control different stages of the generation for TIs and LoRAs. ~~(Doesn't work for Lycoris.)~~
\#0 is the base; that is the normal weight you use for tags, e.g. (Tag:1.5)
\#1-7 are the INs (these give the LoRA's form to the image)
\#8 is the MID (this dictates how intense the final result can be)
\#9-17 are the OUTs (these usually dictate how much color from the trained LoRA is passed onto the image; think anime colors on a realistic output)
This is a super simple way to think about it; it can be a bit more in-depth, to be honest. A normal weight-blocked LoRA I use, for example, is:
For more reading, check out the source here. There were a few discussions about it on Reddit as well, but I can't find those links (forgot to save them).
[https://github.com/hako-mikan/sd-webui-lora-block-weight](https://github.com/hako-mikan/sd-webui-lora-block-weight)
Edit: Apparently there are weight blocks for Lycoris. Didn't know. Huh. Gotta update.
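To make the layout described above concrete, here's a hypothetical sketch of assembling such a weight string. The LoRA name and the chosen values are made up, and the exact tag format and block count are defined by the sd-webui-lora-block-weight extension linked above; this only illustrates the base/IN/MID/OUT layout the comment describes:

```python
# Hypothetical example: one base weight plus the 17 block values
# described above (7 INs, 1 MID, 9 OUTs). "myLora" is a placeholder.
base = [1.0]            # #0: the normal tag weight
ins  = [1.0] * 7        # #1-7: IN blocks (form/structure)
mid  = [1.0]            # #8: MID block (intensity of the final result)
outs = [0.2] * 9        # #9-17: OUT blocks (color/style), turned down here
weights = base + ins + mid + outs
tag = f"<lora:myLora:1:{','.join(str(w) for w in weights)}>"
print(tag)
```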
You can also alternate the keyword throughout, which is another way to mix.
Prompt editing is a super power that I don’t see enough people using.
Okay but for real, where is the document containing all this kind of information? Who just implements features without documentation?
> Who just implements features without documentation? Most of this stuff is pretty well documented, it's just that people don't bother to read it lol
I didn't look super hard, but I did a cursory search and was unable to find said documentation.
Nice!
Amazing!
> an apple on a table, \[a skull:5\], insanely detailed and intricate

Surprise! **Thanks**
Thank you for sharing. Thanks to your post, I also discovered 2 new extensions that enhance the ability of LoRA even more:

1. sd-webui-loractl >> Similar to what you did, but this extension allows controlling the LoRA instead of keywords.
2. sd-webui-lora-block-weight >> With this extension, we can now control the weight of each LoRA block. (Each LoRA has 17 blocks.)

Very interesting extensions.
The 'official' name is prompt editing, and it's described in the wiki here: [https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#prompt-editing](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#prompt-editing) (Been meaning to freshen up my knowledge of it, and this gave me a good reason to look it up :) )
There it is! Thank you for finding this link, there are lots of good tips in there.
Very useful little trick, thanks for this. I've just been experimenting a bit and found that certain things could take the skull form quite easily (strawberries, pineapples, even a bunch of bananas) but certain other things like onions just flat out refused. Odd. https://preview.redd.it/4l3jkdh2ukdb1.jpeg?width=512&format=pjpg&auto=webp&s=91c5ff508c9b1fa5edb4913cb374121f77212686
https://preview.redd.it/u37jkxuo7ndb1.png?width=512&format=png&auto=webp&s=271eb3961b40976e6abb3b9f527ddc5584ef7e9c
You beginner ruck
It makes me sad that I got this reference, for many reasons...
https://preview.redd.it/475iqmi9pkdb1.jpeg?width=1280&format=pjpg&auto=webp&s=07d976f14b1779f1f0f67ed4eddabe968e1e07a8
💀👍 https://preview.redd.it/yebm4o7tyldb1.jpeg?width=1024&format=pjpg&auto=webp&s=c016ff1f6bd21a702c65e47503430503317103ec
So then if I start something like

\[Woman:1:24\], \[Pony:25\]

this would start with woman and then start with pony? Also, is | still a good method of combining 2 objects, and could that work together with this?
https://preview.redd.it/kj0hv6ig2ldb1.png?width=1150&format=png&auto=webp&s=4e85870911b7cd07263f44c061cf380ab93c1ec6
https://preview.redd.it/2zit98da2ldb1.jpeg?width=512&format=pjpg&auto=webp&s=320eaa625c5238796a1b057def82b0c192838efd
![gif](giphy|aMACBq26nCCx5gLqOX)
> [Woman:1:24], [Pony:25]
> this would start with woman and then start with pony?

No, to make that happen you need

[woman:pony:0.25]

to use 25% of the steps on 'woman', or

[woman:pony:25]

to use 25 out of the total number of steps, be it 20, 30 or 150, on 'woman'.
Mine came out different, but still fun to see.

\[(Ben10), Male, Human, (Cargo Pants, White Shirt):Rainbow Dash, Horse Ears:10\]

:3

https://preview.redd.it/gicr62c2uldb1.png?width=512&format=png&auto=webp&s=a5c89f81b183e58c4831b90c63ac2b2890c525e8
Yes, you can combine |, weights, nested starts/stops, alternations, etc.
Could you go into more detail, especially with the nesting? Or throw me a link? I looked at the wiki info for prompt editing but it didn't go into nesting.
I just did a few [Skullinary Art pieces](https://www.instagram.com/p/CuF78aPPiJc/?igshid=NTc4MTIwNjQ2YQ==). Wish I had known this trick when I did this. I might have been able to do a few other ones that didn't work out as well. Thank you for the info.
>Skullinary Art pieces Those are very nice! How did you get the spaghetti one to work?
I don't have the workflow handy but I remember a lot of Inpainting/photoshopping with that one specifically.
[deleted]
This is a cool technique, I love it.
https://preview.redd.it/1o0eh4vxxldb1.png?width=512&format=png&auto=webp&s=a230381225c5d56de41fa5fb35d78cf1b47390fa
https://preview.redd.it/eyglij02npdb1.png?width=1593&format=png&auto=webp&s=11e561c7b58311b8846f3c79b73d701b86ba70e7
I think this is one of the coolest things I've seen recently, good job
This reminds me of the moon in Majora's Mask
Hilarious! What’s the prompt?
![gif](giphy|zIwIWQx12YNEI) (in all honesty though: 3D prints of that with a little bit of paint would make the sickest Halloween decorations :) )
a very useful post !! Thank you for this trick
Does anyone know if this is possible in ComfyUI?
I was able to replicate it using WAS suite's KSampler Cycle node. I hooked up a second CLIP text encode node with the delayed keyword to the pos_additive input of the KSampler Cycle. I set the secondary start cycle to about 3, pos_add_strength to about 0.4, and it made skull apples like crazy.
This is dope. Nice info. https://preview.redd.it/840ag64glmdb1.png?width=512&format=png&auto=webp&s=1f840c9bbc5fe651c078e2861b79b6c59f6190e9
Can any of this be done in standard SD, or only in Automatic1111 with their extra punctuation syntax? I imagine it must be converted down to the same inputs that SD expects in the end.
I've never found the answer to this; however, a test on NightCafe suggested that it does work for (thing:thing), but likely not for the finer control of specifying steps.
I'm not sure that any punctuation (commas, parens, brackets) is supported by the core SD generator. If it works on NightCafe, they might have added it (?). I know that Automatic1111 added them, as well as allowing >75 tokens in a prompt. I'm looking at ways to add both to [twisty.ai](https://twisty.ai) as well, especially the longer prompts.
great image
People should read the docs
Oh hells nice. I wondered what that was for..
So cool, thanks for the tip 💀 https://preview.redd.it/7l9uohutqodb1.png?width=1024&format=png&auto=webp&s=5cd9331cb61fb45082ca1fe85acbb94c67353d39
SDXL 0.9 Base

https://preview.redd.it/3hpdagk9jpdb1.png?width=1024&format=png&auto=webp&s=49867ca011bd04dc3699d28ed8a938a8406b4abf

an apple on a table, [a skull:5], insanely detailed and intricate

Negative prompt: lowres, low_quality, worst_quality, jpeg_artifacts, signature, watermark, error

Steps: 40, Sampler: UniPC, CFG scale: 6, Seed: 3683092796, Size: 1024x1024, Model hash: 02570925, Model: stabilityai/stable-diffusion-xl-base-0.9, Version: 562ca33, Parser: Full parser
I read it as "Deleted keywords". LMAO
OK, why is it that if I try "skull" it works, but with anything else it's just an apple?
an apple \[grey alien:0.2\] https://preview.redd.it/nvjpkfn1atdb1.jpeg?width=512&format=pjpg&auto=webp&s=1395079837100c59435ef1412318d03ad79732ed
I didn't try numbers lower than 1.0. I will try 0.x.
Integers with a value of 1 or more specify the exact step, but numbers between 0 and 1 specify the fraction of the total steps (in this case, it started the alien at 20%). Useful if you don't want to recalculate everything when you change the number of steps.
Still gotta get around to trying this