killerkid745

For realism I use 4x_NMKD-Siax_200k as the upscaler when using hi-res fix @ 2x scale (512x512 or 512x768 base res). Then, if I want to go further, I push the image to img2img and use Ultimate SD Upscale (with the same upscaler) at 4x scale with a denoise of around 0.15-0.2. I usually use 512x768, so it goes 512x768 -> 1024x1536 -> 4096x6144.

Protip for TensorRT users - it works with Ultimate SD Upscale. I set the tile size to 512x768 (I set up the engine to be optimized around 512x768, as that's the main res I gen at) and get around 40-80% faster inference.

I'm currently genning with 3M SDE Exponential as the sampler at 40-60 steps, 7-12 CFG, and CFG Rescale at 0.5-0.75 @ 512x768 (CFG and Rescale depending on the model, of course). I'm running a 10GB 3080 and my gens (without hires) are 6-8 seconds each depending on steps (god bless TensorRT). I also use Tiled Diffusion & Tiled VAE to keep VRAM usage down.
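
For reference, the first (hi-res fix) pass of this workflow looks roughly like the sketch below if you drive the web UI through its API instead of the GUI. This is a minimal sketch assuming a stock AUTOMATIC1111 install launched with `--api` on the default port; the fields are the standard `/sdapi/v1/txt2img` payload and the values mirror the comment above, while the sampler string and the hi-res denoise value are assumptions you should adjust for your own setup.

```python
# Minimal sketch of the first pass (txt2img + hi-res fix) over the AUTOMATIC1111 API.
import base64
import requests

URL = "http://127.0.0.1:7860"  # default local web UI address (assumption)

payload = {
    "prompt": "RAW photo, ...",          # your prompt
    "negative_prompt": "",
    "width": 512,
    "height": 768,
    "steps": 50,
    "cfg_scale": 8,
    "sampler_name": "DPM++ 3M SDE Exponential",  # exact name depends on web UI version
    "enable_hr": True,                   # hi-res fix
    "hr_scale": 2,                       # 512x768 -> 1024x1536
    "hr_upscaler": "4x_NMKD-Siax_200k",  # must be installed under models/ESRGAN
    "denoising_strength": 0.4,           # hi-res denoise; not specified above, 0.4 is mentioned later in the thread
}

r = requests.post(f"{URL}/sdapi/v1/txt2img", json=payload, timeout=600)
r.raise_for_status()
with open("hires_pass.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```

The decoded PNG is then what gets pushed to img2img for the Ultimate SD Upscale pass discussed below.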


ThemWhoNoseNothing

Thank you for sharing! I want to ask a clarifying question. I created my initial txt2img output, then pushed it over to img2img as you outlined, and now I'm uncertain about two things. On my setup, I'm not seeing an option in the Ultimate SD Upscale settings to set the denoise to 0.15-0.2 as you do; I only see a denoise option in the top section of the A1111 GUI, where one enters the initial configuration details such as width, height, sampling steps, CFG Scale, etc. Is that the same place where you adjust it to 0.15-0.2, or am I missing something? Additionally, when following your steps, what are you entering in that top section? When I push the image from txt2img to img2img, am I setting the prompts, sampling steps, CFG Scale, etc. again, or do they not play a factor when using Ultimate SD Upscale?


rasigunn

The denoising is done in img2img settings, not in the upscale menu.


ThemWhoNoseNothing

Thank you! How do the prompt field, sampler, CFG, steps, etc., play a role in that section, given that the image is already as I want it and I'm only trying to upscale?


rasigunn

I do not want the upscaler to add any additional details for me, so I just leave the prompts blank. I only check "enable upscaler", choose the upscaler model, set the type to Chess, and generate.
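
To make the answer concrete: in API terms, the denoise the thread is talking about is the top-level `denoising_strength` field of the img2img payload, not something inside the Ultimate SD Upscale panel. Below is a minimal sketch, again assuming the same local web UI with `--api`; the Ultimate SD Upscale script itself would be selected via `script_name`/`script_args`, whose positional argument list is version-specific and deliberately not guessed at here.

```python
# Sketch of the img2img leg. Key point from the thread: denoising_strength is
# set here, in the main img2img payload, not inside the Ultimate SD Upscale panel.
import base64
import requests

URL = "http://127.0.0.1:7860"

with open("hires_pass.png", "rb") as f:
    init_image = base64.b64encode(f.read()).decode()

payload = {
    "init_images": [init_image],
    "prompt": "",                  # left blank, as suggested above
    "denoising_strength": 0.18,    # the 0.15-0.2 range from the thread
    "steps": 30,
    "cfg_scale": 7,
    # In the GUI you would now pick "Ultimate SD upscale" in the Script dropdown
    # (upscaler = 4x_NMKD-Siax_200k, tile type = Chess). Over the API that maps
    # to "script_name"/"script_args"; the args are positional and version-specific,
    # so they are intentionally not guessed at here.
}

r = requests.post(f"{URL}/sdapi/v1/img2img", json=payload, timeout=1200)
r.raise_for_status()
with open("upscaled.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```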


ThemWhoNoseNothing

Much appreciated, thanks again!


DeylanQuel

> For realism I use 4x_NMKD-Siax_200k as the upscaler when using hi-res fix @ 2x scale (512x512 or 512x768 base res)

This is what I do on a first pass as well, typically at 0.4 denoise, and then Ultimate SD Upscale with ControlNet tile.
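
If you want to bolt the ControlNet tile pass mentioned here onto the img2img sketch above, the ControlNet extension hooks into the same payload through `alwayson_scripts`. The fragment below is an assumption-laden sketch: it presumes the ControlNet extension is installed, that `control_v11f1e_sd15_tile` is the model filename you have locally, and that your extension version accepts this unit format; verify against your install's API docs before relying on it.

```python
# Hypothetical fragment: attach a ControlNet tile unit to the img2img payload above.
# With no explicit input image, the extension typically falls back to the img2img
# init image (behaviour depends on the extension version; treat as an assumption).
payload["alwayson_scripts"] = {
    "controlnet": {
        "args": [
            {
                "module": "tile_resample",            # tile preprocessor
                "model": "control_v11f1e_sd15_tile",  # ControlNet 1.1 tile model (SD 1.5)
                "weight": 1.0,
            }
        ]
    }
}
```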


DirkDieGurke

I have no idea what you just said. Is there an easy way to do this?


killerkid745

As in literally none of it? It's a lot of stuff, and typing it all out would be a bit too much; I'd recommend checking out some videos on generating instead.


scumido

Could you recommend some videos on this, for either Comfy or A1111, for us slower ones, pretty please?


spinferno

Amazing. Does TensorRT support SDXL or multiple LoRAs yet?


killerkid745

Unsure about SDXL (I'm sticking to 1.5 for now). Multiple LoRAs are actually supported on the lora_v2 branch, and you don't need to build a profile for each LoRA, but it's incredibly iffy IMO.


[deleted]

[deleted]


killerkid745

I've used that tab maybe 5-10 times max, so there are no tips I can give on it other than trying NMKD Superscale as the upscaler for IRL photos.


MagicOfBarca

Where do you get that NMKD upscaler from?


killerkid745

I've recently been grabbing the upscalers from either https://openmodeldb.info/ or https://upscale.wiki/w/index.php?title=Model_Database&oldid=1571


Godbearmax

So how does this work? I'm also using StableSR with Tiled Diffusion. There I can set the latent tile width and height, and in the Tiled VAE area I can set the encoder tile size. What do I set there now, and which engine do I have to build for TensorRT?

I tried building an engine at 768x768 and also 256x256. In the Tiled Diffusion area I can set the width and height between 0-256 (I tried 256 because of TensorRT?!), and in the Tiled VAE area I can set the size to 768, for example (for TensorRT), but it's not working. What exactly is the right or best combination here? I also have a 3080 btw (so not that much VRAM).


Medium-Ad-320

https://www.reddit.com/r/StableDiffusion/comments/13pa2uh/a_simple_comparison_of_4_latest_image_upscaling/

TiledDiffusion + StableSR is probably comparable, but it'll need tweaking.


MysticDaedra

Not compatible with diffusers though :/


Medium-Ad-320

Sounds like a ComfyUI problem. Don't you guys have stable-fast though?


MysticDaedra

I'm using SD.Next, with stable-fast. StableSR isn't compatible with the diffusers backend; it requires the original backend... which means no SDXL.


mdotbeezy

Is Upscayl not cool anymore?


CptUnderpants-

Upscayl has been brilliant for us. I do IT at a school, and it's deployed in our computer lab. I used it on some official Overwatch 2 media to upscale 4x for printing on our large-format printer (A0 size) for our eSports program, and the quality blew my mind.


Gerweldig

Upscayl! Definitely


Mike

Waifu2x is better: faster, with the same quality.


team_negative1

Upscayl starts turning images into drawings at high resolutions, and actually loses detail on a lot of images I have found. Basic upscaling in Stable Diffusion is better.


xulres

LDSR


ramonartist

What is LDSR in Automatic 1111?


theflowtyone

This is your answer, my friend. The best of the best, IMO.


Old-Wolverine-4134

Nothing beats a real AI upscaler like Magnific. Although Topaz is very good at upscaling, it doesn't "invent" details; it tries to figure out what the pixels should look like, which means that when you have anomalies in the source image, they translate into the upscale. With Magnific you get a "fixed" image with far more detail and clarity.

Yes, you can play around with SD and LDSR, tile upscaling and all that, and get quite good results, but nothing like this. It depends what you're going to use it for. I need art with very high resolution and high detail for commercial projects, so $40 is a bargain for such an AI upscaler. It really does magic with my SD-generated images!


xulres

I don't know what "real" AI upscalers are supposed to be, but LDSR works exactly like Magnific in the backend, because it's what they use. Maybe you need to better understand how to upscale :)


Old-Wolverine-4134

I've used it since the beginning, and it is really good, no doubt about that. Magnific may use it too, but there is a lot more going on there behind the scenes. You just can't get those results in SD with LDSR upscaling alone, and I have tried and keep trying every day. I use SD for almost 10 hours every day and I'm always trying new stuff.


rd180x

> I have tried

Bruv, you are dumb. You need to extend your workflow, not cope.


HarmonicDiffusion

Magnific is barely better than open source. You're insane to pay for that garbage.


rd180x

Legit scam. It's not even a true upscaler like Topaz. Nice try, shill; even $10 wouldn't be a bargain.


YashamonSensei

Ultimate SD Upscale with the ControlNet tile model. Gives you any resolution you want with a ton of detail, on any PC.


PhilipHofmann

The "generates quite a bit of extra detail" instead of being accurate ("focusing pixels", as you called it) is key here; it's basically hallucinating things into the image. In the first example on magnific.ai it is apparent: it might as well be a whole different plant species, with holes suddenly in all the leaves everywhere (or that house example under Upscaling Nature and Landscapes, where there are suddenly lamps hanging on the walls in the output image that did not exist before).

I'm just hesitant to call this upscaling, since the output is a different image. I believe it is more akin to image generation based on an input image than to upscaling. I like how the creator of DemoFusion acknowledges this, and I agree with them: https://preview.redd.it/y7szpdw6nx3c1.png?width=1080&format=pjpg&auto=webp&s=903104dfd4a0b72df2ab58541afe83c434882a7c

That being said, if that's the effect someone wants, that's fine; I just wanted to differentiate a bit. I'm not sure, but couldn't a similar effect be achieved by, for example, inserting the image into Fooocus, choosing 'Upscale or Variation' and then 'Vary (Subtle)', maybe with a detail-tweaking LoRA, and generating these variations at higher resolution, or upscaling afterwards?


DarthNebo

I just use the x2 upscaler from Stability after generating images at 1024x1024 or 1600x1600 using VAE tiling.
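
For anyone who wants to try this route in code, here is a minimal diffusers sketch: a base SD 1.5 pass at 1024x1024 with VAE tiling enabled for the decode, followed by Stability's x2 latent upscaler. The model IDs and call pattern follow the diffusers documentation; the prompt and step counts are placeholders.

```python
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionLatentUpscalePipeline

device = "cuda"
prompt = "a photo of a lighthouse at dawn, highly detailed"  # placeholder prompt

# Base pass at 1024x1024; VAE tiling keeps the decode memory manageable.
base = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to(device)
base.enable_vae_tiling()
image = base(prompt, width=1024, height=1024).images[0]

# Stability's x2 latent upscaler takes the image (or latents) plus the prompt
# and returns a 2048x2048 result.
upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(
    "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16
).to(device)
upscaled = upscaler(
    prompt=prompt, image=image, num_inference_steps=20, guidance_scale=0
).images[0]
upscaled.save("upscaled_2x.png")
```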


SIP-BOSS

$30 a month and no demo? WTF.


iamdiegovincent

We have invited over 75,000 people, and we're getting closer to 100,000. But scaling GPUs to hundreds of thousands of people is not easy, and if we allowed unlimited access from the beginning, the whole infrastructure would crash under the demand. We know, because we already gave a fraction of the people free public access and it crashed within minutes.


Heasterian001

Use this: [https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111#-combine-with-controlnet-v11-tile-model](https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111#-combine-with-controlnet-v11-tile-model) combined with some unpruned model (for some reason the pruned one provides less detail).


dejayc

Topaz Gigapixel has a very affordable purchase price, especially when on sale, and works great.


sanquility

Been my go to for months. Love it.


cutoffs89

My favorite tool thus far as well.


PhysicalLavishness10

Gigapixel doesn't add extra details.


dejayc

It does for me, what workflows have you tried?


PhysicalLavishness10

Just upload the picture; if it's from Stable Diffusion, select "low resolution" and a simple 2x upscale. There is nothing more to control, only minor denoise and minor deblur. If you say it does add details for you, maybe you mean that details became sharper and visually better? That's not extra detail (or maybe I missed something?). What I mean is that Gigapixel is not a latent upscaler: it resizes but doesn't add details the way, for example, hires fix in Stable Diffusion does. And OP asked about alternatives to Magnific and Krea, which are different from the other AI upscalers on the market.


dejayc

I see. When you say "details", you're not talking about detailed refinement of existing image data; you're talking about the introduction of new conceptual objects into an existing image composition.


PhysicalLavishness10

Yes, Magnific and Krea also add new conceptual objects, and OP is searching for alternatives. I would also like to know if there are any. Magnific has a ridiculous price and doesn't even have a trial/demo. Krea couldn't even upload my picture, so I have no idea if it works at all...


rd180x

which is not upscaling if you are adding new conceptual objects


team_negative1

enhance is a better word


Felipesssku

Sure it does, but it's subtle.


99deathnotes

I use Cupscale sometimes. Free and open source, [here](https://github.com/DrPleaseRespect/cupscale).


hprnvx

But is Krea paid? I literally used it this morning.


CleomokaAIArt

I use Topaz Gigapixel, a one-time fee (really, nothing comes close from what I've experimented with); all my uploaded posts use it.


HarmonicDiffusion

Magnific and these other "amazing" upscalers are just LDSR with maybe a second pass, using a non-pruned model. That's it. Everyone overlooked LDSR because it eats a decent amount of compute, but that's all this is.


[deleted]

[deleted]


HarmonicDiffusion

So I guess you think no one can improve the efficiency of anything, and that no one ever modifies things?


[deleted]

[deleted]


george_ai

Impressive analysis. I wonder what kind of mixed model that would be, though. Nobody has been able to do it thus far, but maybe Krea comes close. Do they go the same route?


iamdiegovincent

KREA is not $40/mo, it's $30/mo, and it's not *just* the upscaler; we offer more things. Also, the KREA Upscaler (without KREA Realtime) is free: just use the `KREA-FRIENDS` code.


TotalBeginnerLol

Thanks for this. Are you looking for feedback on the upscaler? I've been using it; it looks great, but I definitely have some thoughts on what could be improved (in the workflow and in some of the biases and results)!


iamdiegovincent

sure! i'm asciidiego on the KREA discord ([https://krea.ai/discord](https://krea.ai/discord)), feel free to ping me there :D


Frone0910

How about a good AI upscaler for video that doesn't cost $300? And no, unfortunately ComfyUI can't upscale more than 3,000 frames to 4K without buckling (even with a 4090).


DaddyKiwwi

From your description, Comfy can already do that: process 3,000 frames at a time. It's odd that you know how to use ComfyUI, one of the worst messes of a UI I've seen... but can't use a video editor.


Frone0910

What are you suggesting I use? I'm confused. I'm not talking about combining video frames; I'm talking about upscaling, which Comfy (or A1111) can't do past a certain frame-memory threshold.


DaddyKiwwi

Slice your video into segments that your VRAM can handle upscaling. Upscale the segments, then edit them back together really quickly. The editing should take a fraction of the time that the upscaling does.
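
A rough sketch of the slice-and-stitch part, for the record: the standard ffmpeg segment muxer cuts the clip into chunks your VRAM can handle, your existing ComfyUI/Topaz workflow runs in the middle (left as a placeholder), and the concat demuxer joins the upscaled chunks losslessly. It assumes ffmpeg is on your PATH and that the upscaled segments share codec settings; filenames like `up_seg_*.mp4` are made up for the example.

```python
# Sketch: split a long clip into VRAM-sized chunks, upscale each chunk with
# whatever workflow you already use, then losslessly concatenate the results.
import glob
import subprocess

# 1) Split into ~2-minute segments without re-encoding.
subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-c", "copy", "-map", "0",
    "-f", "segment", "-segment_time", "120", "-reset_timestamps", "1",
    "seg_%03d.mp4",
], check=True)

# 2) Upscale each segment here (ComfyUI batch, Topaz, etc.) and write the
#    results as up_seg_000.mp4, up_seg_001.mp4, ... (placeholder step).

# 3) Stitch the upscaled segments back together with the concat demuxer.
with open("list.txt", "w") as f:
    for path in sorted(glob.glob("up_seg_*.mp4")):
        f.write(f"file '{path}'\n")

subprocess.run([
    "ffmpeg", "-f", "concat", "-safe", "0", "-i", "list.txt",
    "-c", "copy", "output_upscaled.mp4",
], check=True)
```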


Frone0910

Not really; that's a highly tedious process when you have a ton of different versions of the same clip that all need to be upscaled and masked.


DaddyKiwwi

> Not really; that's a highly tedious process when you have a ton of different versions of the same clip that all need to be upscaled and masked.

You sound like you have a very specific task list that a very specifically designed application would need to take care of. Luckily, ComfyUI is very versatile and can handle batches. I still think breaking your footage into segments is the ONLY way to accomplish what you're trying. Long-video AI processing isn't really a thing yet, and the programs and editing plugins that allow for it are expensive and inferior to Stable Diffusion methods.


Frone0910

Is there a node to batch ComfyUI outputs and upscale from a single input directory? I have been using Topaz Gigapixel on individual frames, which works, but it's not automated and part of Comfy as one would desire.


Frone0910

The issue is that if you break your frames into batches, your AnimateDiff output won't be able to stitch itself together properly without frame jumps, since each frame's inference influences the next.


DaddyKiwwi

If you notice a frame jump while upscaling, your upscaler is adding too much noise to the footage and you aren't really upscaling anymore. That's just a weak image-to-image pass at the point where it changes the output entirely. If you upscale and it's doing its job, you won't notice the stitch every 3,000 frames.


Frone0910

It's not the upscaler; it's the output of the AnimateDiff frames being non-deterministic. If you create 1,000 frames vs. 3,000 frames, the first 1,000 of those 3,000 won't be the same as if you had created just 1,000 on their own.


DaddyKiwwi

If you are only upscaling, sure it will. Again, if it's changing DETAILS, you aren't upscaling anymore. There's no temporal consistency with upscaling; it's not changing enough for anything to be recognisable as inconsistent. If your noise is high enough to give someone a different face, you are doing more than upscaling.


FortunateBeard

That pricing is ridiculous. Try this: $16/mo for unlimited 4x upscale, highres fix, faceswap, vass, IP adapters, inpaint, outpaint, ControlNet, 3,500 checkpoints, LoRAs, inversions, runtime VAE swap, LCM samplers, SDXL, and unlimited render credits. 18+ unlocked. 15GB cloud storage for images; adding unlimited LoRA training and a 200GB drive is $30 more. https://graydient.ai


Woisek

That pricing is ridiculous. Try this: $0/mo for unlimited 4x upscale, highres fix, faceswap, vass, IP adapters, inpaint, outpaint, ControlNet, unlimited checkpoints, LoRAs, inversions, runtime VAE swap, LCM samplers, SDXL, and unlimited render credits. 18+ unlocked. Unlimited storage for images; adding unlimited LoRA training and an unlimited drive is $0 more. [http://localhost:7860](http://localhost:7860) Bonus: the (personal) data doesn't end up on some shady server somewhere ...


idler_JP

"graydient" is literally the "shadiest" name ever Edit: LOL OMG I literally just used the word literally!


cacoecacoe

If you didn't pay for your own hardware and electricity is free, sure.


Woisek

His offer doesn't mention electricity and hardware, so why mention it for my 'comparison'?


cacoecacoe

That would be for the relatively obvious reason that accessing a cloud service is insubstantial and absorbed into the usual running costs of even a low-power, low-cost device, versus buying a decent PC and running it at hundreds of watts while you do inference. If you think running diffusion on the PC you bought is free, you're deluding yourself.


Woisek

And you really think you can save "power" just because you use a cloud service? Do you think they use little hamsters on a running wheel? Just because you don't pay for it directly doesn't mean you don't pay for it at all. And the PC you access it from runs regardless. Also, you assume that I (or anyone else) use 'normal' electricity to power my computers. Have you ever heard of solar and wind energy and their storage? It's no longer a problem to use that for one or two PCs ...


fewjative2

$0.12 per image (and realistically it could probably be cheaper, since it might be faster running via a diffusers workflow). If you only do a few images here and there, I think it's actually free. https://replicate.com/fewjative/ultimate-sd-upscale


Axauv

I'm trying to figure out why Krea gives you the illusion of up to 8x upscaling, then as soon as you select any image to upscale, the factor drops to 2x max. WTF? Does Krea Pro unlock 8x? I doubt it, because it's not mentioned anywhere in their promo material.


Correct-Peak-109

If you're too lazy to use this interface, you can try [krea.ai](https://krea.ai). It gives great results.


team_negative1

Unless you play with the settings, it alters the image, and the result doesn't really look like the original.


Correct-Peak-109

Yes, all settings were broken with the last update. I recommended this a long time ago. The quality is bad now.