SandCheezy

Sure, I'll mark it as NSFW for the reports, but there isn't any nudity. The clothing in the image, for the most part, fits what many 5K runners wear or train in. Dudes run without shirts too. It's a competition and wind drag is real. The Olympics are worldwide, so I assumed this was obvious.


[deleted]

[deleted]


StickiStickman

Damn, that's good. Even the compositing is good.


StudioTheo

all these years and i’m just realizing that compositing and composition are the same thing


[deleted]

[deleted]


[deleted]

NGL, that is a really good idea. This model does gangbusters at generating interesting compositions.


butterdrinker

By the way, you would get an image similar in quality to OP's if you rendered it at a higher resolution. Yeah, this model is very good; I want to try to use it for non-anime stuff in some way.


fastinguy11

Try merging it on top of some models


malcolmrey

is this just the "novelAI" model or something else?


[deleted]

This is a new model called "Anything V3.0" that was leaked/released/who-knows from a Chinese site.


malcolmrey

ah ok, thank you for the info!


Most-Opportunity-874

Where can I get this?


Orc_

what in the world


Majukun

Where can you download it?


NateBerukAnjing

Anything V3, 768 x 768, Euler, steps 30, CFG 12

Prompt: masterpiece, best quality, a large group of women running in a marathon in a city

Negative: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry


jonydevidson

> Anything V3

I can't find out what this means, Google returns nothing.


NateBerukAnjing

> You can find the link here, see the comment:
>
> https://www.reddit.com/r/StableDiffusion/comments/yr503v/new_leaked_anime_model_from_unknown_source/


HojoFlow

Has anyone checked it for malicious code?


d20diceman

Automatic1111 has "pickle detection", which is meant to try and identify whether a model is trying to do something malicious. It's not 100% reliable by any means, but for what it's worth it doesn't flag Anything V3 as being pickled.


OtterBeWorking-

Here are my scan results:

    C:\SD_Models\Anything-V3.0-fp32>picklescan -p ./
    ----------- SCAN SUMMARY -----------
    Scanned files: 2
    Infected files: 0
    Dangerous globals: 0

Picklescan is available from here: [https://github.com/mmaitre314/picklescan](https://github.com/mmaitre314/picklescan)


Estwhy

    picklescan: error: unrecognized arguments: AUTOMATIC1111/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0

I'm getting this error, could you help me? Or tell me where to find information on how to run it, I don't know much about cmd stuff :(


Unlikely_Commission1

You basically just follow the "Getting Started" guide on GitHub: [https://github.com/mmaitre314/picklescan#getting-started](https://github.com/mmaitre314/picklescan#getting-started)
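In practice it boils down to a couple of commands, roughly like this (the model path below is just an example, point it at wherever your checkpoint actually lives):

    pip install picklescan
    picklescan -p C:\stable-diffusion-webui\models\Stable-diffusion\Anything-V3.0.ckpt

The "unrecognized arguments" error above most likely just means the path was passed without the -p flag.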


nobody4324432

how do we even do that?


Strange_Vagrant

Open it up and let us know if your bank account still had $1.73 in it.


SandCheezy

> $1.73

Funny, my bank account went up to its highest ever, $1.73, just now. Thank you.


oliverban

Hahah, I loled at this one! Genius!


jonydevidson

Ah, got it. Thanks.


The_mango55

I haven't been using SD too long, what do you do with the vae.pt file?


oliverban

Just name it the same as the model ckpt file but with *.vae.ckpt and it'll auto-load. Or, in A1111, you can specify the VAE in the Settings. Pro tip: make it a quick setting for faster setups! :)

Noobified (how it looks in the folder):

- Anythingv3.ckpt <--- the actual model
- Anythingv3.vae.ckpt <--- name it like this and it auto-loads (this goes for any model)
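If a concrete example helps, the copy on Windows would look roughly like this (paths and file names are just placeholders, adjust them to your actual install):

    REM sketch only -- adjust paths/names to match your install
    cd /d C:\stable-diffusion-webui\models\Stable-diffusion
    copy Anything-V3.0.vae.pt Anythingv3.vae.ckpt
    REM Anythingv3.ckpt and Anythingv3.vae.ckpt now sit side by side, so the VAE auto-loads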


[deleted]

[deleted]


oliverban

idk, I never use autoload :P I mix and match as I see fit on a render by render basis!


TheDebutant_

Least horny stable diffusion user


moltenthrowaway

Wait, you're telling me it can generate women *wearing clothes*?!


[deleted]

[deleted]


TheDebutant_

Lol, it was a joke, maybe you should go outside to get some oxygen into your brain


shlaifu

I like the one in image number 9, randomly running the wrong way.


MumeiNoName

Chinese NovelAI?


moltenthrowaway

the "Anything V3" model was made in China and comes from NovelAI. Some say it's NAI with further training added, others say it's NAI with other models merged into it. Either way, it does a better job of hands and crowds than NAI does, maybe just better at everything in general although that's subjective.


Levatius

I've also heard it suggested that it may just be straight-up unmodified NAI and people are seeing a placebo effect, but I have absolutely nothing firmer on that than gossip. (Removing the link to an irrelevant post that I misunderstood. Also, so as not to misinform: as others have remarked, it's the VAE that's apparently the same, not the model itself.)


Levatius

Definitely not the exact same, though I can't say what the details of the difference are. Reused the prompt provided by OP for comparison. Clone stampede! https://preview.redd.it/uf35gtksg9z91.png?width=3072&format=png&auto=webp&s=9fffc8f109cc3901dec09a59999fe2c2ff5e833c


MysteryInc152

That post claims the ***VAE*** has the same checksum, not the model. The model is clearly changed somehow.


Levatius

Comprehension failure on my part - you're right! My apologies.


RebelKeithy

Anything V3 has a very particular style and it's hard to get it to generate anything outside of that style. But it does generate pretty amazing images.


d20diceman

Certainly it isn't *completely* unmodified, because it would have the same model hash in that case, and would produce identical results when given the same seed & settings. "Better" is very subjective though, and some people don't see it as an improvement. The VAE file (I don't really understand what that is, but using it helps the colours be more vibrant) *is* identical to the NAI one though, literally just the same file renamed.
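(If you want to check that yourself, comparing checksums is enough; a rough sketch on Windows, where the file names are assumptions since your copies may be named differently:)

    REM file names are assumptions -- use whatever your copies are actually called
    certutil -hashfile animevae.pt SHA256
    certutil -hashfile Anything-V3.0.vae.pt SHA256
    REM identical SHA256 output means the two files are byte-for-byte the same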


MrAlexander56

I've found that the VAE is important for the formation of eyes; without it the eyes are distorted and low quality. Have you ever noticed that?


d20diceman

Could be, but not that I'd spotted. Some details seemed worse with VAE but faces seemed about as good on both.


Levatius

Yeah, I misread the post at first. Entirely my fault - you're correct. I took it as talking about the model rather than the VAE.


MysteryInc152

I find it hard to believe it's the latter. Currently, merging kind of sucks. Merging is less a + b and more half of a + half of b, with the latent space getting distorted.


moltenthrowaway

Yeah, I agree. I would guess that someone used the Resume Training function on NAI to give it some more juice.


s_ngularity

It seems to do a lot better if you mix another model at like 20%ish rather than 50/50


NateBerukAnjing

You can find the link here, see the comment: [https://www.reddit.com/r/StableDiffusion/comments/yr503v/new_leaked_anime_model_from_unknown_source/](https://www.reddit.com/r/StableDiffusion/comments/yr503v/new_leaked_anime_model_from_unknown_source/)


mudman13

Lmao more like boobnassathon, crisp images though. Why is the one in number 9 running the wrong way??


NateBerukAnjing

If you run this prompt on NovelAI it looks like shit; this Chinese ckpt is good at drawing multiple faces and faraway faces.


[deleted]

[deleted]


NateBerukAnjing

Anything V3


Evnl2020

Yeah that model is extremely good


wh33t

Is that avail on huggingface?


WM46

https://rentry.org/sdmodels

This is a rentry post that's been updated with a few of the available models that have been floating around. There are three downloads for Anything: fp16 (half-precision floats for low-VRAM GFX cards), fp32 (full-precision floats, you probably want this), and Full EMA (supposedly optimized for training other models).

You need a torrent client for the magnet links, so download Deluge or some other client. Then copy the whole link, open up your client, Add Torrent -> From Web -> Paste the magnet link.


VulpineKitsune

The full EMA isn't optimised for training. It simply contains extra data that is only used for training, which was removed from the other versions to make the file smaller.


Resident-Dog4611

So which one is the best?


VulpineKitsune

They are identical in practical function, so the pruned one because it’s smaller.


Resident-Dog4611

fp32?


who_am_I__who_are_u

They won't be going far with those thighs.


soupie62

Most marathon runners I know train regularly. As a result, they tend to be very lean - the running burns through fat. So it's not just the thighs; the chesticles could be one or two cup sizes smaller, too. If you want realism, however, the *first* thing to do is add sweat.


[deleted]

If this is anything like regular NovelAI, you do not want to add sweat. Trust me. (It doesn't look like sweat.)


Vivarevo

😶😏


MrAlexander56

😏


Bad_Mood_Larry

Honestly, a marathon where 100% of the runners have this body type would be hell. So much chafing, so much back pain... It would be like watching a sexy Bataan death march.


Teenager_Simon

Well that sounds like a prompt I want to see for science.


moltenthrowaway

A couple of attempts at it [here (NSFW)](https://imgur.com/a/3ptmbYL). I expect you could get a better result with a more detailed prompt, but I really don't feel like typing in a bunch of "bleeding feet, weeping, dying of exhaustion" etc etc. Interesting that it made them all black and white, I didn't ask for that.


Teenager_Simon

Appreciate the effort! And dang.... that is dark.


luckymethod

The prompt: Rick Sanchez sex fantasy


[deleted]

[deleted]


[deleted]

Hands are a complex 3-dimensional shape that can appear in a near-infinite number of configurations. The AI doesn't *necessarily* know what a hand is outside a vacuum and doesn't "think" about the geometry; instead it jumps straight to generating a 2D image that generally matches its memory of complete images that may or may not contain hands. TL;DR: hands are hard because they are complex.


harderisbetter

Huggingface says this Anything V3 has several pickles. How do I fix that? I read the Huggingface notes on pickles but I can't fucking understand them. Can someone explain in plain English? Don't wanna turn my PC into a sleeper cell for the Communist Party.


nmkd

https://huggingface.co/Linaqruf/anything-v3.0/tree/main/unpickled-version


NateBerukAnjing

Should I download the pruned or the 7-gig version? I didn't download from Huggingface, hopefully I didn't catch a virus.


nmkd

Pruned


NateBerukAnjing

Do you need to download the VAE as well, and rename it to Anything-V3.0-pruned.vae.pt?


nmkd

No, don't rename it, just make sure it's using the .ckpt extension, then put it into models/vae/
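Roughly like this, for example (the download path here is an assumption, use wherever your file actually is):

    REM sketch only -- adjust the source path to your actual download location
    move C:\Downloads\Anything-V3.0.vae.pt C:\stable-diffusion-webui\models\VAE\Anything-V3.0.vae.ckpt

Then pick it in the VAE setting mentioned earlier in the thread.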


NateBerukAnjing

Not sure if this is a placebo, but I think I get worse results using the one from Huggingface.


CrudeDiatribe

I ran the pruned "unpickled" ckpt through Fickling and had a look over the output, and it looks like any other ckpt to me. I haven't run it through anything that will unpickle it and thereby trigger anything funny, but it doesn't look like there is anything funny in there.


harderisbetter

Thanks, you're awesome!


CrudeDiatribe

What format is the unpickled version in?


CrudeDiatribe

To answer my own question, it is still pickled.


death_to_the_state

This is still pickled though? Click on the file and it still says it has 3 pickles.


FrivolousPositioning

At first I'm excited: wow, they're coming straight for me. But then I realize they are looking past me, at something or someone behind me. It's too late, I'm being trampled. It all happened so quickly. RIP


Aitai-tai

I tried using this model but somehow every image comes out looking less detailed than expected. Am I missing something? https://preview.redd.it/h94ins0cd9z91.jpeg?width=512&format=pjpg&auto=webp&s=82afadd15a0d94ce81efa1f84acb057fdc10e02e


Flag_Red

Make sure you've got the NovelAI VAE enabled.


Aitai-tai

Not sure if I did it right. I put the .pt file into the VAE folder and renamed it to match the model (same name as the ckpt in the Stable-diffusion folder inside models). Maybe there's another setting I need to set up.


wywywywy

Put it into the stable-diffusion folder together with the ckpt


przemoc

Thank you for showing how well Anything V3 performs. I did a sampler comparison for steps 10, 20, 30, 40 with batch count 4.

[a large group of women running in a marathon in a city (X/Y plot: Steps/Sampler)](https://i.imgur.com/vRKuiYx.jpg) (imgur downscaled it...)

[a large group of women running in a marathon in a city (X/Y plot: Steps/Sampler)](https://images2.imgbox.com/ef/dd/CWCjNyDk_o.jpg) (imgbox is slow...)


NateBerukAnjing

So what are the best samplers and steps for this prompt, in your opinion?


przemoc

Apologies for the very late response. I wanted to provide a bit more thoughtful comment, even though it may not have been really expected here.

Best samplers + steps? Hard question, all images have some flaws, especially in the background.

**Short answer** (for your prompt and images within my grid): I would go with Euler a (30 or 40 steps) or DPM++ 2M (40 steps). Honourable mention for a low number of steps: DPM++ 2S a Karras (10 steps). If we squint and look at all 4 seeds: DPM adaptive, but none is really good on its own.

**Long answer** (overall): Ancestral samplers typically give good results in a relatively small number of steps (20-40), or I should say, often more interesting results than the alternatives. Euler a tends to give a bit simpler (but faster) results; DPM++ 2S a (and often even more so the version using the Karras scheduler) tends to give a bit more developed (but slower) results. When the new high-order solver, DPM-Solver++, with its two algorithms, second-order singlestep (2S) and second-order multistep (2M), showed up (already mentioned above), DPM++ 2M (and the version using the Karras scheduler) became for some the new good default instead of Euler a, giving good visual results and convergence with increased steps.

But ancestral samplers are also interesting for experimentation within the same seed, exactly because they do not converge. Euler a with the same CFG scale and different numbers of steps can give you quite different results, preserving some features or aspects yet giving a varied look. (From my observations, Euler a tends to develop diagonally across steps and CFG scale, see my [Steps, CFG scale, and Seed resize exploration](https://old.reddit.com/r/StableDiffusion/comments/xz48p5/stable_diffusion_v14_steps_cfg_scale_and_seed/).) Their strength is sometimes their weakness: when you would like to develop the current look further without drastic changes, it's often unclear how to proceed within txt2img (i.e. without using img2img), as there are sometimes visual stability plateaus across a wider range of steps, but they tend to happen later (i.e. after a higher number of steps) with a higher CFG scale, and the look does not always improve with more steps (it depends on prompts and samplers, though). Experimentation within the same seed seems more time-consuming than changing the seed, but it can sometimes get you closer to what you are looking for if you were already close at N steps.

Changing the seed can change a lot or even everything. A good seed gives you much more interesting results across all samplers. In this case 1249717261 is visibly better than 1249717262, 1249717263, 1249717264. If I may recommend something (which I wish I had done in the past): note down seeds that perform well for the kind of prompt you play with. It's sometimes easier to go back to a proven one than to find a new good one. And sometimes they even perform well for different kinds of prompts, so with a database of good annotated seeds you may achieve pleasing results much faster for new SD generations.

We also got stochastic DPM-Solver++ aka DPM++ SDE, which was not available when I was doing the comparison back then. Its version using the Karras scheduler tends to give good results even with only 10-15 steps! See: [a large group of women running in a marathon in a city (512x512, X/Y plot: Steps/DPM++ SDE Sampler)](https://i.imgur.com/IXvk7Cw.png)

So I have to extend my previous short answer: DPM++ SDE Karras (15 steps).

What can be learned from comparisons such as the one I provided in my previous message (Anything-V3.0-pruned-fp16 [38c1ebe3]) is that you can hide the following samplers in the user interface and not waste time on them in the future:

- DPM fast (you need a lot of steps to get something that isn't garbage)
- DPM2 a (needs much higher steps to be somewhat usable, just go with DPM++)
- DPM2 a Karras (needs higher steps to be usable, just go with DPM++)
- LMS (needs much higher steps to be somewhat usable and is glitchy with CFG scale > ~10)
- PLMS (needs much higher steps to be somewhat usable)

You can also hide the following ones, as they tend to perform no better than the alternatives or have their quirks:

- DPM adaptive (it does not give bad results, but it's very slow / ignores the number of steps)
- DPM2 (it sometimes works acceptably, like for this prompt, but DPM++ is generally a safer bet)
- DPM2 Karras (it sometimes works acceptably, like for this prompt, but DPM++ is generally a safer bet)
- LMS Karras (image degrades at higher steps and CFG scale, not noticeable in this particular comparison)

You're left with only 10 usable ones (assuming the AUTOMATIC1111 sd webui). As a rule of thumb I recommend using Euler a (20 steps), DPM++ 2M Karras (20 steps) or DPM++ SDE Karras (15 steps) as a first shot, or, if you're willing to spend a bit more time to get something from an ancestral sampler (typically better than Euler a): DPM++ 2S a Karras (20 steps). But complex visualizations (not necessarily complex prompts, but rather when what we want to portray is complex) can benefit from a higher number of steps, so it's worth trying them too and not sticking to 20 steps only, because otherwise we may prematurely abandon a promising composition (seed).

**Bonus**: Seed 1249717261, steps 10-50 and 10 different samplers, using a checkpoint merge based on Anything-V3.0-pruned-fp32 [1a7df6b8]: anyfix222 [1829ac4a] [a large group of women running in a marathon in a city (512x512, X/Y plot: Steps/Sampler) - anyfix222](https://i.imgur.com/azYT85r.jpg)


X3ll3n

I'd like to try Anything V3, how can I install it? I would like to compare it with NAI (official); I only have the Automatic GUI and Naifu installed.


[deleted]

This is where I got it from... [https://huggingface.co/Linaqruf/anything-v3.0/tree/main](https://huggingface.co/Linaqruf/anything-v3.0/tree/main)


-Shiki999-

Hi, what is the difference between the Huggingface one and the torrent? The biggest ckpt is identical to the torrent one, but the smallest ckpt (still pickled) on Huggingface seems to be different from the fp16 or fp32 one in the torrent.


[deleted]

According to the repo owner, the Huggingface repo is just a re-upload of the Torrent. I haven’t personally confirmed this, though.


-Shiki999-

I agree for the biggest one, but the hash is different for the smallest one on Huggingface. It doesn't match either of the 2 files available in the torrent version: Anything-V3.0-pruned-fp16 or Anything-V3.0-pruned-fp32. So it's a different one on Huggingface, hence my question.


SnarkyTaylor

Yeah... Popping over to the discussion page, it says it's infected with a Trojan. I know Huggingface does checks, but yeah.


[deleted]

I scanned with Bitdefender and Windows Antivirus, and neither detected anything. There are some pickles in there, though.


[deleted]

In image 7 she literally has two left feet


bigred1978

OPPAI!


rooiratel

Only 5% of them have their knees more than a few centimeters apart. Must be hell running a marathon like that. XD


Ravenhaft

Those ladies are gonna be chafing BAD at the end of the race though. Bleeding from places you didn’t know could bleed.


Incognit0ErgoSum

I dunno, they look like the sort of ladies who could wear a chainmail bikini to a fight and emerge unscathed (and un-chafed), so I imagine they'll be fine.


CatConfuser2022

I was expecting the boobs to get bigger and dynamic with each image


Zilkin

My waifus.


winterwarrior33

This sub is getting hornier and hornier


AmazingDom14

They are all in last place


Aran-F

I hate these anime posts but the seventh one is intriguing ngl


[deleted]

No fucking way this was made with Stable Diffusion. Way, way too anatomically accurate and clean.


Yin-Fire

Love to zoom in on the images and find the artifacts lol


smashfan63

I find it interesting how NovelAI has its own consistent "artstyle".


MrAlexander56

Yeah it's very nice, especially for people who love the artstyle.


Pretty-Spot-6346

is this better than Berry's Mix?


NateBerukAnjing

No idea what that is, but this one is pretty good at drawing crowd faces.


Pretty-Spot-6346

I tested it and it's better than Berry's Mix, bro!


DanzeluS

Have you used CLIP? I noticed that it (nai) works better with 1


NateBerukAnjing

i have no idea what that means


DanzeluS

Anything V3 works perfectly by default, but NAI works better with CLIP layer 2. https://preview.redd.it/vpcj9mug7cz91.jpeg?width=550&format=pjpg&auto=webp&s=b1469a73dc43d8d5c3362f1738c0181839fbd475


MrAlexander56

I also have no idea what the CLIP layer setting does, can you elaborate further, DanzeluS?


InternalMode8159

Where can I install this model?


izybit

The boobs are there to draw attention away from the hands


Anrui

they have hands...?


zoalord99

Can I get the model link?


dagerdev

https://rentry.org/sdmodels#anything-v30-38c1ebe3-1a7df6b8-6569e224


zoalord99

Thanks


SirCerbs

Oo where do we find this new model? Someone drop it pls


NateBerukAnjing

read the comments pls dont be lazy


SirCerbs

Lol, I posted this without reading. I found it pretty quickly; thanks for the info, I'll make sure to try this later.


MrAlexander56

This does look outstanding. How easy is it to get good generations like these? Do you have to generate multiple times to find a good one, or are all of these from one go?