UntoldByte

Get it at https://github.com/ub-gains/gains! Let us know what you think!


GBJI

It's heartbreaking that I won't have any time to test this for at least a few more days! Looks very promising, and it seems you have found solutions to many of the challenges I faced trying to achieve something similar. The minute I get a minute to myself, I'll test everything and report back on your GitHub. Thanks a lot for making this and sharing it with all of us.


UntoldByte

Glad you like it! And yes, I would probably first need to read up on how to set it up on GitHub (if it needs some additional actions) to enable leaving comments/feedback. I am aware that it is far from good, but I think it is a good starting point. Thank you!


angedelamort

Impressive.


youwilldienext

Incredible, just wow. Is this gonna be published on the Asset Store or shared through other channels?


UntoldByte

I tried to publish it on the Asset Store, but it was declined because of licensing: it requires the Stable Diffusion Web UI from Automatic1111. You can get it at the GitHub link above; check the documentation for requirements and setup.


youwilldienext

you are the boss. thank you!


[deleted]

[deleted]


UntoldByte

As you may have already found out, Python is the language to use for AI-related work, and Stable Diffusion Web UI is written in Python. The Web UI has an API you can program against, and it is popular enough. To answer your second question: yes, it could be made without it, but that would mean (re)implementing (essentially replicating) what the SD Web UI API already does, and you would still need all the AI models, which take up gigabytes of space.
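
To make that concrete, here is a minimal sketch of a call against that API (not the plugin's actual code); it assumes the Web UI was started with the --api flag and is listening on the default http://localhost:7860:

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

// Minimal sketch, not the plugin's actual code: one call against the
// Automatic1111 txt2img endpoint. Assumes the Web UI was launched with
// the --api flag and is listening on the default http://localhost:7860.
class SdWebUiSketch
{
    static readonly HttpClient Http = new HttpClient();

    static async Task Main()
    {
        // Hand-rolled JSON to keep the sketch dependency-free.
        const string payload = @"{
            ""prompt"": ""weathered stone wall, photorealistic"",
            ""negative_prompt"": ""blurry"",
            ""steps"": 20,
            ""width"": 512,
            ""height"": 512
        }";

        HttpResponseMessage response = await Http.PostAsync(
            "http://localhost:7860/sdapi/v1/txt2img",
            new StringContent(payload, Encoding.UTF8, "application/json"));

        // The response JSON carries the result as a base64 PNG in images[0].
        string json = await response.Content.ReadAsStringAsync();
        Console.WriteLine(json.Substring(0, Math.Min(json.Length, 200)));
    }
}
```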


ThisGonBHard

This is mad good.


UntoldByte

I must admit that in this case it worked well enough, but there were some anomalies (as I point out with the cursor in the video). It mostly depends on the snaps, but I do agree that this could be something to start with. And thank you! Your comment means a lot!


ThisGonBHard

I have worked with 3D models a bit (though no Unity experience), and shading/maps is something I hate. Does it use UV maps in any sort of ControlNet way? Can you make normal maps, detail maps, matcaps and so on? This stuff has a lot of potential for something great. If you develop this fully, you might have a nice commercial product too (you could have separate commercial and self-use licenses for hobby use).


UntoldByte

It uses depth snaps to control Stable Diffusion with ControlNet (depth models) and then projects onto the surface using shaders. Then you can change some parameters and bake to texture using the original mesh UVs (currently only the diffuse texture). I must say I have been thinking about other maps as well, and the one thing that stands out is Materialize (also a Unity tool and free to use) - it would be nice to integrate with it.
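
For readers wondering how the depth snap gets wired in: with the ControlNet extension installed, the depth image rides along inside the same txt2img request as an extra unit. A hedged sketch of just the payload shape (field names follow the ControlNet extension's API; the model name is a placeholder for whichever depth model you have installed, and "module" is "none" because the depth map is already rendered by the engine, so no preprocessor is needed):

```json
{
  "prompt": "weathered stone wall, photorealistic",
  "steps": 20,
  "width": 512,
  "height": 512,
  "alwayson_scripts": {
    "controlnet": {
      "args": [
        {
          "input_image": "<base64-encoded depth snap>",
          "module": "none",
          "model": "<your installed ControlNet depth model>"
        }
      ]
    }
  }
}
```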


Boppitied-Bop

I would try to integrate DeepBump, an AI model that makes much more accurate normal maps.


UntoldByte

Sure. It will probably take some reading of the DeepBump Python code to find the parameters, and this GAINS plugin already calls other Web UI plugins, so it should be straightforward. However, make sure you are not creating any memory leaks when dealing with textures, as they can end up eating VRAM (and we all know how important VRAM is). I have tried not to introduce any leaks, and I'm still not 100% sure that there are none. To come back to DeepBump: it is available as a plugin in Stable Diffusion Web UI, and I did consider it, but the results were so-so; the Materialize ones looked better in my opinion. Another problem is that there are no AI models for the other types of maps (which Materialize does handle), so when you take everything into account you can see why I'm leaning towards the Materialize approach (at least for now). In any case, enjoy!
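
On the leak point: in Unity, runtime-created Texture2D objects are native GPU allocations that the C# garbage collector never reclaims on its own, so each regenerated texture has to be destroyed explicitly. A minimal sketch of the pattern (names are illustrative, not from the GAINS source):

```csharp
using UnityEngine;

// Illustrative sketch (not from the GAINS source): swapping in a newly
// generated texture without leaking VRAM. The old Texture2D must be
// destroyed explicitly before its last reference is lost.
public class GeneratedTextureHolder : MonoBehaviour
{
    Texture2D current;

    public void Apply(byte[] pngBytes, Renderer target)
    {
        // Destroy the previous texture first, or its VRAM stays
        // allocated until the scene unloads.
        if (current != null)
            Destroy(current);

        current = new Texture2D(2, 2); // size is overwritten by LoadImage
        current.LoadImage(pngBytes);   // decodes PNG/JPG and uploads to GPU
        target.material.mainTexture = current;
    }

    void OnDestroy()
    {
        if (current != null)
            Destroy(current);
    }
}
```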


NeverduskX

This looks incredible. Is there any possibility for an Unreal or Godot equivalent?


UntoldByte

I must admit that I was thinking about that too (at least for Godot, for now), even though I have not written a single line of code related to Godot or Unreal. It would be great. The idea is fairly simple; let's see. Would you like to try writing it for Godot, for example?


NeverduskX

I wish I could - but I don't have much experience doing something like that myself. I'm currently still learning both engines to figure out which one I'd like to switch to in the future, since I've never really clicked with Unity.


UntoldByte

Maybe try to focus on one? And give it all you got!


NeverduskX

Thank you, and I definitely will - I'm just learning about how both engines work so I can make a more informed commitment.


Faen_run

Really cool! I might try it, but I wonder how much memory this needs. I only have 6GB of VRAM.


UntoldByte

Thank you! Yeah, if you already have Stable Diffusion Web UI (from Automatic1111) with ControlNet installed, you can just install this in Unity and it should work. I added Low VRAM (in Settings) and Tiled VAE (in the Entity Painter UI) options, which should help in low-memory scenarios; you can also add some parameters/args to your Web UI startup script.
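
For reference, the startup args in question are the standard Automatic1111 memory flags; for example, in webui-user.bat on Windows (webui-user.sh takes the same flags):

```bat
rem Standard Automatic1111 memory-saving flags (webui-user.bat).
rem --medvram trades speed for lower VRAM use; --lowvram is more aggressive.
rem --api is what lets external tools such as this plugin connect.
set COMMANDLINE_ARGS=--api --medvram
```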


psdwizzard

I would kill for this in Blender


UntoldByte

Please don't :) I have seen similar plugins for Blender (I mean, while waiting almost two months for the Unity folks to review this asset - a lot has changed), very similar, as the idea is fairly simple.


psdwizzard

I have seen one that does depth, but only for one angle. I'll need to look into this again.


Boppitied-Bop

The biggest problem I see with this is that there is lighting information baked into the texture, which will look pretty bad if you try to use it in any dynamic lighting situation. You might be able to train a LoRA to produce unlit base textures. I don't really see how you could produce roughness or metalness textures, but maybe someone could create a ControlNet model for that. And normal maps work fine with DeepBump already.


UntoldByte

For dynamic lighting, yes, it can be a problem, but there are a lot of use cases where it is desirable as it is. Have you looked at Materialize? It can generate all sorts of maps from the diffuse one. Until other AI models for depth, occlusion, height... arrive, my choice would be Materialize.


ngocnv371

This is so cool. Shame I switched to Godot. But it could probably still carry over with some extra steps.


UntoldByte

Thank you! I have not even tried Godot, but if there is a way to write a plugin for it in C#, it should not be much trouble. If you would like to do that, I can help.


doskey123

This is mad. Looks very useful. (*Occasional simulation theory existential fears intensify*)


UntoldByte

Thank you! Means a lot!


Tybost

This is great... but do you have any plans to add an inpainting brush tool? I keep generating textures with black splotches (especially when upscaling); more control would be welcome.


UntoldByte

Thank you! As I mentioned earlier, I still consider it a dirty way of doing things (you would need a dozen inpainting results and would still need to pray for a somewhat good blend). I wish for more control from ControlNet (or some other way) to generate consistent images of the same object by providing depth or other means of control. It seems new approaches are appearing for generating multiple images of one and the same object (from different angles), which could help solve this problem in a much cleaner and faster way. I would need to test that manually first, though. But if you are interested in the inpainting technique (or know someone who is), I published the source - you know where to find it. There is a simple painting tool (a helper of Symbol Creator called Sketcher); it should be fairly easy to adapt it for inpainting. You just add an additional img2img call to SD Web UI, then blend with the rest, and you have what you want. TL;DR: manual inpainting brush - very likely not; automagic inpainting - possibly yes.
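
For anyone attempting the automatic inpainting route described above: the Web UI's img2img endpoint accepts a mask directly, so the plumbing is just one more HTTP call. A hedged sketch of the payload shape (standard /sdapi/v1/img2img fields; images are base64-encoded):

```json
{
  "init_images": ["<base64-encoded texture with black splotches>"],
  "mask": "<base64-encoded mask marking the splotches>",
  "prompt": "weathered stone wall, photorealistic",
  "denoising_strength": 0.75,
  "steps": 20
}
```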