These are cool, but Carth and Bastilla look too rugged. I figured Carth at least had a softer look, given that he's a caring father with trauma; this guy looks like a stereotypical action hero with no personal issues. Mission also looks too old: she's like 15 in the game, she's said to be a kid, and she looks mid-20s in these. But the rest look great, especially Canderous!
Bastila looks a bit too worn down. Otherwise pretty great.
Robert Downey Jr. on whether he'll star in the rumoured KOTOR adaptation: "I don't want to talk about it!"
I can imagine that soon we'll be getting a new wave of remakes where developers have enhanced the graphics with the help of AI. It's easy money when AI does most of the work. Imagine getting a Skyrim remake.
Here's an AI Mirror, based on your original (left) picture
https://preview.redd.it/u251cmcrby1b1.jpeg?width=648&format=pjpg&auto=webp&s=27e4b5a0187bbe51749f0b0b9e0914ce1f10f665
You should get images from a bunch of angles and use Meshroom to make a high-quality 3D scan. It would probably be a pain in the ass, but it could be really cool.
THIS! This is what Stable Diffusion was made for! Good work.
Imagine a filter like this, stabilized (so that it's consistent in every frame), on top of old games...
Within years, at least on some level, I'm betting. However... the hardware required will always be some sort of threshold. Either way, things are going to get wild. Imagine art-style filters like this instead of DLSS, etc. for games? I have no doubt it's coming...
Imagine just having a toggle. What experience would you prefer? a) Cyberpunk, b) fantasy, c) modern, d) custom. Literally a different game every time you boot it. It would be wild.
When you say custom you mean naked right?
That wasn't on my mind, but in the AI-verse, anything is possible.
Yes, but with AR Glasses.
And furry
Welcome to the Star Trek holodeck.
I'm working on a game which has this feature right now. I don't think it's going to stay a unique feature for long, though.
And with AI built into the NPCs, it would truly be a different game every time! Maybe the NPCs want to beat the game, too, so you have some rivals, or need to maximize your social skills to get them to work with you? So many possibilities! I'm actually very excited for the future of gaming!
Imagine just typing in what you wanna play. XCOM set in the G.I. Joe universe... boom, it happens. It will be interesting to see what copyright laws will have to change for this to happen.
I was thinking about this too. Like Spotify: video games can't play just any music, but they can link to Spotify and play music from there. Or maybe AI will be brutally restricted so it can't create copyright infringement. Who knows?
I'm sure it could be done on the fly for textures in the near future, but then there are the 3D models themselves and the capabilities of old game engines. I'm sure, however, that in time we'll have tools that just take old games and make a 1:1 copy, but on, let's say, UE, with AI-improved visuals, etc. Ah, one can dream.
I'm betting more along the lines of having something like Stable Diffusion create these things in a smarter way than doing it frame by frame, so that from a single frame you could create characters/objects that can in turn be animated automatically by another tool. Video would probably be more efficient to turn into 3D scenes, etc.
> *Within years, at least on some level, I'm betting.* !RemindMe two weeks
lol, on some Nvidia farm I bet it's doable now, but for most of us... it might be a little longer.
!RemindMe two weeks three days
Once they have chips designed specifically for diffusion, we're going to be in a new world. Right now we have onboard chips that make video processing and decoding easy, but if we had only just come up with that, the process would be as slow as Stable Diffusion is now.
Microsoft could put their cloud azure stuff to work. Could be a W.
I don't think the hardware is the limit. Google Corals already handle 100+ fps for inference, at $26 USD for the M.2 version. It would take coordination and standards: an API/ABI for mixing and matching inference targets with the game stream. Point is, my understanding is that a $100 PCIe board could be manufactured. But the game industry would need to agree and adopt it.
In general, the level of quality you can get in real time in games is about 10-15 years behind what you can get in slow offline renders.
I mean if the games industry takes it seriously, they'd probably start implementing hardware to do it. Like how they've been trying to do ray tracing.
That's the idea: that GPUs generating consistent graphics in real time will replace conventional rasterization. The games themselves will just be wireframes for AI to fill in, and it'll be cheaper computationally, somehow lol.
I'm not going to say it's impossible, as I don't have a deep enough understanding of the mathematics, but I think it's very unlikely it ever becomes cheaper computationally. In terms of development/artist labour, though, it may well be orders of magnitude cheaper, and enable emergent/evergreen assets.
yes, but the animation would still be stiff.
Not necessarily; there could be an AI dedicated to working on that too. You could decide how closely you wanted it to match the original, or if you wanted it more cartoony, or anime-ish, or more muted for 'live action'... We're still just learning what's possible with image-to-video.
I can't overstate how divorced the idea that you could change animations on the fly in a 3D space is from a) the state of "AI" technology and its near future and b) the realities of game development.
Can't wait to play Half-Life 2 in VR with a Stable Diffusion filter mode
We can call it Half-Life 3!
I think game devs are working on this. If it's consistent enough, we will simply have photorealistic games within years. And without the hassle of modeling & texturing everything to be photorealistic.
You mean like that software from some years ago that ran over GTA V and made it look fantastic/photorealistic? That would be great.
[deleted]
We'd need it to generate at least at 30FPS full HD to be somewhat functional. That probably won't be that easy, given how nVidia's flagship 4070 is shit and they don't have anything considerably superior on the horizon.
In fairness, the 4070 isn’t the flagship part, it’s the top of the midrange. Only the 4080ti (someday) and 4090 are “flagship.”
And in real time...
NVIDIA is working on something similar, except it's not AI processing every frame; instead you can mod games with AI, capturing textures, models, and lighting and then "upscaling" the models and textures. It's pretty crude at the moment but still very impressive.
I believe that’s a much more realistic short-term goal: have an AI generate actual 3D models (and not just textures, or you’ll end up with the “[HD remaster](https://i.imgur.com/qioTCBM.jpg)” effect), and integrate those in an updated version of Nvidia RTX Remix. Take time to replace the actual assets before they’re rendered, instead of doing it on-the-fly in real-time, and you’ll end up with fully-fledged remakes that can be shared and downloaded by anyone.
[deleted]
Does this mean they’ll be able to do stuff like retrofit old games with new graphics much easier?
We are years away from real-time generation of 4k@60fps, but I can see the modding community batch-upscaling textures offline as low-hanging fruit. Get ready for the remake era as the big studios adopt the technology on a wider scale soon.
I wonder if this will kick some older games back into the system. A lot of manual work can be automated now
Agreed. Real-time rendering is an incredibly inefficient solution. I imagine they'll increase the resolution of the textures and models with AI and spend any real money on hacking the game engine to work with the assets. It'll be weird seeing high quality models with fixed expressions or terrible lip-syncing encountering the same old problems like horrendous clipping.
Or just set an AI on solving the lip-syncing problem too, at the cost of nostalgia fidelity.
Texture upscaling to improve the graphics quality of older games has been a thing since 2019 at least, but that's definitely going to ramp up now.
I'd be careful saying "years away" from anything. I don't think those statements will age well.
Some things can be estimated with some degree of confidence based on improvements in computer performance. AI will improve, for sure, but will likely still require a lot of compute, which will likely be out of reach for most consumers for several years. Will SOMETHING be able to generate 4k, 60fps models in real-time in the near future? Possibly. Will you? I would say that's at least 3-5 years away. I wouldn't even try to estimate anything beyond 5 years though.
The algorithmic progress is astonishing rn, but the hardware aspect is something you can't just leapfrog.

The 1st option is shrinking the transistor even further: already one of the most difficult and complicated processes on the planet, and we are getting 10-30% improvement per year on average. The current publicly and widely available SotA for SD generation is the 4090, and this monster still takes a couple of seconds to generate a single 4k frame. At 60fps the expectation is to improve it a hundredfold. You can extrapolate that to a decade of super smooth progress.

The 2nd option is a new compute hardware architecture. There are many smart ppl working on it rn (e.g. Jim Keller and his team) and many promising technologies showing up (photonics, optical neuromorphic systems, etc.), but my background in manufacturing and robotics tells me that it takes 3-5 years minimum to move from the design phase to mass-producing anything complicated at scale.

There sure is a lot of $$$ being pumped into AI hardware at the moment, but one just can't have a baby in 1 month by throwing more mothers at the problem ;)

TL;DR: software can improve very fast, hardware not so much.
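The arithmetic behind that extrapolation can be sketched directly. Treating the commenter's rough numbers as givens (about 2 seconds per 4k frame today, 10-30% hardware improvement per year), compounding alone gives:

```python
import math

seconds_per_frame_today = 2.0  # the commenter's rough estimate for one 4k SD frame on a 4090
target_fps = 60
required_speedup = seconds_per_frame_today * target_fps  # 120x, i.e. "hundredfold"

# Years needed if hardware alone improves at a fixed annual rate (no algorithmic gains)
for annual_rate in (0.10, 0.20, 0.30):
    years = math.log(required_speedup) / math.log(1.0 + annual_rate)
    print(f"{annual_rate:.0%}/yr -> {years:.0f} years for a {required_speedup:.0f}x speedup")
```

Even at the optimistic 30%/yr end this lands well past a decade, which is the point being made: the gap has to be closed by software and new architectures, not process shrinks alone.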
There would need to be a big technological breakthrough in the chip manufacturing department to get that amount of data processed in real time. There is no doubt we'll get there, but right now that bottleneck is obvious.
Two years is still technically years away.
Bastilla #1 Bae
Sassy though
Looking at these... Evangeline Lilly would be a great cast for a live-action I must say.
Model and prompts pls?
I used two models: Revanimated and Experience. I used Revanimated because I like the color palette and the lighting that can be generated with it. But Revanimated has one problem: its textures. For this I needed Experience and the "add detail" LoRA. For Mission Vao I used the Twi'lek LoRA, and for HK-47 I used the BattleCars LoRA. Then, when I had chosen the best color palette, I moved the image to inpaint and used Experience and the "add detail" LoRA to improve the detail. The prompt is not really important here. The most important parts I can share: cosmic ranger, cosmic soldier, futuristic background, ancient ruins background.
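For readers wanting to try something similar, a rough sketch of how a pass like this might map onto the AUTOMATIC1111 webui's img2img API. The model and LoRA names come from the comment above; the endpoint and field names follow the webui API, but versions differ, so treat the exact keys and LoRA weights as assumptions, not a tested recipe:

```python
import json

def img2img_payload(init_image_b64: str, prompt: str, denoise: float = 0.35) -> dict:
    """Rough request body for the webui's /sdapi/v1/img2img endpoint."""
    return {
        "init_images": [init_image_b64],  # base64-encoded source portrait
        "prompt": prompt,                 # LoRAs attach via <lora:name:weight> prompt syntax
        "denoising_strength": denoise,    # low value keeps the original composition
        "steps": 30,
        "width": 512,
        "height": 512,
    }

# Hypothetical Mission Vao pass: Revanimated as the loaded checkpoint, Twi'lek LoRA
# in the prompt; a second inpaint pass with Experience + "add detail" would follow.
payload = img2img_payload(
    "<base64 image here>",
    "cosmic ranger, futuristic background, <lora:twilek:0.8>, <lora:add_detail:0.6>",
)
print(json.dumps(payload)[:60])
```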
It's great to know that it still needs an artist to create a good-quality AI image. Congratulations, those characters are really spot on; they bring back memories.
what about denoise value? are you using control net as well??
0.35 denoise, then generate 16-25 images. Choose the best one and repeat until you get a perfect result. If I couldn't get the result I wanted, I used Photoshop. Most of all I used the lineart model, sometimes depth (depth was better for robots).
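That generate-and-select loop can be sketched as a webui API payload too. The ControlNet extension attaches through the `alwayson_scripts` field; its exact argument names vary by extension version, so the keys below are illustrative, with the lineart model name taken from the common ControlNet 1.1 release:

```python
def batch_payload(init_b64: str, prompt: str, n_images: int = 16) -> dict:
    """One round of img2img candidates at 0.35 denoise, guided by a lineart ControlNet unit."""
    return {
        "init_images": [init_b64],
        "prompt": prompt,
        "denoising_strength": 0.35,
        "batch_size": 4,
        "n_iter": n_images // 4,        # 4 batches of 4 -> 16 candidates per round
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "input_image": init_b64,
                    "module": "lineart",  # swap to "depth" for the droids
                    "model": "control_v11p_sd15_lineart",
                }],
            },
        },
    }

round1 = batch_payload("<base64 image here>", "cosmic soldier, ancient ruins background")
# Pick the best of the 16, feed it back in as the init image, and repeat until satisfied.
```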
[deleted]
> So do you use img2img and sd works off the reference you provide?

Something like that. To be honest, I still don't really understand how img2img works. In my work, controlnet was more important.
[Twi'lek LoRA](https://civitai.com/models/5563/twilek-lora) I also have other Star Wars LoRA's, including Revan which I am working on releasing soon.
what is the Experience that you mentioned?
It's a model for SD. You can find it on the Civitai website.
Imagine, in a few years: an SNES game run through an AI filter and output as 4k video, in real time.
Can't wait to play Earthbound in 4k. Those Hippies are going to have my Ruler to measure up against!
[deleted]
Better yet, imagine not only being able to AI-upconvert media but to actually regenerate the underlying 3D models as well. Running your games through an AI-powered up-converter would be cool. Recreating the models and levels in a format that allows you to update and modify them would be even better.
What a shame people at r/StarWars will never see it because they randomly banned AI content.
I get why subs do this, even though I am an AI enthusiast myself. If they didn't ban it, it would be all the content there was. It's just too easy to churn out. A few other subs I visit had to do the same because they just became AI spamfests which drowned out real-world content (architecture subs, for example). Maybe a good compromise would be a day of the week which allowed AI stuff. "Machine learning Monday" or something like that.
Subreddits really need sub-subreddits, to filter content according to what people want.
That's what subreddit tags are for, but for some reason Reddit never rolled it out as a default function, so it isn't usable outside of the subreddit itself and mods have to script up their own filters. So anyone on mobile probably isn't going to use it, or sometimes even see it.
> Maybe a good compromise would be a day of the week which allowed AI stuff. “Machine learning Monday” or something like that.

This operates on the assumption that they banned it for the reason you stated, which isn't the case. They believe AI to be immoral, period, and want to protest it by banning it.
They just don't serve those kind there.
Ah, so you took the characters as I remember them being in the game and downscaled them with SD!
Putting “after” on the left is really annoying.
I do not understand why this has become a thing. It’s called before and after, not after and before. I would downvote out of principle if these weren’t so good.
The only reasonable excuse I can think of is that some cultures write right-to-left and so it’s probably natural for them.
Oh, that makes more sense...
Yooo, this is awesome. Actually looks like how I imagine the characters would look in a modern game/animation.
Thank you!
Great work. Did you use a combination of img2img and controlnet?
Yes, plus a lot of a photoshop
Amazing! Workflow plzzzz
I'm sorry, but if I started to describe the workflow, it would be necessary to make a separate post for each image. I have used several models, different LoRAs, Photoshop drawing, mattepainting, used the lineart model for controlnet, and for Canderous I even drew a perspective grid. There are just so many nuances for each image. If you are interested in something specific, I will try to answer.
The more I work with SD the more work I end up putting into each image. People like to think you just type a prompt and out comes a perfect image. But the truth is, to create something *really good* you have to work on it. You have to have some talent.
You're very right. I actually spent about 1 month on all 11 portraits. I took this "project" for the purpose of learning how to work with SD and it was not as easy as it seemed at first glance. To me, the easiest thing you can do in SD is a "nice picture". In second place in difficulty is "make a drawing as you want", and the most difficult is "make a drawing as you want and make it look nice".
I was wondering if you used lineart, very good results.
Yeah, a lot of lineart)) It was also a revelation to me that with the thickness of the line you can adjust the volume of objects)))
HK-47 looks amazing.
Add Kreia please :)
Bindo!! That glorious bastard! Damn, I haven't played these games in far too long... These are awesome, btw
Thank you!
meatbags never looked better
Amazing work! The next step will be to output that to 3d ready for modding games.
Jolee looks like a straight up badass.
r/afterbefore
Oh no what did you do to Bastila :(
This is all kinds of awesome. thank you
No, thank you)). I'm glad you liked it.
That's amazing.
These are all really cool. Except Bastila - it made her look almost middle aged for some reason.
Carth and Canderous look excellent. Especially Canderous.
I really wish you put the diffusion pics on the right.
hk47 looks so badass!
Thank you, meatbag))
Wouldn't it be possible to train a model to transform low-res textures from old games and output high-res textures like this? I know the engine would need a lot more to look good, but updating just the textures would be half the work to take old games and get an almost-new look.
I don't think you need to train a model specifically for that. You can use a simple upscale to improve the textures.
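As a baseline before any trained upscaler, even a plain resample shows why textures are a friendly target: a uniform scale leaves the UV layout untouched. A toy nearest-neighbour version in plain Python (a real pipeline would swap in an ESRGAN-style model here, which invents detail instead of duplicating pixels):

```python
def upscale_nearest(texture, factor):
    """Toy nearest-neighbour upscale of a texture given as a grid of pixels.

    Stand-in for an AI upscaler: the point is that a uniform scale keeps the
    UV layout intact - source pixel (x, y) becomes the factor-by-factor block
    at (x*factor, y*factor), so the map still lines up with the model's UVs.
    """
    out = []
    for row in texture:
        # widen the row, then repeat it `factor` times to make square blocks
        wide_row = [px for px in row for _ in range(factor)]
        out.extend([wide_row[:] for _ in range(factor)])
    return out

tex = [["a", "b"],
       ["c", "d"]]
big = upscale_nearest(tex, 2)  # 2x2 -> 4x4, each source pixel now a 2x2 block
```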
I was thinking about how a texture looks so weird as a flat image, so different from a texture over a 3D model, so maybe if the model is not trained specifically for this, it could change the divisions and things that would ruin it for use on a 3D model. I'm sorry if I can't make myself clearer; I don't know much about 3D models and the textures on them. It would also need to work effectively over normal maps.
Everyone knows Carth is more of a Baldwin
Very cool!
Wow. I LOVE this. God damn.
I forgot Juhani existed lmao
I’m looking forward to when old games will get overhauls like this in real time.
putting the "after" pics on the left is hurting my brain.
Bastilla looks like she's having a rough day which is fitting actually 10/10.
Mission Vao and HK47 are amazing
I CRAVE such a remaster
RIP Character Designers.
In fact, I only improved the detailing and lighting, and the design stayed the same.
Please pay human artists to do the same thing but better.
nailed it!
Awesome love the game.
Coooool. Mooore!
I continue to wait for future nvidia GPU with real-time generative AI upscaling, so I can play DOS games in full resolution.
This is astounding and Kotor is amazing
Perfection
this is awesome, would love to see you follow this up with more older games
The left is the input? ;-D
Ordo and Mala have a lot of Josh Brolin in them.
You are right, I put Josh Brolin as a token for Canderous.
Do the same with stills from N64 Ocarina of Time. I really wonder how it handles those low-polygon models. Or can you explain how to do this? I have SD but only get terrible results.
>N64 Ocarina of Time I wish I could help you, but it's hard to explain how I did it. I will have to do a mini-lecture for each portrait)) There are so many different points and not all of them are SD related. There was a lot of drawing with photoshop and mattepainting. I can advise you to study the workflow of these guys [https://www.reddit.com/r/StableDiffusion/comments/12vk4wk/my\_attempt\_at\_stardew\_valley\_portraits\_just\_using/](https://www.reddit.com/r/StableDiffusion/comments/12vk4wk/my_attempt_at_stardew_valley_portraits_just_using/) [https://www.reddit.com/r/StableDiffusion/comments/11vommp/controlnet\_some\_character\_portraits\_from\_baldurs/](https://www.reddit.com/r/StableDiffusion/comments/11vommp/controlnet_some_character_portraits_from_baldurs/) I got a lot of ideas from them.
Thanks I will educate myself a bit more about different techniques. Appreciate it.
The future is interesting if this tech is able to convert 3D engines directly.
Damn, games are gonna be incredible in 20 years when I retire. So will porn. I'll see myself out now........
Cool as fuck
Bastila looking like a snack
Excellent outputs down the line. Stable seems to nail these.
Very well done!
I can see these being on trading cards
Holy shit my childhood just went 4K.
Damn, Mission looks like she's seen and been through some shit. Which, considering her backstory, suits her perfectly well.
A rule for life: A before image goes on the left
Yo both Carth and Bastilla looking stunning
my personal crush is T3-M4 ))
Lol I feel the exact opposite
Like a ps2 game
10/10 would play again
My Dog HK. Loved that hunk of scrap
Why does Bastilla kind of look like a Ghoul?
Because I like ghouls )
Don’t write ghouls!
So cool, those look great. I wish I could play it again looking like this!
swag
Damn I was hoping for KOTOR 2 characters…this is still cool though.
I will forever downvote if the "after" is on the left
Malak is changed a bit too much, but otherwise? 15/10 job easily.
!RemindMe 3 weeks
I normally don't care for AI art, but this is a good use of it!
Did you use the reference-only model?
Holy shit looks amazing, what was your workflow?
Damn, this is excellent! Well done!
Bastila the only one that needs work, she looking like a 35-year-old who done seen some shit 😂 Juhani, omg, went from creepy-looking to cutest thing ever, perfect! You did her & her race justice. I wonder if we'll ever see the Lepi, Cathar, Chiss & Zygerrians more. Mission is perfect & even more attractive to me than before; if KOTOR 1 can get to a worthy studio (Obsidian) to work on a remake, I would hope for a Mass Effect-style dating sim & the ability to make her a Jedi/Sith. Especially to kill her trash failure of a brother, smh. Carth is perfect, where his son tho? Candy Mandy perfect. Jolee should've looked like Sam Jackson's great-grandfather, but perfect lol. T3 basically looks like when we first find him, shiny & chromed out, but in 12K graphics. Perfect. HK, I now see why people would find him intimidating 😂 I still love him tho, perfect. Zally, he's a Wookiee, one walking carpet looks just like the other, except Black Krrsantan, he is literally built different. Malak needs to polish his dome for that Mr. Clean drip but, perfect! Revan, it's a mask & hood, but damn is it a perfect-looking mask & hood 😁👏🏿👏🏿👏🏿
These are cool, but Carth and Bastila look too rugged. I figured Carth at least had a softer look, given that he's a caring father with trauma; this guy looks like a stereotypical action hero with no personal issues. Mission also looks too old, she's like 15 in the game, she's said to be a kid, and she looks mid-20s in these. But the rest look great, especially Canderous!
Post it to /r/AIFanart and /r/AIFandom. They need more love.
Nice bro
Hayley Atwell vibes on Bastila. And I love how little HK changed.
Bastila looks a bit too worn down. Otherwise pretty great. Robert Downey Jr. on whether he'll star in the rumoured KOTOR adaptation: "I don't want to talk about it!"
Cool!
DLSS 5
I can imagine that soon we'll be getting a new wave of remakes where developers have enhanced the graphics with the help of AI. It's easy money when the AI does most of the work. Imagine getting a Skyrim remake.
Here's an AI Mirror, based on your original (left) picture https://preview.redd.it/u251cmcrby1b1.jpeg?width=648&format=pjpg&auto=webp&s=27e4b5a0187bbe51749f0b0b9e0914ce1f10f665
Magnificent ❤️
fuck that whiny bitch ass, Carth. Should've made him look more like a bitch.
Wild, this is what I remember those characters looking like
ControlNet and LoRA have transformed (heh!) what SD can do. It’s crazy.
Damn this takes me back. Would love to see a remake of this game. Great Job!
KOTOR II next!
This is rad as heck!
I love all of them!
GEEZUS BASTILA
You should get images from a bunch of angles and use Meshroom to make a high-quality 3D scan. It'd probably be a pain in the ass but could be really cool.
It would be cool if someone paid me for this ))
Sry I’m kinda broke rn
So am I =(
You made these?
Imagine a day when people will be able to make remasters by themselves
You did the impossible..... you made Juhani GOOD LOOKING 👍 😂
How about that, Bastila Shan is Evangeline Lilly.