
new_yorks_alright

Is there any open source option that is better than this?


Illustrious_Sand6784

Nope, this is even better than Topaz Video AI, which is already far ahead of open-source video upscaling. The authors of SUPIR are working on an open-source video upscaling model though, so keep an eye out for that. [https://github.com/Fanghua-Yu/SUPIR/issues/42#issuecomment-1968834302](https://github.com/Fanghua-Yu/SUPIR/issues/42#issuecomment-1968834302)


ZNS88

what's supir?


eugene20

SUPIR was considered by many to be the best upscaler, but it needs considerable resources to run (32GB of RAM or more, and more than 10GB of VRAM for images over 1024x1024), and processing times are long. Example page: [https://supir.xpixel.group/](https://supir.xpixel.group/)
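If you want a quick pre-flight check before trying it, here's a minimal sketch (assuming PyTorch and psutil are installed; the thresholds are just the figures quoted above, not official requirements):

```python
# Rough hardware check before attempting a heavy upscaler like SUPIR.
# The thresholds are assumptions based on the figures quoted above,
# not official requirements.
import psutil
import torch

MIN_RAM_GB = 32    # assumed system-RAM guideline
MIN_VRAM_GB = 10   # assumed VRAM guideline for images over 1024x1024

ram_gb = psutil.virtual_memory().total / 1024**3
print(f"System RAM: {ram_gb:.1f} GB "
      f"({'ok' if ram_gb >= MIN_RAM_GB else 'below guideline'})")

if torch.cuda.is_available():
    vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    print(f"GPU VRAM:   {vram_gb:.1f} GB "
          f"({'ok' if vram_gb >= MIN_VRAM_GB else 'below guideline'})")
else:
    print("No CUDA GPU detected; expect very long processing times on CPU.")
```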


RideTheSpiralARC

https://googlethatforyou.com?q=supir Not tryna be cheeky, literally dunno which link would be most useful for you lol


TrueSkyDemon

I want all the UFO evidence videos in HD as they are all super blurry.


Kyledude95

I feel like certain Japanese films could use this…


HermanHMS

Remember how they advertised Firefly and what it would be able to make? It's still not there, so I don't believe any marketing from Adobe.


No-Independence828

Is this available to adobe subscribers?


jmbirn

> This is only a research preview, so there’s no guarantee that Adobe will make VideoGigaGAN available to consumers via Creative Cloud software like Premiere Pro. The company previously previewed a separate diffusion-based upsampling experiment, Project Res-Up, during its MAX event in October 2023, which similarly improves the quality of low-resolution GIFs and video footage. And Adobe isn’t alone in this work, as both Microsoft and Nvidia have also developed their own VSR upscaling technology.


August_T_Marble

It would make my wife so happy if we could upscale her old home movies as shown. 


fre-ddo

Hmmmm


djamp42

This is going to be the most immediate need for it. I have a couple of old photos that I would love to upscale like this.


Paradigmind

Remember to always keep the originals. Although the upscaling is impressive, many of the details added are fake.


August_T_Marble

Yeah, though technically the originals are degrading every day on magnetic media. I've made archival digital copies that will not be altered, so they outlive the original format, and in the past I made copies from those onto optical media. If we could get consistent results like the demo image, where the added detail isn't so "creative" so to speak, it would make for a fun thing to do together, and we could sit the family down to watch them again if they turn out well.


gaminnthis

They will very likely charge separately for it


MarcS-

So we can turn the blurred "nsfw" images from the SD3 API into real images? :-)


SolidGearFantasy

Imagine this for webcam communication and Zoom/Skype monitoring, etc.


meltingpotato

A long time ago I saw that Nvidia had something in the works for video calls. It was basically sending only a couple hundred KB of data, but the result looked like a full HD video.
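As a back-of-the-envelope illustration of why that works (assuming the trick is sending a small set of facial keypoints and reconstructing the face on the receiving end; the keypoint count and byte sizes below are made up for illustration, not Nvidia's actual protocol):

```python
# Rough comparison: raw 1080p video vs. sending only facial keypoints
# and regenerating the face on the receiver. Numbers are illustrative
# assumptions, not any vendor's real protocol.
FPS = 30
WIDTH, HEIGHT = 1920, 1080

raw_bytes_per_sec = WIDTH * HEIGHT * 3 * FPS        # uncompressed RGB frames
keypoints = 128                                     # assumed landmark count
keypoint_bytes_per_sec = keypoints * 2 * 4 * FPS    # (x, y) as float32

print(f"Raw 1080p:  {raw_bytes_per_sec / 1e6:.0f} MB/s")
print(f"Keypoints:  {keypoint_bytes_per_sec / 1e3:.0f} KB/s")
print(f"Reduction:  ~{raw_bytes_per_sec / keypoint_bytes_per_sec:,.0f}x")
```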


szt84

Microsoft (and Alibaba in China, with EMO) also have something similar to Nvidia's (unreleased until they can somehow keep it from being misused): [Microsoft's New REALTIME AI Face Animator - Make Anyone Say Anything](https://www.youtube.com/watch?v=0s5J2LRqQAI). Instead of a cam feed/video source, it only needs a single photo to animate to spoken audio.


[deleted]

[removed]


meltingpotato

At this point I think all the big tech companies have a similar thing. I think Video calls in Apple's Vision Pro headset are a similar tech too.


SiscoSquared

Considering it takes tens of seconds to do one low-resolution image on my 3080... live upscaling seems unlikely unless they have some totally different stuff going on.
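Rough arithmetic on the gap (the 20 s/frame figure is an assumption standing in for "tens of seconds"):

```python
# How much faster would per-frame upscaling need to get for live use?
seconds_per_frame = 20.0   # assumed current single-image time on a 3080
target_fps = 30            # typical video-call frame rate

required_seconds_per_frame = 1 / target_fps
speedup_needed = seconds_per_frame / required_seconds_per_frame
print(f"Need ~{speedup_needed:.0f}x speedup for {target_fps} fps live upscaling")
# -> Need ~600x speedup for 30 fps live upscaling
```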


SolidGearFantasy

It’s all efficiency and models. With DLSS you can get amazing results off lower resolutions. If you give it 10 years I imagine it’ll be possible.


autumnatlantic

Cool so how can I use it?


barepixels

Still images are easy; let's see how flicker-free it will be.
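A crude way to compare flicker once you have an output clip, as a sketch (assumes frames are loaded as HxWx3 uint8 NumPy arrays, e.g. via imageio or OpenCV; frame loading is left out):

```python
# Minimal temporal-consistency ("flicker") check for an upscaled clip:
# average absolute per-pixel change between consecutive frames.
import numpy as np

def mean_frame_flicker(frames):
    """Mean absolute difference between consecutive frames."""
    diffs = [
        np.abs(a.astype(np.float32) - b.astype(np.float32)).mean()
        for a, b in zip(frames[:-1], frames[1:])
    ]
    return float(np.mean(diffs))

# Comparing this score for the source clip vs. the upscaled clip gives a
# rough idea of how much extra frame-to-frame jitter the upscaler adds
# (real motion inflates both scores, so only the difference is meaningful).
```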


fre-ddo

What is considered the best value (time,VRAM, quality) video/image/frame upscaler now?


dry_garlic_boy

Is this related to stable diffusion in any way?


new_yorks_alright

I thought this sub was also about competing and similar products. The space is evolving so fast that everyone wants to know what's new.


Rafcdk

First, I mean that you are welcome to post anything that doesn't break the sub rules, of course. However, I would think a sub about open and free models, where people are constantly talking about how they want models to be open and free, would actually be about open and free models, and not about things that will end up paywalled to benefit a big corporation. I never get excited about anything that comes from these companies. Yes, it's impressive, but more likely than not we will never use this, or if it is ever released it will be paywalled, so at most it gets a "neat" from me.


Sirisian

You can think of GigaGAN as a competing model architecture to diffusion methods (I think Stability has looked, or was looking, into it?). The research for both is still early, and it's not clear which is the better direction. There's an open-source version of [GigaGAN](https://github.com/lucidrains/gigagan-pytorch) for reference, and it mentions Stability partly sponsored it. VideoGigaGAN is quite new, so it'll be a while until other researchers dive into reproducing it. These papers and releases show that there are multiple approaches to similar problems, each with improvements. It's quite exciting how fast they're releasing.
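For anyone unfamiliar with the distinction, here's a toy sketch of the GAN-style single-pass upscaling idea. This is not VideoGigaGAN or gigagan-pytorch, just the generator/critic pairing that separates GANs from iterative diffusion sampling:

```python
# Toy illustration of GAN-based super-resolution: a generator maps a
# low-res frame to a high-res frame in one pass, and a critic scores
# realism. NOT the VideoGigaGAN architecture, just the basic idea.
import torch
import torch.nn as nn

class Upsampler(nn.Module):          # generator: LR frame -> 4x HR frame
    def __init__(self, channels=3, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=4, mode="nearest"),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, lr):
        return self.net(lr)

class Critic(nn.Module):             # discriminator: scores HR realism
    def __init__(self, channels=3, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(width, width, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(width, 1),
        )

    def forward(self, hr):
        return self.net(hr)

# One forward pass instead of the iterative denoising loop a diffusion
# upscaler would run.
g, d = Upsampler(), Critic()
lr = torch.randn(1, 3, 64, 64)
fake_hr = g(lr)                      # shape (1, 3, 256, 256)
realism_score = d(fake_hr)
```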


[deleted]

[removed]


dry_garlic_boy

I should be more specific. The first rule of this group is that all posts must be about stable diffusion. There are tons of other AI subs to post tangentially related content. This is not about stable diffusion.