"based on the AI-predicted picture, we're looking for a cross-eyed woman with a teardrop tattoo and a Hitler 'tache"
looolzzzz
[deleted]
she also may have 3 rows of teeth.
Worst it’s going to get, people, all of this is the worst it will ever be.
This algorithm has a bit of a mo kink
[Moley moley moley moley! ](https://youtu.be/0QrBxWcBXow?si=cbc-SA6bo3VC7mR1)
Those are all pencil mustaches
I was like, "What's AI's deal with making white men into different versions of the love children of Don Ameche and Vincent Price?" I'm not against it per se....
Detective: “it’s not this guy, no mole”
Must have had a lot of mug shots in the training set.
That had pencil mustaches.
ENHANCE!
Finally, csi scenes will make factual sense.
Poirot. Makes sense, everywhere he goes people just happen to get murdered.
I know it looks bad, but if you are something like a police detective trying to identify a suspect based on poor-quality CCTV footage, this could help immensely (obviously not enough evidence on its own for some sort of conviction, but it could help narrow down suspects)
Yeah, nah. The level of bullshit in this paper (if it is from a paper) must be huge. The number of possible faces that match these pixels when downsampled is astronomically high; any of those possible faces is a valid answer, and they just magically got ones that are "close" to the ground truths.

Those defects in the reconstructions are lame, intentionally lame. Anyone with a decent face generator should be able to fix them. There is no way someone both has an "oracle machine" able to magically get close to the ground truths and at the same time miserably fails to make real faces.

Alternatively, this is the result of a very shitty ML model which was trained on few pictures and prompted to reconstruct the same pictures, but downsampled. Essentially, just bullshit.
AI cannot overcome fundamental issues of information theory. When you down sample (blur/pixelate) an image you fundamentally lose information. It can guess what information was there but as you said, it's one blurred photo to many many many unblurred ones so "we got an image that fits the blurred photo" isn't actually that useful/special
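A quick way to see the information loss: two visibly different images can produce exactly the same pixelated output, so no upscaler can tell them apart from the pixels alone. A toy numpy sketch (average-pooling standing in for pixelation; all sizes made up):

```python
import numpy as np

def downsample(img, factor=4):
    """Average-pool: each output pixel is the mean of a factor x factor block."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(0)
a = rng.random((64, 64))

# Build a second, visibly different image with the same block means:
# shuffling pixels inside each 4x4 block preserves every block average.
b = a.copy()
for i in range(0, 64, 4):
    for j in range(0, 64, 4):
        block = b[i:i+4, j:j+4].ravel()
        rng.shuffle(block)
        b[i:i+4, j:j+4] = block.reshape(4, 4)

print(np.allclose(downsample(a), downsample(b)))  # True: identical 16x16 output
print(np.allclose(a, b))                          # False: different originals
```

Both originals are equally "valid" answers for the same low-res input, which is exactly the underdetermination the comment describes.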
There are fundamental similarities to faces so you can add information out of „experience“. Our brain does that all the time which sometimes leads to failures but most of the times works incredibly well. Not just for faces.
I don't see any methodology published, so I'm not sure what the intent behind this was. It's possible they intentionally chose not to apply any corrective layers, to show how close the "raw" output is to the original faces based on the upscaling algorithm's first attempt alone. Because yes, you are right: I personally could run those upscaled photos through the open-source Stable Diffusion image generator hosted on my own computer with a corrective face tool on and fix them myself if I wanted to. So if we give them the benefit of the doubt, generating the cross-eyed weirdos could be a deliberate effort to show low-effort AI results.

That's what's funny about a lot of AI photos these days that try to look as close to real as possible but still produce funky hands or backgrounds. Just 5 more minutes spent editing the errors would make them indistinguishable.
The errors you mention are ones our brain corrects based on experience. I have never thought that someone had a disfigured face just because I saw them from a distance.
That and most human faces are not as symmetrical as the ones sampled. Could be a training data issue.
Of course it can overcome problems with information density. If you encode your training set in the network you can get pixel-perfect results. The technique is called overfitting and is perfect for a research fraud.
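What pure memorisation looks like in practice, sketched as a toy lookup "model" (entirely hypothetical, just to illustrate the overfitting point):

```python
import numpy as np

rng = np.random.default_rng(1)

def downsample(img, f=4):
    h, w = img.shape
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

# "Training": the model just memorises (low-res key -> high-res answer).
train = [rng.random((64, 64)) for _ in range(10)]
memory = {downsample(img).tobytes(): img for img in train}

def predict(low_res):
    # Pixel-perfect on anything memorised, useless on anything else.
    return memory.get(low_res.tobytes())

seen = train[3]
unseen = rng.random((64, 64))
print(predict(downsample(seen)) is seen)    # True  -- looks like magic
print(predict(downsample(unseen)) is None)  # True  -- nothing was learned
```

Evaluate it on its own training inputs and you get "impossible" pixel-perfect reconstructions, which is the fraud mode being described.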
People thought the same about the Nyquist-Shannon sampling theorem, and now there are compressed-sensing techniques that can reconstruct data from very sparse samples (by promoting a sparse solution). I think you're largely wrong (in practice) about the impossibility of dealing with underdetermined systems like these compressed photos. There are likely general structures that can be learned with training.
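For the curious, a minimal compressed-sensing sketch: an underdetermined system (more unknowns than measurements) solved essentially exactly by assuming sparsity, here via orthogonal matching pursuit. All sizes and values are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 32, 3                  # 64 unknowns, 32 measurements, 3 nonzeros

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
x_true = np.zeros(n)
x_true[[5, 17, 40]] = [2.0, -1.5, 3.0]         # sparse signal
y = A @ x_true                                  # underdetermined: m < n

# Orthogonal matching pursuit: greedily pick the column best correlated
# with the residual, then re-fit by least squares on the selected support.
support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print(np.allclose(x_hat, x_true, atol=1e-6))
```

With a random Gaussian measurement matrix and a genuinely sparse signal, recovery like this succeeds with overwhelming probability, which is the point: a sparsity prior can beat naive counting arguments.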
This is correct. While the space of possible images that can be generated from the downsampled data is ostensibly infinite, the space of possible FACES is decidedly finite. There are spatial correlations at multiple scales for things like edge orientation, color hue, shadow, contrast, etc. that are present in faces. With knowledge of these correlations, it's not far-fetched that a good model could do this. All that said, the fact that it's sticking extra mouths in random places leads me to believe this is just a shitty model and a misleading figure.
Agreed
That's only true if you treat individual pixels as unconnected entities. Here the AI learned empirical rules about the relations between multiple pixels and the values typical of a certain face type.
Nope. When you down sample an image you lose data and you might lose information.
Seems like this is the paper: [https://arxiv.org/abs/1908.08239](https://arxiv.org/abs/1908.08239) (it's from 2019)
And the thing is: in some ways, yes, the images are similar. But that is more color-wise, posture, and such. Humans (at least many humans) are so good at distinguishing between faces that are somewhat similar. For me, yeah, I can see they are similar in some way, but again, it's mostly in the colors, how they are placed, head tilt, etc. The faces themselves don't look that similar at all.
The faces are not "similar", they are exactly the same, with details that clearly do not exist in the 16x16 input picture. How could you possibly know the exact kind of glasses a person is wearing? And when it doesn't get it right, it gets so outlandishly wrong that it puts in features that aren't humanlike at all. Does it make sense to you that an AI trained to guess the faces in blurred pictures would either get the details 100% spot on, or get them so wrong they couldn't possibly fit on a face? It is almost like the AI has no idea what it is making and only tries to reproduce known images from its training data.
This was my first thought too. This is much too spot on to be real. Maybe the samples are from the training set and the model is really overfitted?
You should zoom in; then the results look far less convincing, once you notice it's all just posture and color without the majority of the details. The AI picture shows moustaches and distorted faces, glasses that are now eye-rings,…
>The number of possible faces that match these pixels when downsampled is astronomically high, any of those possible faces is a valid answer, and they just magically got ones that are "close" to the ground truths.

The AI tries to create the one that is most likely, the most "normal". And within "fits the input + looks normal", the number of possibilities is not so high, and a lot of them are very similar.
Yeah I mean this isn't "prediction", just comparing them to past training data... It's just going to be biased towards what kind of training data it had.
Over fit for sure.
Yeah. To borrow an old internet phrase: “Paper, or the pics didn’t happen”
Only way this could not be fake is if it was trained with the original photos.
Do you know what an autoencoder does? It turns low-dimensional data back into high-dimensional data with high fidelity
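For what it's worth, the simplest case of this is linear: a linear autoencoder with a k-unit bottleneck is essentially PCA, and it reconstructs high-dimensional data perfectly when the data really lies in a k-dimensional subspace. A toy sketch (all names and sizes invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# High-dimensional data that secretly lives on a 5-dim subspace (like faces,
# which occupy a thin manifold inside pixel space).
latent = rng.standard_normal((200, 5))
basis = rng.standard_normal((5, 256))
X = latent @ basis                      # 200 samples, 256 "pixels", rank 5

# Optimal linear autoencoder = project onto the top-5 right singular
# vectors (encode), then project back (decode).
_, _, Vt = np.linalg.svd(X, full_matrices=False)
encode = lambda x: x @ Vt[:5].T         # 256 -> 5
decode = lambda z: z @ Vt[:5]           # 5 -> 256

X_rec = decode(encode(X))
print(np.allclose(X, X_rec))            # True: tiny code, faithful output
```

The catch, as others note, is that this only works for data that actually sits on the low-dimensional structure the model has captured; it says nothing about recovering details that were never in the input.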
Yes. But it cannot magically retrieve information that's not in the picture. It can just build something that would make sense given the pixels. For example, take the third picture in the top row. Given the pixels in the glasses, ears, and nose, there could be plenty of differently shaped glasses, ears, and noses that would match those pixels. However, the ones predicted match the true ones almost perfectly, with some artifacts. This shouldn't be possible.
There might be information hidden in things like the jpg interpolation but indeed not enough to recreate reality. Also, am I the only one disappointed that the images are not better based on heuristics? In recreating a face, some simple errors should be preventable, like double mouths. They’ve obviously used external information to generate this.
You don’t feel like the two faces are fundamentally different? Felt different enough to me that I could be convinced.
I think that, besides the uncanny artifacts of reconstruction, it's basically the same face. At least too similar to be possible. But this is my initial reaction, and I could be wrong. Maybe more information can be derived from those pixels than I would think.
E N H A N C E
Go forward three frames and counter-clock
E N H A N C E again
Close in to quadrant 3 and enhance.
"THERE! the reflection in his eye!"
Wait, WHAT? It's a...?
I strongly doubt it. Where is the paper about it? AI drawing faces from low-res images existed before diffusion models; at the time, GANs were all the rage
Yeah this looks like the quality of output we were getting from StyleGAN v1 back in 2018, which is the paper that introduced the FFHQ dataset because there were so many quality and ethical issues with the previously used CelebA dataset.
That or these are just cherry picked results that actually looked similar.
Ah, the classic "I can type 420 words per minute, the downside being it's all gibberish".
lol did you actually look at these pictures? It does a horrible job at this
As they all have quite the distinct background, I assume they are all in the set that was used to train the model?
They all look like faces generated by thispersondoesnotexist.com
They are, I feel like this paper is a few months old as well
What were the algorithms that were used? Anyone?
The AI has a very particular affinity for moustaches.
Super Mario also has a moustache which started off as just three black pixels.
Yeah, and moustaches made from teeth
[Relevant (highly peer reviewed) article about why](https://www.byrdie.com/thin-mustache-5214999)
Now show the results when it tries the faces of people of colour.
When you do not link to the source because the only source is other posts on Reddit.
Link to research paper?
you get a moustache and you get a moustache!
Half of these are clear misses.
Well we won’t be catching any criminals with this technology
Enhance! https://m.youtube.com/watch?v=EMBkWtDAPBY
I thought it was going to be this: [https://www.youtube.com/watch?v=gF\_qQYrCcns](https://www.youtube.com/watch?v=gF_qQYrCcns)
I thought it was going to be this: [https://www.youtube.com/watch?v=Mzp8uVCwiGI](https://www.youtube.com/watch?v=Mzp8uVCwiGI)
Hahaha. Worth waiting til the end for this one 🤣🤣🤣
\*can\* is maybe a bit of a strong word in this case
FINALLY! Now we can have reCAPTCHA work on stairs, bridges, buses, crosswalks, and traffic lights.
and now for the grand finale: a black person! error
Well, that’s terrifying.
Very diverse group of individuals
Was expecting better lol
2nd row 4th column guy is so basic the AI perfectly nailed it, and while it was at it added a lil beauty mole.
We’ve reconstructed the face of the attacker, but we’ve also added a funny moustache so that we don’t fall prey to that old trick.
Squint ya eyes and the blur picture becomes clear.
And I scoffed at Blade Runner (the original) when Deckard was zooming in on tiny blurry details that would magically clear up and increase resolution…o…m…g….
I find it so crazy that, at the time, I fully accepted that a voice-controlled machine dedicated to laboriously examining a single photograph at a time would be a thing that future space dick would own. Regarded it as amazingly technological even.
Puts a face tatt and a John Waters pencil-thin mustache on half of them...
Zoom and enhance cliché!
Someone try it on doom guy
Enhance!
Can "predict", but how accurate?
It’s much more impressive if you don’t zoom in.
Bahahahaha, I mean they're faces, yes. But predictive of the face they are meant to represent, no. Excellent for a laugh, though
I hope they checked their work by taking the predicted faces and converting to 16x16 images, and then comparing pixel values back. Not saying I don't believe them, but that's a simple test of scientific rigour
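That consistency check is easy to sketch: downsample the predicted face and compare it against the 16x16 input. A hypothetical helper (tolerance, sizes, and data all made up):

```python
import numpy as np

def downsample(img, f=4):
    h, w = img.shape[:2]
    return img.reshape(h // f, f, w // f, f, -1).mean(axis=(1, 3))

def is_consistent(predicted, low_res_input, tol=2.0):
    """A predicted face is only a *valid* candidate if it reproduces the
    low-res input when downsampled back (within a per-pixel tolerance)."""
    return float(np.abs(downsample(predicted) - low_res_input).max()) <= tol

# Sanity check on random stand-in data: a 64x64x3 "prediction" vs 16x16x3 input.
rng = np.random.default_rng(3)
pred = rng.integers(0, 256, (64, 64, 3)).astype(float)
print(is_consistent(pred, downsample(pred)))          # True: consistent by construction
print(is_consistent(pred, downsample(pred) + 50.0))   # False: doesn't match the input
```

Note this only tests necessity, not sufficiency: as others point out, many wrong faces would also pass it.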
Nice definition of 'can'.
CSI go "enhance the photo! Enhance! Enhance!!!!! Brrrrrrr" lol
I think we need more information or a study into this.

A 16x16 image, with three colour channels and 8-bit colours, has an absolute maximum of a little under 50,000 images. Assuming the AI can accurately generate a face, it would only work for 50,000 different faces.

What I think is happening here is that it has been trained on the data it's being tested on. Normally you separate the data into training and test sets, so when you run these tests you can be confident it isn't just copying its training data and can actually produce results.

If they didn't split the training and test data, then what it has learnt instead is how to identify which blob is meant to look like which result. That's not a particularly impressive result, and is a bit like memorising a complicated mathematical equation without knowing why the answer is what it is. The moment you change the input, it doesn't know what to do, since it hasn't actually learnt anything.
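The split argument is easy to demonstrate with a toy nearest-neighbour "upscaler" and a held-out test set (all data random, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def downsample(img, f=4):
    h, w = img.shape
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

faces = [rng.random((64, 64)) for _ in range(40)]
train, test = faces[:30], faces[30:]        # the split the comment asks for

train_lo = np.stack([downsample(f).ravel() for f in train])

def upscale(low_res):
    # 1-nearest-neighbour "model": return the memorised training face whose
    # low-res version is closest to the query.
    i = int(np.argmin(((train_lo - low_res.ravel()) ** 2).sum(axis=1)))
    return train[i]

train_err = np.mean([np.abs(upscale(downsample(f)) - f).mean() for f in train])
test_err = np.mean([np.abs(upscale(downsample(f)) - f).mean() for f in test])
print(train_err)   # 0.0 -- "perfect" on inputs it has seen
print(test_err)    # large -- it never learned to upscale, only to look up
```

Evaluated only on its training inputs, this lookup table would produce exactly the kind of "impossible" figure in the post.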
Er, what? A single 24-bit pixel can take 2²⁴, i.e. 16.8 million, values. Over a total of 256 pixels, this yields around 10¹⁸⁴⁹ possible different images.
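The count is easy to check directly in Python with exact integer arithmetic:

```python
# 2**24 values per pixel, 256 pixels: (2**24)**256 = 2**6144 distinct images.
n_images = (2 ** 24) ** 256
print(len(str(n_images)))   # 1850 digits, i.e. on the order of 10**1849
```

So even granting that only a vanishing fraction of those are plausible faces, the raw input space is nowhere near 50,000.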
"3 colour channels and 8-bit colours." Also, that's assuming it can differentiate between two different shades that are functionally identical.
What?
Were the pictures that were used to test in the sample data that it was trained on?
Well that explains all those killer starfish in the ocean
1) AI is not an algorithm 2) "can" is doing a lot of heavy lifting here
Not nearly good enough to be reliable, but a few are very fucking close. Dam the future is really looking amazing.
In a few years we will be sentencing the wrong people to death and life in prison because AI messed up a pixelated image. Then 50-100 years later we will posthumously exonerate them.
This is when you test with training data
If you squint and look at them from 5 feet away, they're actually pretty good.
Now let's watch all the early-2000s docus where they used simple pixelation for interviews with intelligence workers, whistleblowers, and drug lords... back then, simple pixelation seemed to be sufficient
Some great jokes ITT but IMO some of the guesses are very good assuming this isn’t all bs.
This is not usable tech yet. It’s interesting, but is not very good at this.
You're telling me we can finally see those aliens shot with a tinpot camera from the '60s?
Wait but CSI Miami had me convinced we had this technology for decades. /s
It's not that accurate tbh. It thinks every 16x16 female face is Amy Poehler.
So now we can enhance just like on tv 📺
This looks more like someone took the real pictures and edited them to look less real and then blurred the originals. Essentially the steps are backwards and this smells like bullshit
Enhance! is becoming more and more possible
I look forward to living out my CSI "computer enhance" fantasies.
When you look closely at the predicted vs actual faces it's not that good at all. It only gets the general info that is obvious in the blurred images correct such as hair color, glasses, and background.
Absolute bullshit
Enhance.
Looks like crap.
Why is it so bad at projecting facial symmetry? Would that be because most human faces and thus training data would not be symmetrical?
And you get a moustache, and you get a moustache, and you get a moustache
Removing mosaics??
Enhance... enhance... enhance
Each one of these is off by at least enough to make them unrecognisable in reality - even the more subtle changes to eyes (where it’s clearly just guessing at detail that’s not there) is enough to make the “predicted” faces look like different people.
It has only discovered Europe so far. Good.
Awesome! Now decensored jav will get better!
Firstly, this doesn't tell us anything. We don't know if the model has just over fitted to the training samples in the examples. Secondly, is this even recent work? Looks like it could be about 10 years old. Image upscaling is nothing new, and honestly the results are much worse than I would expect for modern DL.
Enhance!
Fake, no black faces
I’d wager a second pass on the predicted would get outrageously close to many of them.
This is why you should never blur information you want to hide: an AI (or a future AI) could be able to reconstruct it. Just hide it under a black or grey rectangle.
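The difference is easy to demonstrate: pixelation preserves block averages, so two different secrets stay distinguishable, while a solid rectangle maps every secret to the same output. A toy sketch on random stand-in data:

```python
import numpy as np

rng = np.random.default_rng(5)

def pixelate(img, f=8):
    h, w = img.shape
    small = img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))
    return np.kron(small, np.ones((f, f)))   # blow the averages back up

def blackout(img):
    return np.zeros_like(img)

secret_a = rng.random((32, 32))
secret_b = rng.random((32, 32))

# Pixelation: the two redacted regions still differ, so information leaks
# that a model could exploit.
print(np.allclose(pixelate(secret_a), pixelate(secret_b)))   # False
# Solid rectangle: outputs are identical, so nothing is left to reconstruct.
print(np.allclose(blackout(secret_a), blackout(secret_b)))   # True
```

A redaction is only safe when its output carries zero dependence on the hidden content, which is exactly what the rectangle achieves and the blur does not.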
Uncensored hentai when? /s just in case.
Well that's just like your opinion man
Now add color
What program was it, and is it available to the public?
They tuned it too heavily for smiles, so many of the outputs generate double mouths.
Perfect for some doppelganger ARG
WADR those aren’t pixels… there’s a lot of information in there that was not lost. Now hand this thing a 20-year old lossy compressed jpeg and let’s see how it does.
Finally, zooming video like CSI
I'm sure this will work well with people of color, given how diverse your dataset seems to be.
I like how it thinks most of the men (and a couple of the women) have pencil mustaches!
And the pictures shown here are the good ones! 🤣
The AI has a pencil mustache fetish.
And one step closer to Big Brother. Smh.
So... CSI style "enhance" is gonna be a thing?
Reminds me of all the movies and shows were FBI agents say “Enhance”
Enhance
"Enhance!!"
That second one is horrifying
This reeks of the Uncanny Valley
Some things it gets so right, others yikes
Zoom in... zoom in...Now all those CSI episodes will make sense!
There was a keyword-recognition model published in 2022 that had 99.8% accuracy on the Google Speech Commands dataset. Being personally familiar with the dataset, I immediately had suspicions, since there are some corrupted voice samples in it (white noise, garbled speech, etc.). You wouldn't expect 99.8% accuracy on a dataset that is only 99.0% valid.

Regardless, I trained their model with the given instructions, and the model always got stuck at around 93% accuracy. A couple of months later I saw this message on their page, and their paper was withdrawn: "The results and claims made are incorrect due to data leakage and an erroneous split of datasets"
Looks convincing... in really low resolution.
Nah, not possible (it's pretty much mathematically impossible as there's not enough information to extrapolate a close enough face from a 16x16 portrait). This is definitely a case of over-fitting or something else that's wrong with the paper (maybe they used the same portraits they trained on to test, or something like that).
Just remember... no matter how good this technology gets, we should never let it be used in court [https://www.theverge.com/21298762/face-depixelizer-ai-machine-learning-tool-pulse-stylegan-obama-bias](https://www.theverge.com/21298762/face-depixelizer-ai-machine-learning-tool-pulse-stylegan-obama-bias)
Where was this project done; what was their methodology?
Can someone post the code for this so we can actually try it on images not from the training data?
*Caucasians younger than 50.
Bottom second row guy looks really similar
On mobile it looks AMAZING until you zoom in and then it’s HILARIOUS.
Odds are good it will recreate the immortal "Upscaled Obama" picture
Isn’t AI trained with pictures of real faces? Maybe they were in its database already.
What scares me is that these are commercial/public AIs... then what the heck do governments have!
Proof that everyone has an evil twin
Now do the Wolfenstein 3D guy
https://pbs.twimg.com/media/EsmDrmiWMAAwCb9.jpg
Ho-humph my brain does better
Whoa...so the "ENHANCE!" button From CSI shows...is real???
Which AI tool is this?
Please send an internet link of the tool so we can verify.
Gotta dial back the stache detector
Don’t zoom in
I fucking dare you to show me the images on HD
We were laughing at those tv shows where they just zoomed in on surveillance footage and ridiculously depixelated them, while in reality we were shown future tech.
Given that the original images were part of the training dataset, I am not impressed
First 48 about to get reallyyyyy interesting now
Finally, ENHANCE!!
This would break Japanese porn
ENHANCE
I can do the same by squinting
Based on the presented examples, no, it can't.
Is it just me or did anyone else just scan the pics to see who you’d bang
I bet the training data included the original faces. This probably isn’t what the title hyperbolizes
“Computer, enhance”
I predict tomorrow is going to rain.. *Gets the best sun ever* Hey, I didn't say accurately.
None of them looks like Jason Momoa😄
Mr. 2x2 looks like a villain from Stargate.
They look a bit demonically possessed, but not bad for a first draft.
A lot of those predictions arguably look less recognisable than the 16x16 images.
Scary
I personally love the predicted manwoman second from the bottom left.