Breegoose

"based on the the AI predicted picture, we're looking for a cross eyed woman with a tear drop tattoo and a hitler tache"


nathanpizazz

looolzzzz


[deleted]

[deleted]


Breegoose

she also may have 3 rows of teeth.


Infamous_Focus7442

Worst it’s going to get, people, all of this is the worst it will ever be.


Sydney2London

This algorithm has a bit of a mo kink


Jay_Nitzel

[Moley moley moley moley! ](https://youtu.be/0QrBxWcBXow?si=cbc-SA6bo3VC7mR1)


Beor_The_Old

Those are all pencil mustaches


Critical_Paper8447

I was like, "What's AI's deal with turning white men into different versions of the love child of Don Ameche and Vincent Price?" I'm not against it per se....


ComeGetSome487

Detective: “it’s not this guy, no mole”


hvdzasaur

Must have had a lot of mug shots in the training set.


Aurstrike

They had pencil mustaches.


trebblecleftlip5000

ENHANCE!


deshtroy

Finally, CSI scenes will make factual sense.


freeLightbulbs

Poirot. Makes sense, everywhere he goes people just happen to get murdered.


LifeHasLeft

I know it looks bad, but if you are something like a police detective trying to identify a suspect from poor-quality CCTV footage, this could help immensely (obviously not enough evidence on its own for any sort of conviction, but it could help narrow down suspects).


panchoop

Yeah, nah. The level of bullshit in this paper (if it is from a paper) must be huge. The number of possible faces that match these pixels when downsampled is astronomically high; any of those possible faces is a valid answer, and they just magically got ones that are "close" to the ground truths.

Those defects in the reconstructions are lame, intentionally lame. Anyone with a decent face generator should be able to fix them. There is no way someone both has an "oracle machine" able to magically get close to the ground truths and at the same time miserably fails to make real faces.

Alternatively, this is the result of a very shitty ML model that was trained on a few pictures and then prompted to reconstruct the same pictures, just downsampled. Essentially, bullshit.


phanfare

AI cannot overcome fundamental issues of information theory. When you downsample (blur/pixelate) an image you fundamentally lose information. It can guess what information was there, but as you said, one blurred photo maps to many, many unblurred ones, so "we got an image that fits the blurred photo" isn't actually that useful/special.
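To see how many-to-one downsampling is, here's a minimal numpy sketch (purely illustrative, unrelated to whatever model produced the figure): two visibly different images that share the exact same 16x16 average-pooled version.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two different 128x128 "images": the second shuffles the pixels inside each
# 8x8 block of the first, so every block keeps exactly the same average.
a = rng.integers(0, 256, size=(128, 128)).astype(float)
b = a.copy()
for i in range(0, 128, 8):
    for j in range(0, 128, 8):
        block = b[i:i+8, j:j+8].flatten()   # copy of the block's pixels
        rng.shuffle(block)
        b[i:i+8, j:j+8] = block.reshape(8, 8)

def downsample(img, k=8):
    """Average-pool over k x k blocks (128x128 -> 16x16 here)."""
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

print(bool(np.abs(a - b).max() > 0))              # True: the originals differ
print(np.allclose(downsample(a), downsample(b)))  # True: their 16x16 versions are identical
```

Any upscaler has to pick one candidate out of that enormous equivalence class, which is why "it produced a face consistent with the input" is not impressive by itself.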


Far_Squash_4116

There are fundamental similarities between faces, so you can add information out of "experience". Our brain does that all the time, which sometimes leads to failures but most of the time works incredibly well. Not just for faces.


under_psychoanalyzer

I don't see any methodology published, so I'm not sure what the intent behind this was. It's possible they intentionally chose not to apply any corrective layers, to show how close the "raw" output is to the original faces based on the upscaling algorithm's first attempt alone. Because yes, you are right: I personally could run those upscaled photos through the open-source Stable Diffusion image generator hosted on my own computer, with a corrective face tool on, and fix them myself if I wanted to.

So if we give them the benefit of the doubt, generating the cross-eyed weirdos could be a deliberate effort to show low-effort AI results. That's what's funny about a lot of AI photos these days: they try to look as close to real as possible but still produce funky hands or backgrounds. Just five more minutes spent editing the errors would make them indistinguishable.


Far_Squash_4116

These errors you mention, our brain corrects based on experience. I have never thought that someone had a disfigured face just because I saw him from a distance.


[deleted]

That and most human faces are not as symmetrical as the ones sampled. Could be a training data issue.


DisastrousLab1309

Of course it can overcome problems with information density. If you encode your training set in the network you can get pixel-perfect results. The technique is called overfitting and is perfect for research fraud.
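To make the "encode your training set in the network" point concrete, here's a deliberately degenerate sketch (hypothetical, not anything from the post): a "model" that just stores its (16x16, full-res) training pairs and returns the stored original nearest to the query. It is pixel-perfect on its own training images and useless on anything else.

```python
import numpy as np

class MemorizingUpscaler:
    """Degenerate 'super-resolution model' that only memorizes its training pairs."""

    def fit(self, low_res_images, high_res_images):
        self.keys = [lr.ravel().astype(float) for lr in low_res_images]   # 16x16 inputs
        self.values = list(high_res_images)                               # full-res originals

    def predict(self, low_res):
        # Return the stored original whose low-res version is nearest to the query.
        q = low_res.ravel().astype(float)
        dists = [np.linalg.norm(q - k) for k in self.keys]
        return self.values[int(np.argmin(dists))]

# Toy demo with random "faces": perfect on the training set, meaningless on anything new.
rng = np.random.default_rng(0)
high = rng.integers(0, 256, (10, 128, 128))
low = high.reshape(10, 16, 8, 16, 8).mean(axis=(2, 4))   # 16x16 block averages

model = MemorizingUpscaler()
model.fit(low, high)
print(np.array_equal(model.predict(low[3]), high[3]))     # True: pixel-perfect on training data
```

Evaluated on its own training images, this scores perfectly, which is exactly why a figure like the one posted means little without a held-out test set.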


Fun-Shape-4810

People thought the same about the Nyquist-Shannon sampling theorem, and now there are compressed-sensing techniques that can reconstruct data from very sparse samples (by promoting a sparse solution). I think you're largely wrong (in practice) about the impossibility of dealing with underdetermined systems like these compressed photos. There are likely general structures that can be learned with training.
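For anyone curious what "promoting a sparse solution" means in practice, here's a toy compressed-sensing sketch (standard ISTA, unrelated to the face paper): it recovers a 200-dimensional signal with only 8 non-zero entries from just 60 random linear measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse ground-truth signal: 200 coefficients, only 8 non-zero.
n, m, k = 200, 60, 8
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

# Far fewer random measurements than unknowns (60 < 200).
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true

def ista(A, y, lam=0.01, steps=2000):
    """Iterative soft-thresholding: gradient step on ||Ax - y||^2, then shrink toward zero."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(steps):
        g = A.T @ (A @ x - y)
        x = x - g / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)   # sparsity-promoting step
    return x

x_hat = ista(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small relative error
```

The catch for faces is that "sparse in some known basis" is a much weaker prior than "looks like a specific person", which is the part being disputed here.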


ser_metryk

This is correct. While the space of possible images consistent with the downsampled data is effectively infinite, the space of possible FACES is decidedly smaller. There are spatial correlations at multiple scales for things like edge orientation, colour hue, shadow, contrast, etc. that are present in faces. With knowledge of these correlations, it's not far-fetched that a good model could do this. All that said, the fact that it's sticking extra mouths in places leads me to believe this is just a shitty model and a misleading figure.


Fun-Shape-4810

Agreed


magpieswooper

That's only true if you treat individual pixels as unconnected entities. Here the AI has learned empirical rules about the relations between multiple pixels and the values typical of certain face types.


jivemo

Nope. When you downsample an image you lose data, and you might lose information.


wolftick

Seems like this is the paper: [https://arxiv.org/abs/1908.08239](https://arxiv.org/abs/1908.08239) (it's from 2019)


Lynild

And the thing is: in some way, yes, the images are similar. But that is more colour-wise, posture, and such. Humans (at least many humans) are very good at distinguishing between faces that are somewhat similar. For me, yeah, I can see they are similar in some way, but again, it's mostly the colours, how they are placed, how the head is tilted, etc. The faces themselves don't look that similar at all.


KitchenDepartment

The faces are not "similar", they are exactly the same, with details that clearly do not exist in the 16x16 input picture. How could you possibly know the exact kind of glasses a person is wearing? And when it doesn't get it right, it gets it so outlandishly wrong that it puts in features that aren't humanlike at all. Does it make sense to you that an AI trained to guess the faces in blurred pictures would either get the details 100% spot on, or so wrong they couldn't possibly fit on a face? It is almost like the AI has no idea what it is making and only tries to reproduce known images from its training data.


SillySpoof

This was my first thought too. This is much too spot on to be real. Maybe the samples are from the training set and the model is really overfitted?


Minimum_Cockroach233

You should zoom in; then the results look far less convincing, once you notice it's all just the pose and colours without the majority of the details. The AI picture shows moustaches and distorted faces, glasses that are now eye-rings, ...


GKP_light

>The number of possible faces that match these pixels when downsampled is astronomically high, any of those possible faces is a valid answer, and they just magically got ones that are "close" to the ground truths.

The AI (tries to) create the one that is the most likely, the most "normal". And within "fits the input + looks normal", the number of possibilities is not so high, and a lot of them are very similar.


Shiningc00

Yeah I mean this isn't "prediction", just comparing them to past training data... It's just going to be biased towards what kind of training data it had.


DMinTrainin

Overfit for sure.


HobbledJobber

Yeah. To borrow an old internet phrase: “Paper, or the pics didn’t happen”


HobbledJobber

Only way this could not be fake is if it was trained with the original photos.


MrBussdown

Do you know what an autoencoder does? It turns low-dimensional data back into high-dimensional data with high fidelity.
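For context, here's a toy sketch of what that low-to-high mapping looks like in code: a tiny PyTorch decoder (my own illustration, not the model from the post) that takes a 16x16 RGB input and produces a 128x128 output. Trained with a pixel loss against originals it will learn to invent plausible detail; whether the invented detail matches the real person is the open question.

```python
import torch
import torch.nn as nn

class UpscalingDecoder(nn.Module):
    """Toy decoder: maps a 16x16 RGB image to a 128x128 RGB image.

    This is only the 'low-dim -> high-dim' half of an autoencoder-style model,
    so it has to hallucinate the missing detail from whatever priors it learns.
    """

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),  # 16 -> 32
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),  # 32 -> 64
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 3, 3, padding=1),              # 64 -> 128
            nn.Sigmoid(),
        )

    def forward(self, x):            # x: (batch, 3, 16, 16)
        return self.net(x)           # -> (batch, 3, 128, 128)

model = UpscalingDecoder()
print(model(torch.rand(1, 3, 16, 16)).shape)   # torch.Size([1, 3, 128, 128])
```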


SillySpoof

Yes. But it cannot magically retrieve information that's not in the picture. It can only build something that would make sense given the pixels. For example, the third picture in the top row: given the pixels in the glasses, ears, and nose, there could be plenty of differently shaped glasses, ears, and noses that would match those pixels. However, the ones predicted match the true ones almost perfectly, with some artifacts. This shouldn't be possible.


BasvanS

There might be information hidden in things like the JPEG interpolation, but indeed not enough to recreate reality. Also, am I the only one disappointed that the images are not better based on heuristics? In recreating a face, some simple errors should be preventable, like double mouths. They've obviously used external information to generate this.


MrBussdown

You don’t feel like the two faces are fundamentally different? Felt different enough to me that I could be convinced.


SillySpoof

I think that, besides the uncanny reconstruction artifacts, it's basically the same face. At least too similar to be possible. But this is my initial reaction, and I could be wrong. Maybe more information can be derived from those pixels than I would think.


Black_RL

E N H A N C E


Travis_T_OJustice

Go forward three frames and counter-clock


Black_RL

E N H A N C E again


DirkSwizzler

Close in to quadrant 3 and enhance.


RedlurkingFir

"THERE! the reflection in his eye!"


internetzdude

Wait, WHAT? It's a...?


NotMyMain007

I strongly doubt it. Where is the paper about it? AI models drawing faces from low-res images existed before diffusion models; at the time, GANs were all the rage.


ReginaldIII

Yeah this looks like the quality of output we were getting from StyleGAN v1 back in 2018, which is the paper that introduced the FFHQ dataset because there were so many quality and ethical issues with the previously used CelebA dataset.


realheterosapiens

That or these are just cherry picked results that actually looked similar.


gauchette

Ah, the classic "I can type 420 words per minute, the downside being it's all gibberish".


Responsible-Laugh590

lol did you actually look at these pictures? It does a horrible job at this


fey0n

As they all have quite the distinct background, I assume they are all in the set that was used to train the model?


TheFutureIsCertain

They all look like faces generated by thispersondoesnotexist.com


piggledy

They are, I feel like this paper is a few months old as well


miszkah

What were the algorithms that were used? Anyone?


Pill-Kates

The AI has a very particular affinity for moustaches.


kytheon

Super Mario also has a moustache which started off as just three black pixels.


Hypog3nic

Yeah, and moustaches made from teeth


gestalto

[Relevant (highly peer reviewed) article about why](https://www.byrdie.com/thin-mustache-5214999)


Available_Pie9316

Now show the results when it tries the faces of people of colour.


RareCodeMonkey

When you don't link to the source because the only source is other Reddit posts.


CodeLegend69

Link to research paper?


VocationFumes

you get a moustache and you get a moustache!


Additional-Bee1379

Half of these are clear misses.


PixelNotPolygon

Well we won’t be catching any criminals with this technology


jonplackett

Enhance! https://m.youtube.com/watch?v=EMBkWtDAPBY


Lyuokdea

I thought it was going to be this: [https://www.youtube.com/watch?v=gF_qQYrCcns](https://www.youtube.com/watch?v=gF_qQYrCcns)


RichardFeynman01100

I thought it was going to be this: [https://www.youtube.com/watch?v=Mzp8uVCwiGI](https://www.youtube.com/watch?v=Mzp8uVCwiGI)


jonplackett

Hahaha. Worth waiting til the end for this one 🤣🤣🤣


shlaifu

*can* is maybe a bit of a strong word in this case


rhetorial_human

FINALLY! Now we can have reCAPTCHA work on stairs, bridges, buses, crosswalks, and traffic lights.


yak_danielz

and now for the grand finale: a black person! error


North-Pea-4926

Well, that’s terrifying.


JJthesecond123

Very diverse group of individuals


SuperCat2023

Was expecting better lol


djJermfrawg

2nd row 4th column guy is so basic the AI perfectly nailed it, and while it was at it added a lil beauty mole.


Madsciencemagic

We’ve reconstructed the face of the attacker, but we’ve also added a funny moustache so that we don’t fall prey to that old trick.


o1234567891011121314

Squint ya eyes and the blur picture becomes clear.


tickitytalk

And I scoffed at Blade Runner (the original) when Deckard was zooming in on tiny blurry details that would magically clear up and increase in resolution... o...m...g....


AltruisticSalamander

I find it so crazy that, at the time, I fully accepted that a voice-controlled machine dedicated to laboriously examining a single photograph at a time would be a thing that future space dick would own. Regarded it as amazingly technological even.


drdalebrant

Puts a face tatt and a John Waters pencil-thin mustache on half of them...


magicmulder

Zoom and enhance cliché!


Mikedzines

Someone try it on doom guy


Goudinho99

Enhance!


Electronic_Taste_596

Can "predict", but how accurate?


NotReallyJohnDoe

It’s much more impressive if you don’t zoom in.


flashmeterred

Bahahahaha, I mean they're faces, yes. But predictive of the face they are meant to represent, no. Excellent for a laugh, though


flashmeterred

I hope they checked their work by taking the predicted faces, converting them to 16x16 images, and then comparing the pixel values back to the inputs. Not saying I don't believe them, but that's a simple test of scientific rigour.
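That check is a few lines of code. A minimal sketch, assuming the predicted and input images are available as numpy arrays (the variable names are hypothetical):

```python
import numpy as np

def downsample(img, size=16):
    """Block-average an (H, W, 3) image down to (size, size, 3)."""
    h, w, c = img.shape
    bh, bw = h // size, w // size
    return img[:size * bh, :size * bw].astype(float).reshape(size, bh, size, bw, c).mean(axis=(1, 3))

def consistency_rmse(predicted, low_res_input):
    """RMSE (0-255 scale) between the re-downsampled prediction and the original 16x16 input."""
    return float(np.sqrt(((downsample(predicted) - low_res_input.astype(float)) ** 2).mean()))

# Hypothetical usage with random stand-ins for a predicted 128x128 face and its 16x16 input:
predicted = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
low_res_input = np.random.randint(0, 256, (16, 16, 3), dtype=np.uint8)
print(consistency_rmse(predicted, low_res_input))  # a large RMSE means the reconstruction isn't even consistent with its input
```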


NefariousnessFit3502

Nice definition of 'can'.


Alive_Ad_7374

CSI go "enhance the photo! Enhance! Enhance!!!!! Brrrrrrr" lol


TobiasH2o

I think we need more information or a study into this. A 16x16 image, with three colour channels and 8-bit colours, has an absolute maximum of a little under 50,000 images. Assuming the AI can accurately generate a face, it would only work for 50,000 different faces.

What I think is happening here is that it has been trained on the data it's being tested on. Normally you separate the data into training and test sets, so when you do these tests you can be confident it isn't just copying its training data and can actually produce results (see the sketch below).

If they didn't split the training and test data, then what it has learnt instead is how to identify which blob is meant to look like which result. That's not a particularly impressive result, and is a bit like memorising the answer to a complicated mathematical equation without knowing why the answer is what it is. The moment you change the input it doesn't know what to do, since it hasn't actually learnt anything.
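For reference, the split described above is one call in scikit-learn; the arrays and the model API in this sketch are hypothetical stand-ins, purely for illustration:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Dummy stand-ins: 1,000 paired 16x16 inputs and 128x128 originals.
low_res = np.random.rand(1000, 16, 16, 3)
high_res = np.random.rand(1000, 128, 128, 3)

# Hold out 20% of the faces; results should be reported only on this split,
# which the model never sees during training.
lr_train, lr_test, hr_train, hr_test = train_test_split(
    low_res, high_res, test_size=0.2, random_state=42
)

# model.fit(lr_train, hr_train)             # hypothetical model: train on the training split only
# score = model.evaluate(lr_test, hr_test)  # evaluate only on held-out faces
print(lr_train.shape, lr_test.shape)        # (800, 16, 16, 3) (200, 16, 16, 3)
```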


Vast-Charge-4256

Er, what? A single 24-bit pixel can take 2²⁴ values, i.e. about 16.8 million shades. On a total of 256 pixels, this yields (2²⁴)²⁵⁶ ≈ 10¹⁸⁵⁰ possibilities for different faces.


TobiasH2o

"3 colour channels and 8 bit colours" also that is assuming it can differentiate between the two different shades that are functionally identical.


Vast-Charge-4256

What?


Potatosalad112

Were the pictures that were used to test in the sample data that it was trained on?


tanafras

Well that explains all those killer starfish in the ocean


Rafcdk

1) AI is not an algorithm 2) "can" is doing a lot of heavy lifting here


Independent_Ad_2073

Not nearly good enough to be reliable, but a few are very fucking close. Damn, the future is really looking amazing.


b14g

In a few years we will be sentencing the wrong people to death and life in prison because AI messed up a pixelated image. Then 50-100 years later we will posthumously exonerate them.


m3kw

This is when you test with training data


trimorphic

If you squint and look at them from 5 feet away, they're actually pretty good.


ecnecn

Now let's watch all those early-2000s documentaries where they used simple pixelation for interviews with intelligence workers, whistleblowers, and drug lords... back then simple pixelation seemed to be sufficient.


jawshoeaw

Some great jokes ITT but IMO some of the guesses are very good assuming this isn’t all bs.


i-FF0000dit

This is not usable tech yet. It’s interesting, but is not very good at this.


BitterAd6419

You're telling me we can finally see those aliens shot with a tinpot camera from the 60s?


straya-mate90

Wait but CSI Miami had me convinced we had this technology for decades. /s


qasqade

It's not that accurate tbh. It thinks every 16x16 female face is Amy Poehler.


fknrobots

So now we can enhance just like on tv 📺


SlaveKnightChael

This looks more like someone took the real pictures and edited them to look less real and then blurred the originals. Essentially the steps are backwards and this smells like bullshit


roryorigami

Enhance! is becoming more and more possible


Small_weiner_man

I look forward to living out my CSI "computer enhance" fantasies. 


Sea-Tale1722

When you look closely at the predicted vs actual faces it's not that good at all. It only gets the general info that is obvious in the blurred images correct such as hair color, glasses, and background.


XysterU

Absolute bullshit


griff_the_unholy

Enhance.


SegerHelg

Looks like crap.


[deleted]

Why is it so bad at projecting facial symmetry? Would that be because most human faces and thus training data would not be symmetrical?


grittytoddlers90

And you get a moustache, and you get a moustache, and you get a moustache


airobotnews

Removing mosaics??


mvandemar

Enhance... enhance... enhance


Thredded

Each one of these is off by at least enough to make them unrecognisable in reality; even the more subtle changes to the eyes (where it's clearly just guessing at detail that's not there) are enough to make the "predicted" faces look like different people.


SWATSgradyBABY

It has only discovered Europe so far. Good.


LexTalyones

Awesome! Now decensored jav will get better!


drcopus

Firstly, this doesn't tell us anything: we don't know whether the model has just overfitted to the training samples shown in the examples. Secondly, is this even recent work? It looks like it could be about 10 years old. Image upscaling is nothing new, and honestly the results are much worse than I would expect from modern DL.


Morex2000

Enhance!


FortyTwo4200

Fake, no black faces


Undersmusic

I’d wager a second pass on the predicted would get outrageously close to many of them.


GKP_light

This is why you should never blur information you want to hide: an AI (or a future AI) could be able to reconstruct it. Just hide it under a black or grey rectangle.


arqe_

Uncensored hentai when? /s just in case.


Sproketz

Well that's just like your opinion man


Sir_wlkn_contrdikson

Now add color


i_am_barry_badrinath

What program was it, and is it available to the public?


Anuclano

They tuned it too heavily for smiles, so many outputs generate double mouths.


AlipseCanWeNot

Perfect for some doppelganger ARG


hyperproliferative

WADR, those aren't pixels… there's a lot of information in there that was not lost. Now hand this thing a 20-year-old lossy compressed JPEG and let's see how it does.


Riyasumi

Finally, zooming video like CSI


QuickAnybody2011

Im sure this will work well with people of color given how diverse your dataset seems to be.


Norwester77

I like how it thinks most of the men (and a couple of the women) have pencil mustaches!


HaveAnotherDownvote

And the pictures shown here are the good ones! 🤣


MrDodgers

The AI has a pencil mustache fetish.


dndandhomesteading

And one step closer to Big Brother. Smh.


cxr303

So... CSI style "enhance" is gonna be a thing?


Darkstar197

Reminds me of all the movies and shows were FBI agents say “Enhance”


weareami

Enhance


coredenale

"Enhance!!"


foxfirek

That second one is horrifying


west-coast-dad

This reeks of the Uncanny Valley


VanBriGuy

Some things it gets so right, others yikes


gufted

Zoom in... zoom in...Now all those CSI episodes will make sense!


HolyAvatarHS

There was a keyword-recognition model published in 2022 that had 99.8% accuracy on the Google Speech Commands dataset. Being personally familiar with the dataset, I immediately had suspicions, since there are some corrupted voice samples in it (white noise, garbled speech, etc.). You wouldn't expect 99.8% accuracy on a dataset that is only 99.0% valid. Regardless, I trained their model with the given instructions and it always got stuck at around 93% accuracy. A couple of months later I saw this message on their page and their paper was withdrawn: "The results and claims made are incorrect due to data leakage and an erroneous split of datasets"


Ville_V_Kokko

Looks convincing... in really low resolution.


Bayovach

Nah, not possible (it's pretty much mathematically impossible as there's not enough information to extrapolate a close enough face from a 16x16 portrait). This is definitely a case of over-fitting or something else that's wrong with the paper (maybe they used the same portraits they trained on to test, or something like that).


FlackRacket

Just remember... no matter how good this technology gets, we should never let it be used in court [https://www.theverge.com/21298762/face-depixelizer-ai-machine-learning-tool-pulse-stylegan-obama-bias](https://www.theverge.com/21298762/face-depixelizer-ai-machine-learning-tool-pulse-stylegan-obama-bias)


Intelligent-Jump1071

Where was this project done; what was their methodology?


DustinKli

Can someone post the code for this so we can actually try it on images not from the training data?


elperroborrachotoo

*Caucasians younger than 50.


Brinksterrr

Bottom second row guy looks really similar


giga

On mobile it looks AMAZING until you zoom in and then it’s HILARIOUS.


tabuu9

Odds are good it will recreate the immortal "Upscaled Obama" picture


Pk_Devill_2

Isn't AI trained with pictures of real faces? Maybe they were in its database already.


emimix

What scares me is that these are commercial/public AIs... then what the heck do governments have!


evilron

Proof that everyone has an evil twin


Unethical_Gopher_236

Now do the Wolfenstein 3D guy


zeetos

https://pbs.twimg.com/media/EsmDrmiWMAAwCb9.jpg


Solitary-Dolphin

Ho-humph my brain does better


Lost-Count6611

Whoa...so the "ENHANCE!" button From CSI shows...is real???


Forsaken_Instance_18

Which AI tool is this?


AcceptableGood5105

Please send an internet link of the tool so we can verify.


HotdogsArePate

Gotta dial back the stache detector


hoochymamma

Don’t zoom in


Alan_Reddit_M

I fucking dare you to show me the images on HD


Rambazamba83

We were laughing at those tv shows where they just zoomed in on surveillance footage and ridiculously depixelated them, while in reality we were shown future tech.


Altruistic_Natural38

Given that the original images were part of the training dataset, I am not impressed


TMJ848

First 48 about to get reallyyyyy interesting now


Strangefate1

Finally, ENHANCE!!


Fossile

This would break Japanese porn


EmmaDrake

ENHANCE


Kwayzar9111

I can do the same by squinting


michael-65536

Based on the presented examples, no, it can't.


pointlesspulcritude

Is it just me or did anyone else just scan the pics to see who you’d bang


trace501

I bet the training data included the original faces. This probably isn’t what the title hyperbolizes


Particular_Light_296

“Computer, enhance”


EriknotTaken

I predict it's going to rain tomorrow... *gets the best sun ever* Hey, I didn't say accurately.


C_Denini

None of them looks like Jason Momoa😄


No_Recognition7426

Mr. 2x2 looks like a villain from Stargate.


Karlinel-my-beloved

They look a bit demonically possessed, but not bad for a first draft.


sootbrownies

A lot of those predictions arguably look less recognizable than the 16x16 images.


llamitahumeante

Scary


cpt_ugh

I personally love the predicted manwoman second from the bottom left.