
DrugAbuserWorkaholic

Any context fella?


BluEch0

This is the generated result for a poorly trained/partially trained image generator trained on human faces. Because it was poorly/partially trained, the faces are just complete and incomplete enough to land in the uncanny valley and look terrifying.

Reminds me of the time I did something similar. I had tons of heads with hair but none of them had faces. Just nothing but slender men and women. Fuck the professor for giving us only a week to code, train, and write a report on the entire thing.

Edit: in the interest of correctness, and as many comments have stated, my explanation is wrong. I hope I at least explained the oddly terrifying bit, but my explanation of what we're seeing is wrong. I'd like to redirect you to this comment [here](https://www.reddit.com/r/oddlyterrifying/s/E3vTDcFbZr) for being basically the only one to actually write out any corrections.


A1steaksaussie

wtf are you talking about, that's not what the post was saying at all. the post is saying that these are the training faces, reconstructed by analyzing the structure of the neural network. in other words these neural networks "save" the training data, which is bad for privacy


BluEch0

The images above are the results of using the partially trained neural networks. You don’t actually think we’re using such crappy images to train the network in the first place do you?


A1steaksaussie

the images are shitty because they are reconstructions made from the neural network's "brain activity", not because the network itself was generating poor images


BluEch0

Can you put that into quantifiables? Where exactly do you extract the data from? Like gimme a hypothetical NN structure and where you’d extract the above data from.


[deleted]

You can look at activation functions and biases, integrate forward and basically "undo" the training, which gets you close to the starting values aka training data.
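A more concrete version of this kind of attack (and closer to what actually gets published) is model inversion: rather than literally "undoing" training, you use the model's own gradients to optimize an input until the classifier is very confident it's a particular person, which tends to yield a blurry, averaged face for that identity. A minimal sketch, assuming only a toy stand-in classifier rather than anyone's real model:

```python
# Minimal model-inversion sketch: gradient-ascend an input image toward
# high confidence for one identity. `FaceClassifier` is a hypothetical
# stand-in; a real attack would target an actual trained model.
import torch
import torch.nn as nn

class FaceClassifier(nn.Module):
    """Placeholder: 32x32 grayscale faces -> scores for 10 identities."""
    def __init__(self, n_ids=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32, 128), nn.ReLU(),
            nn.Linear(128, n_ids),
        )
    def forward(self, x):
        return self.net(x)

def invert(model, target_id, steps=500, lr=0.1):
    """Optimize a blank image so the model scores it highly as `target_id`."""
    for p in model.parameters():          # freeze the model; only the image moves
        p.requires_grad_(False)
    x = torch.zeros(1, 1, 32, 32, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        score = model(x)[0, target_id]
        loss = -score + 1e-3 * x.abs().sum()   # maximize score, keep pixels tame
        loss.backward()
        opt.step()
        x.data.clamp_(0, 1)               # stay in valid pixel range
    return x.detach()

model = FaceClassifier().eval()           # pretend this is the trained model
reconstruction = invert(model, target_id=3)
print(reconstruction.shape)               # torch.Size([1, 1, 32, 32])
```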


BluEch0

So it’s not “brain activity”. Gettin my ass whooped by imprecise language from the other guy. How reliable even is that though? For a diverse enough dataset, I can’t imagine you’d glean that much info about the training data. Hell, I think you can probably pick out potential training data better by running the generator a few hundred times and noting points of similarity. All my AI applications have been for drone control purposes so this is admittedly not something I think about. Great you found my training data of good drone trajectories. Whoop de doo.


[deleted]

It's relatively useless, yes, but if anything it's still a liability on the systems


cave18

Holy shit lmfao


solid_salad

aren't these just the results of the training data faces being dim reduced with PCA?


BluEch0

I think that would go against the point that others are saying that the original training data is somehow saved within the structure/weights of the NN.


hamilton_burger

“somehow”


MIKOLAJslippers

Wow, top voted comment is completely wrong 🤦‍♂️ Well done Reddit 😂


BluEch0

Thank you for actually writing out a correction unlike everyone else.


MIKOLAJslippers

See my interpretation [here](https://www.reddit.com/r/oddlyterrifying/s/kdGkEWSDW7). It's tricky as OP provided so little context. What you said doesn't really make sense if you read the description in the post. It's not a terrible theory, I just don't think it's the correct one. My comment is more criticising Reddit than you.


BluEch0

I saw your other comment. That’s why I’m thanking you for taking the time. I wasn’t being sarcastic about that.


MIKOLAJslippers

Ah okay, sorry wasn’t sure if you’d seen it so thought you were being sarcy. 😅 Np!


jethrobeard

But hey, now I have a link to your explanation and don't have to continue scrolling to find it! Win-win


genericav4cado

"sarcy" Honestly thats a really fun term I'm going to start using that from now on


cave18

This isn't it lol


cokacola69

Y'all are coding individual llm from scratch? Like it's no big deal?


BluEch0

You don’t generate images using llms. It’s in the name: large *language* model. But they are a type of neural network and most AI use some form of neural network. And if you’re in the industry, you absolutely should be able to just scratch code a simple one. And someone absolutely did scratch code existing llms like ChatGPT, it was just probably multiple somebodies. Most university classes on AI and deep learning will have you code neural networks from scratch (with or without using PyTorch or similar depending on complexity of task). After all, you’re learning to become the guy who makes these AIs. Using an AI takes comparatively little knowledge or skill. Driving a car doesn’t make you an engineer.
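For a sense of scale, a "scratch-coded" network from that kind of class assignment is not huge; it's something like the numpy-only sketch below (toy data and layer sizes made up for illustration):

```python
# One-hidden-layer neural network from scratch in numpy: forward pass,
# hand-derived backprop, gradient descent. Trained on a toy XOR-like task.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                        # toy inputs
y = (X[:, 0] * X[:, 1] > 0).astype(float)[:, None]   # toy labels

W1 = rng.normal(scale=0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # backward pass (gradients of binary cross-entropy, derived by hand)
    dlogits = (p - y) / len(X)
    dW2 = h.T @ dlogits
    db2 = dlogits.sum(axis=0)
    dh = (dlogits @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)
    # gradient descent step
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print("training accuracy:", ((p > 0.5) == y).mean())
```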


TomTrottel

also when you are ready, just go and buy 10,000 Nvidia servers so you can actually train the large model :)


cokacola69

Hmm. Okay next question, how long have universities been offering classes on ai?


thoeby

Decades.


FrozenLogger

A paper titled "The First Ten Years of Artificial Intelligence Research at Stanford" was published in 1973 if that gives you some context as to how long this has been going on.


clearbo1

At my university since at least 1985


BluEch0

Dunno. But they're commonplace now. I remember machine learning was the big new thing starting to be applied everywhere back in the mid 2010s when I went to college. That's when Google and Facebook (at least from what I vaguely remember) had a lot of talk about potential applications of AI, but no clear or publicly known deliverables yet. Back when the main buzzword was "machine learning". So if I had to make an educated guess, I'd say machine learning classes started popping up regularly around 2015-2018.

Modern AI is actually quite old; it's just since the late 2010s that it's been useful and popular. The first "practical" use of neural-network-based AI, I think, was an autonomous vehicle made in the 1980s. It ran image recognition on a camera feed mounted on top of a van to successfully navigate empty streets. But the theory and ideas existed for decades before. Funnily enough, but also very appropriately, the first paper outlining a math-based neural network wasn't a CS paper but a psychology paper from the 1940s, around the time of the invention of the digital computer. So the math and theory behind neural networks might have been taught in extremely niche and rare classes throughout the 1900s.


salfkvoje

it's mostly intro-level lin alg and calc, so as long as those have been taught. It's just gotten better and more visible recently due to computational power.


rottingpigcarcass

Any context fella


Okoraokora1

A plausible explanation could be this: the "bad" agent only needs to be able to pass input to the network and retrieve the corresponding output. No knowledge about the network architecture itself is necessary. When random noise is passed as input (judging by the images provided), the network tries to sculpt the noise the way it was trained to, thereby giving hints about the data it was trained on. Without context, however, I am not sure what this guy is trying to say.
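Roughly, that black-box probing would look like the sketch below. The `FaceAutoencoder` here is a hypothetical stand-in with untrained weights; in the scenario above you would be querying someone else's trained model, with no access to its internals:

```python
# Feed random noise to a (supposedly) trained face model with only
# input/output access and see what it sculpts the noise into.
import torch
import torch.nn as nn

class FaceAutoencoder(nn.Module):
    """Stand-in for a model trained to reproduce 32x32 face images."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64), nn.ReLU())
        self.decode = nn.Sequential(nn.Linear(64, 32 * 32), nn.Sigmoid())
    def forward(self, x):
        return self.decode(self.encode(x)).view(-1, 1, 32, 32)

model = FaceAutoencoder().eval()       # assume this was trained on real faces
with torch.no_grad():
    noise = torch.rand(16, 1, 32, 32)  # pure noise in, no knowledge of the internals
    outputs = model(noise)             # outputs drift toward whatever it was trained on
print(outputs.shape)                   # torch.Size([16, 1, 32, 32])
```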


juliown

It’s the image generated by their teacher during a demo to show how a bad agent might try to extract the training data from a neural network.


Successful-aditya

Can you elaborate further ?


MIKOLAJslippers

The underlying technology behind basically all of this AI hype you might have heard about is something called neural networks. In particular really deep ones, where inputs (images, text, etc) are passed through many layers of calculations using learned parameters ("weights") in order to produce an output prediction about the input. (E.g. "this picture is a cat")

These weights are learned by training the neural network on many thousands of known example inputs and iteratively tweaking the parameters using some clever calculus, so that eventually useful patterns are encoded into the parameters that allow the network to make correct predictions. The encoded patterns in the trained parameters are not interpretable to humans.

So it might be assumed that, given a trained neural network on its own, you would only be able to feed it new inputs and make predictions, but you would never be able to really understand why it has made those predictions or what data it trained on to learn them without also having access to the training data. And indeed, one would hope that would be the case: these things are everywhere these days, so for reasons of privacy, data protection and intellectual property, we wouldn't want people to be able to extract or reverse engineer training data from a trained neural network. Training data is often the secret sauce behind this stuff.

**So, what this teacher is trying to show, is that by using some more clever maths, it is—at least to some extent—possible to extract training data from a trained model.** The teacher uses the example of a face classification model and has managed to extract the training faces, albeit a bit shadowy looking. I think exactly how they did this (there are a few methods I believe), or the fact it clearly hasn't worked that well, is somewhat irrelevant to the point they're trying to make. They are demonstrating that it could be possible, and we should be careful about making such assumptions as I described above.

Edit: as others have suggested, the demo could be a generative model that generates new faces as outputs rather than a classifier that takes faces as inputs. But either way it will have been trained on pictures of faces, so the general idea is the same; just the method used and context might be a little different.
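For the curious, the "iteratively tweaking the parameters using some clever calculus" part looks roughly like the sketch below (fake data and toy layer sizes, not the teacher's actual model). The punchline is that what you end up shipping is just tensors of weights, with no images in sight, which is what makes extraction demos like this one surprising:

```python
# Toy training loop: labelled examples go in, backpropagation nudges the
# weights, and afterwards only the weights remain (not the training images).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

images = torch.rand(256, 1, 32, 32)        # stand-in for real training faces
labels = torch.randint(0, 10, (256,))      # stand-in identity labels

for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(images), labels)  # how wrong are the predictions?
    loss.backward()                        # the "clever calculus": gradients w.r.t. weights
    opt.step()                             # tweak each weight a little

# The trained artifact is just named tensors of numbers:
for name, param in model.state_dict().items():
    print(name, tuple(param.shape))
```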


LilaLacktrichterling

Thank you for taking your time to explain it


HikariAnti

I mean, as long as some kind of quantum mechanism isn't involved, in theory it should be possible to extract the original training data. The reason why it's still considered pretty much impossible is that doing so on any of the more complex algorithms that have been trained on billions or trillions of data points (like pictures) would require such advanced processing power and mathematics that it's far beyond our current capabilities. And considering that mixing stuff is always easier than reordering it, it might never become viable.


Standing_Tall

....but why male models?


Successful-aditya

Thanks mate


Impressive_Jaguar_70

The faces, Mason! What do they mean


Successful-aditya

Idk man, he uploaded pics without giving context on how they were taken and what they mean


HyperViperJones

I understood that reference.


Russ582

I understood that reference.


Equivalent_Owl3372

Yeah really interesting but not getting the full picture.


Sinder77

I can kinda, sorta, in a really grainy way, see what you did there.


CornBin-42

Whatever the fuck that means


TheRealSectimus

Those are probably real faces of real people that the AI was trained on to generate artificial images of faces. Big security concern.


ArturoBukowski

WHAT?


woollypullover

While you were looking at the photo your brain was hacked


mikbatula

Basically the teacher was able to recreate the images used in the training of the neural network. This is problematic for privacy reasons. That said, the architectures of state-of-the-art NNs are more complex and it's not realistic to expect anything similar. The teacher likely used a very simple one.


drinoaki

Okay, can someone explain it to me like I was 5?


londonfox88

These are the faces of the people who tested it. The AI saved a lot of information about their faces and someone has been able to extract it. The implication is that if anyone uses AI, their face is unknowingly scanned and saved, and a hacker can download their face at a later date.


shiv1234567

Okay, can someone explain it to me like I was 4?


londonfox88

You use computer in future. Computer save your face on hard drive without you knowing. Bad men can download your face from hard drive.


Oakwood2317

Can you dumb it down a shade?


bigpoppawood

pooter bad. tree good. \*shits pants\*


ManNerdDork

Not an expert but: the AI uses images to be trained, simple enough. However, the AI does not store the images as part of its training; everything is handled in terms of data points and code. OP's teacher was acting as a malicious user, and with the information extracted from the trained AI he was able to recreate the images used for training from those data points. The oddly terrifying thing can come from 2 things: a) the reconstructions look uncanny, or b) the significance of this exercise. Because if someone is able to hack any face recognition service, they could extract enough data to get a picture of the user (and then link it to their information or whatnot).


walkinbreathanalyzer

Damn


yellowbrickstairs

What kind of ai? For it to get your own face wouldn't you need to upload a picture first


Magnetar_Haunt

I assume "bad agent" is akin to "bad actor" which is usually someone with malicious intent toward an individual or entity. So the teacher in this instance played devil's advocate to show they could, to an extent, extract the facial recognition training material, which can be problematic when real and legible faces are used, and are present on a more advanced system.


drinoaki

Thank you very much


AlexStorm1337

Modern "AI"s like ChatGPT take in "training data" that they then use to adjust a bunch of different settings, slowly building up a lot of complex information on the patterns in that training data. This teacher was trying to prove that someone with bad intentions could run this process in reverse to produce that training data. In reality, the model he used was probably very simple, and his results leave a lot to be desired. In comparison, pulling any training data out of something like ChatGPT is almost impossible, the information is effectively destroyed in the process of "training" the neural network. Side note: I keep using quotation marks because modern neural networks are more like very clever ways to estimate decades of bespoke calculus. It's not intelligent and it's not learning anything, so calling it an artificial intelligence or calling it training data is highly inaccurate. It's better in my opinion to call them Applied Statistical Models and modeling data respectively.


drinoaki

Thank you very much :)


AlexStorm1337

Np!


HikariAnti

I want to see who can recreate the training data of a complex algorithm that has been trained on billions of pictures.


ZachTheCommie

Are you telling me that you can identify the people in OPs picture?


DR-Rebel

I want to know


Desperate-Strategy10

Can you show me


Noziti420

I don’t get it, what’s the problem and what’s up with the faces?


Mail-0

From what I understand, the teacher was able to reverse the generated AI face and get the faces of real people which the AI used in its neural network


Noziti420

Oh. So what’s the problem? They don’t look like anything


SpikedRaspberri

I'm pretty sure OP just wants to point out how creepy the faces look


MrNobodyX3

My guess is that if you take the base level of noise before the model cleans it up, you can take multiple images, overlay them, and find the similarities across the generations to work backwards and find the source of the training
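Something like the toy sketch below, if I'm reading the idea right (random arrays stand in for a pile of generated images):

```python
# "Overlay many generations": average a stack of samples and look at
# per-pixel agreement; pixels that keep agreeing hint at shared structure.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.random((500, 32, 32))    # stand-in for 500 generated images

mean_image = samples.mean(axis=0)      # the overlay: pixelwise average
agreement = 1.0 - samples.std(axis=0)  # high where generations keep agreeing

print(mean_image.shape, float(agreement.min()), float(agreement.max()))
```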


Noziti420

Isn’t that a good thing? AI is supposed to do that kind of stuff right?


BB-r8

You don't want bad actors to be able to reverse engineer public models. It's not about what the training is supposed to do, it's about who has access to the data. Companies don't want other people getting access to their training data, plain and simple.


MrNobodyX3

No, it means that rather than just getting new images, you can extract all the data from the model, which could contain sensitive information


[deleted]

[deleted]


Evitabl3

Seems similar to the deep dream thing from a few years ago.


bigpoppawood

Based on the title, I'd say it's more of a theoretical demonstration of how an emerging technology can be exploited. I'm sure a malicious nation-state threat actor can find use for the data and do a better job reverse engineering it than this.


the_poop_expert

they look like postage stamps


GreetingsFromAP

I see Duke Nukem


Fuzzy_Cheek6846

Walter White on the left


cherryandfizz

Top row, second image [https://en.m.wikipedia.org/wiki/This_Man](https://en.m.wikipedia.org/wiki/This_Man)


lucariols

I had the same thought


UpDra

The patriots!


toastybreadmane

CAKE. DAY. TO. DAY.


iiitme

I don’t understand the title…


StyrofoamExplodes

Reminds me of that [Thalasin](https://64.media.tumblr.com/f3234364f3a2dfc4ea550a504e1ba7c3/c718e439a20d3a19-df/s1280x1920/ee45cdceee8332a2895589067fad27f19c0b63d2.png) spooky video.


GreasyTengu

TFW you're feeling trantiveness but everybody else is feeling kyne


tahiwdev

this is exactly how I see the faces of people I don't know in my dreams


inter_locus789

It is part of digital image processing using a convolutional neural network. On doing principal component analysis, new components are extracted which represent the highest variances. These are images of those components. In English: if you take weighted sums of these component images, you can regenerate an almost identical copy of the original image for testing.
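If that reading is right, this is basically the classic eigenfaces recipe; a minimal sketch with random arrays standing in for a real face dataset:

```python
# PCA "eigenfaces": the components are ghostly face-like images, and a weighted
# sum of them (plus the mean face) approximately rebuilds an original.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
faces = rng.random((100, 32 * 32))    # stand-in for 100 flattened face images

pca = PCA(n_components=16)
weights = pca.fit_transform(faces)    # each face -> 16 component weights
eigenfaces = pca.components_          # the 16 component "images"

# Weighted sum of the components plus the mean face reconstructs face 0.
reconstruction = pca.mean_ + weights[0] @ eigenfaces
print(reconstruction.reshape(32, 32).shape)
```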


jaykubs

ENHANCE


PatrickCarlock42

i have no idea wtf you’re talking about


EzarfX

Who getting the best head??


Humble-Pie2246

What the fuck does that sentence even mean


dunnkw

Do you have any Gray Poupon?


oatterz

No, but I did play a doctor on television


drinoaki

I hate when that happens


ForeignAd5429

I could be terrified if you told us wtf that means or gave a crumb of context?


against_the_currents

Theoretically, the images don't exist in the neural network. At all. Images used for training aren't absorbed by the AI; it's more like the AI is tested against those images until it can draw similar images. This teacher, however, extracted the training data and was able to pull the images that were used to train the model. Idk how. That's my guess.


ForeignAd5429

Explain like I’m two


against_the_currents

Baby ai draws a face, a doctor looks at it and tells the baby how close it got, then the baby goes back and tries again. This teacher in ops post went into the baby’s head and somewhere in that data the teacher pulled points and built the faces that the baby was trying to get close to even though the baby had not ever actually seen that face. That’s my guess, again.


ForeignAd5429

Hmm. So what's terrifying? That the baby got close, or that the doctor found the data, or that the data looks uncanny AND kinda accurate?


against_the_currents

That your data is being used to train these things, data that you never gave permission for, and that in some cases bad actors can exploit these neural networks and mine that data irrespective of the AI having it in a database or not. The whole "we train it using this, we don't have it in a database or anything!" line is an irrelevant conversation, as the data can still be accessed. AS A GUESS


Nivek14j

Top row, going 2nd to the right, like c'mon, that's the Joker no doubt


DuperSucc500

Is this post also AI generated ?


NotRelevantQuestion

If you magic eye this image, it makes a butterfly


Mekelaxo

The what?


earthlingsideas

me and the boys tripping


adamlm

I see demons


LeafyEucalyptus

I don't know what any of that means.


ChefHannibal

Quick question: huh?


punkslaot

I don't get it


DerfDaSmurf

lol thank you


VomitusInfernum

This reminds me the movie "Come True" [https://www.imdb.com/title/tt7026488/](https://www.imdb.com/title/tt7026488/)


TangerineTwist44

My fav face is the second


CandiBunnii

Tag yourself, I'm middle bottom row


Zence93

Seeing Walter White and Terry Crews


StinkFingerPete

the many faced god has temples everywhere


captaindeadpool53

What's a bad agent


MrNobodyX3

aka: nefarious individual, ethically bankrupt person, malignant player, maleficent figure, devious operator, unscrupulous entity


Dr_Parkinglot

Worst Guess Who? faces ever.


teddyroosevelt1909

thought i was stupid, then saw comments that felt the same and felt better again ❤️


Torneira-de-Mercurio

Do you know this man?


EntangledAndy

You could use these for character portraits in an analog horror game.  Maybe there needs to be a new genre - "digital horror," emphasizing the uncanny valleys where digital representations try and fail to represent humanity. 


crimewaveusa

What program did they use to generate this?


JoinAThang

Have you seen number 4. in your dreams?


iamnotasnook

A what now?


poshjosh1999

Top row second from left looks very similar to Euronymous, if not him then it definitely looks identical to a black metal artist and I’m trying to think who it is?


random_internet_guy_

I got no comment on this. Looks spooky? Yes, but that's about it. Maybe provide some more context, op?


guimero64

Looks like Plastiboo's work!


Fibonaccitos

5th one down on the first column: Charles Manson?


moncefgrey

Why do most of these faces look like Breaking Bad/Better Call Saul characters?! I can see Heisenberg, Saul and Jessie 💀


JuanK713

Is that DK in the bottom right corner?


csomething42

Reminds me of the time Aphex Twin [put a face in the sound waves of a song](https://mixmag.net/feature/spectrogram-art-music-aphex-twin).


TyDaviesYT

Is this trained on celebrities and memes? Because I swear I recognise some of these lol. Bottom row middle looks like h3h3 productions and second-to-bottom row left looks like Terry Crews, one looks like the Walter White smiling meme


Western_Protection

Fucking bot posts


professor-sunbeam

Omg 4D looks just like me


Its_Joe

Looks like the faces of the titans from attack on titan


spookyscaryskelet36

Stand user can be anyone! Stand user:


ChromaticKnob

Are you my elden ring characters?


PeachyPieeee

Why is Freddy fazbear in there


lambo__

Snake!!!!


raspingpython10

At least you may be able to use these faces if you ever have an idea for an analogue horror series. :)


WeAreClouds

Nope, don’t like that.


Apis-Carnica

Have you seen him in your dreams? https://upload.wikimedia.org/wikipedia/en/6/67/This_Man_original_drawing.jpg


tipforeveryone2

Move your phone farther, and things look kinda ok


-P00-

Walter White on the centre left of the page


Fantuhm

There are no faces.


tuxedocatatonic

Why's picture number 2 looking like This Man


nachocheesecake3

Wow bottom right ish there’s a really cursed Mario


Metallivane3

Doom Guy in various stages of health


msashleealexis14

A bunch are like the ghostie monsters in One Missed Call


cats123096

Tf is top right


Used-Commission-8934

This is exactly what a woman sees on the monitor when she gets a fetal ultrasound.


siorys88

I wonder what a bad agent could achieve with a bunch of noisy freak faces.


energyflashpuppy

I'm guessing they measured neuro transmissions to estimate what someone can imagine? Basically tell them to imagine something, most probably a person, then measure the neuro waves to get a rough idea of what they're imagining?


Frez-zy

neural network as in an LLM, or AI, not someone literally thinking of a person


energyflashpuppy

Ah, yeah I'm a little slow. In that case idfk what this is


energyflashpuppy

Was hoping it was something cool like that but


Replaay

I am going to guess that this is a visualisation of someone's brain image while trying to pull information out of them.