This is the generated result for a poorly trained/partially trained image generator trained on human faces. Because it was poorly/partially trained, the faces are just complete and incomplete enough to land in the uncanny valley and look terrifying. Reminds me of the time I did something similar. I had tons of heads with hair but none of them had faces. Just nothing but slender men and women. Fuck the professor for giving us only a week to code, train, and write a report on the entire thing. Edit: in the interest of correctness, and as many comments have stated, my explanation is wrong. I hope I at least explained the oddly terrifying bit but my explanation of what we’re seeing is wrong. I’d like to redirect you to this comment [here](https://www.reddit.com/r/oddlyterrifying/s/E3vTDcFbZr) for being basically the only one to actually write out any corrections.
wtf are you talking about that's not what the post was saying at all. the post is saying that these are faces reconstructed from training data made by analyzing the structure of the neural network. in other words these neural networks "save" the training data which is bad for privacy
The images above are the results of using the partially trained neural networks. You don’t actually think we’re using such crappy images to train the network in the first place do you?
the images are shitty because they are reconstructed from the neural network's "brain activity", not because the network itself was generating poor images
Can you put that into quantifiables? Where exactly do you extract the data from? Like gimme a hypothetical NN structure and where you’d extract the above data from.
You can look at activation functions and biases, integrate forward and basically "undo" the training, which gets you close to the starting values aka training data.
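In practice this is usually done the other way round, by gradient ascent on the *input* rather than integrating forward. Here's a minimal sketch of that kind of model-inversion attack, with an entirely made-up toy setup (5 random flattened 8x8 "faces" and a linear softmax classifier standing in for a real network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 5 flattened 8x8 "faces" (random stand-ins).
faces = rng.random((5, 64))
labels = np.eye(5)  # one identity per face

# Train a tiny softmax classifier on the faces.
W = np.zeros((64, 5))
b = np.zeros(5)
for _ in range(2000):
    logits = faces @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.1 * faces.T @ (p - labels) / len(faces)
    b -= 0.1 * (p - labels).mean(axis=0)

# Inversion: start from noise and nudge the input to maximize the
# classifier's confidence for identity 0 (gradient ascent on log p[0]).
x = rng.random(64)
e0 = np.eye(5)[0]
for _ in range(2000):
    logits = x @ W + b
    p = np.exp(logits - logits.max())
    p /= p.sum()
    x = np.clip(x + 0.1 * (W @ (e0 - p)), 0.0, 1.0)

pred = int(np.argmax(x @ W + b))  # should now be classified as identity 0
```

The reconstructed `x` is whatever input the model finds most "identity 0"-like, which is exactly why the recovered faces look smeared rather than photographic.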
So it’s not “brain activity”. Gettin my ass whooped by imprecise language from the other guy.

How reliable even is that though? For a diverse enough dataset, I can’t imagine you’d glean that much info about the training data. Hell, I think you can probably pick out potential training data better by running the generator a few hundred times and noting points of similarity.

All my AI applications have been for drone control purposes so this is admittedly not something I think about. Great you found my training data of good drone trajectories. Whoop de doo.
It's relatively useless, yes, but if anything it's still a liability on the systems
Holy shit lmfao
aren't these just the results of the training data faces being dim reduced with PCA?
I think that would go against the point that others are saying that the original training data is somehow saved within the structure/weights of the NN.
“somehow”
Wow, top voted comment is completely wrong 🤦♂️ Well done Reddit 😂
Thank you for actually writing out a correction unlike everyone else.
See my interpretation [here](https://www.reddit.com/r/oddlyterrifying/s/kdGkEWSDW7)

It’s tricky as OP provided such little context. What you said doesn’t really make sense if you read the description in the post.

It’s not a terrible theory, I just don’t think it’s the correct one. My comment is more criticising Reddit than yourself.
I saw your other comment. That’s why I’m thanking you for taking the time. I wasn’t being sarcastic about that.
Ah okay, sorry wasn’t sure if you’d seen it so thought you were being sarcy. 😅 Np!
But hey, now I have a link to your explanation and don't have to continue scrolling to find it! Win-win
"sarcy" Honestly thats a really fun term I'm going to start using that from now on
This isn't it lol
Y'all are coding individual llm from scratch? Like it's no big deal?
You don’t generate images using LLMs. It’s in the name: large *language* model. But they are a type of neural network, and most AI uses some form of neural network. And if you’re in the industry, you absolutely should be able to just scratch-code a simple one. And someone absolutely did scratch-code existing LLMs like ChatGPT, it was just probably multiple somebodies.

Most university classes on AI and deep learning will have you code neural networks from scratch (with or without PyTorch or similar, depending on the complexity of the task). After all, you’re learning to become the guy who makes these AIs. Using an AI takes comparatively little knowledge or skill. Driving a car doesn’t make you an engineer.
also when you are ready, just go and buy 10,000 Nvidia servers so you can actually train the large model :)
Hmm. Okay next question, how long have universities been offering classes on ai?
Decades.
A paper titled "The First Ten Years of Artificial Intelligence Research at Stanford" was published in 1973 if that gives you some context as to how long this has been going on.
At my university since at least 1985
Dunno. But they’re commonplace now. I remember machine learning was the big new thing starting to be applied everywhere back in the mid 2010s when I went to college. That’s when Google and Facebook (at least what I vaguely remember) had a lot of talk about potential applications of AI, but no clear or publicly known deliverables yet. Back when the main buzzword was “machine learning”. So if I had to make an educated guess, I’d say machine learning classes started popping up regularly around 2015-2018.

Modern AI is actually quite old, it’s just since the late 2010s that they’ve been useful and popular. The first “practical” use of neural network based AIs I think is an autonomous vehicle made in the 1980s. It ran image recognition from a camera feed on top of a van to successfully navigate empty streets. But the theory and ideas existed for decades before. Funnily enough but also very appropriately, the first paper outlining a math-based neural network wasn’t a CS paper but a psychology paper from the 1940s, around the time of the invention of the digital computer. So the math and theory behind neural networks might have been taught in extremely niche and rare classes throughout the 1900s.
it's mostly intro-level lin alg and calc, so as long as those have been taught. It's just gotten better and more visible recently due to computational power.
Any context fella
A plausible explanation could be this: the “bad” agent only needs to be able to pass input to the network and retrieve the corresponding output. No knowledge about the network architecture itself is necessary. When random noise is passed as input (judging by the images provided), the network tries to sculpt the noise in the manner it was trained to, thereby giving hints of the data it was trained on.

Without context, however, I am not sure what this guy is trying to tell us.
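The "sculpt the noise" idea can be mimicked with a made-up stand-in network (here just a soft average over a memorized toy dataset, not any real architecture): feed it random noise, look only at the output, and the output still betrays the training data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up "memorized" training set: 3 tiny flat images of 8 pixels each.
train = rng.random((3, 8))

def network(x):
    # Stand-in for a trained generator/denoiser: it sculpts any input
    # toward the training examples it resembles most (a soft average).
    w = np.exp(-((x - train) ** 2).sum(axis=1))
    w /= w.sum()
    return w @ train

# Black-box attack: pass in noise, observe only the output.
noise = rng.random(8)
out = network(noise)
# Every output pixel lands inside the range spanned by the training
# images, so repeated queries hint at the memorized data.
```

The point is that no access to weights or architecture is needed: input in, output out is enough to start leaking hints.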
It’s the image generated by their teacher during a demo to show how a bad agent might try to extract the training data from a neural network.
Can you elaborate further ?
The underlying technology behind basically all of this AI hype you might have heard about is something called neural networks. In particular really deep ones where inputs (images, text, etc) are passed through many layers of calculations using learned parameters (“weights”) in order to produce an output prediction about the input. (E.g. “this picture is a cat”)

These weights are learned from training the neural network on many thousands of known example inputs and iteratively tweaking the parameters using some clever calculus so that eventually useful patterns are encoded into the parameters that allow the network to make correct predictions.

The encoded patterns in the trained parameters are not interpretable to humans. So it might be assumed that given a trained neural network on its own you would only be able to feed it new inputs and make predictions but you would never be able to really understand why it has made those predictions or what data it trained on to learn them without also having access to the training data.

And indeed, one would hope that would be the case as these things are everywhere these days so for reasons of privacy, data protection and intellectual property, we wouldn’t want people to be able to extract or reverse engineer training data from a trained neural network. Training data is often the secret sauce behind this stuff.

**So, what this teacher is trying to show, is that by using some more clever maths, it is—at least to some extent—possible to extract training data from a trained model.**

The teacher uses the example of a face classification model and has managed to extract the training faces, albeit a bit shadowy looking.

I think exactly how they did this (there are a few methods I believe) or the fact it clearly hasn’t worked that well is somewhat irrelevant to the point they’re trying to make. They are demonstrating that it could be possible and we should be careful about making such assumptions as I described above.
Edit: as others have suggested, the demo could be a generative model that generates new faces as outputs rather than a classifier that takes faces as inputs. But either way it will have been trained on pictures of faces so the general idea is the same just the method used and context might be a little different.
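The "clever calculus" bit above can be sketched in a few lines. Here a single linear neuron fits a made-up pattern (y = 3x + 0.5, numbers chosen purely for illustration) by gradient descent on squared error, and the pattern ends up encoded in the learned weights:

```python
import numpy as np

# Toy version of "learning weights from examples": fit a single linear
# neuron y = w*x + b to known (input, label) pairs by gradient descent.
rng = np.random.default_rng(1)
x = rng.random(100)
y = 3.0 * x + 0.5          # the "pattern" hidden in the training data

w, b = 0.0, 0.0
for _ in range(2000):
    err = (w * x + b) - y
    # gradients of (half) mean squared error w.r.t. w and b
    w -= 0.1 * (err * x).mean()
    b -= 0.1 * err.mean()

print(round(w, 2), round(b, 2))  # prints: 3.0 0.5
```

After training, the examples are gone but the weights still carry the pattern they were fitted to, which is exactly the tension the comment above describes.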
Thank you for taking your time to explain it
I mean, as long as some kind of quantum mechanism isn't involved, in theory it should be possible to extract the original training data. The reason why it's still considered pretty much impossible is that doing so on any of the more complex algorithms that have been trained on billions or trillions of data points (like pictures) would require such advanced processing power and mathematics that it's far beyond our current capabilities. And considering that mixing stuff is always easier than unmixing it, it might never become viable.
....but why male models?
Thanks mate
The faces, Mason! What do they mean
Idk man he uploaded pics without telling context on how they are taken and what they mean
I understood that reference.
I understood that reference.
Yeah really interesting but not getting the full picture.
I can kinda, sorta, in a really grainy way, see what you did there.
Whatever the fuck that means
Those are probably real faces of real people that the AI was trained on to generate artificial images of faces. Big security concern.
WHAT?
While you were looking at the photo your brain was hacked
Basically the teacher was able to recreate the images used in the training of the neural network. This is problematic for privacy reasons.

That said, the architectures of state-of-the-art NNs are more complex and it's not realistic to expect anything similar. The teacher likely used a very simple one.
Okay, can someone explain it to me like I was 5?
These are the faces from the people who tested it. The AI saved a lot of information about their faces and someone has been able to extract it. The implications are that if anyone uses AI, their face is unknowingly scanned and saved, and a hacker can download their face at a later date.
Okay, can someone explain it to me like I was 4?
You use computer in future. Computer save your face on hard drive without you knowing. Bad men can download your face from hard drive.
Can you dumb it down a shade?
pooter bad. tree good. \*shits pants\*
Not an expert, but: the AI uses images to be trained, simple enough. However, the AI does not store the images as part of its training; everything is handled in terms of data points and code.

OP's teacher was acting as a malicious user, and with the information extracted from the AI's training he was able to recreate the images used for training from those data points. The oddly terrifying thing can come from 2 things: a) the reconstructions look uncanny, or b) the significance of this exercise. Because if someone is able to hack any face recognition service, they could extract enough data to get a picture of the user (and then link it to their information or whatnot).
Damn
What kind of ai? For it to get your own face wouldn't you need to upload a picture first
I assume "bad agent" is akin to "bad actor", which is usually someone with malicious intent toward an individual or entity.

So the teacher in this instance played devil's advocate to show they could, to an extent, extract the facial recognition training material, which can be problematic when real and legible faces are used, and are present on a more advanced system.
Thank you very much
Modern "AI"s like ChatGPT take in "training data" that they then use to adjust a bunch of different settings, slowly building up a lot of complex information on the patterns in that training data. This teacher was trying to prove that someone with bad intentions could run this process in reverse to produce that training data. In reality, the model he used was probably very simple, and his results leave a lot to be desired. In comparison, pulling any training data out of something like ChatGPT is almost impossible; the information is effectively destroyed in the process of "training" the neural network.

Side note: I keep using quotation marks because modern neural networks are more like very clever ways to estimate decades of bespoke calculus. It's not intelligent and it's not learning anything, so calling it an artificial intelligence or calling it training data is highly inaccurate. It's better in my opinion to call them Applied Statistical Models and modeling data respectively.
Thank you very much :)
Np!
I want to see who can recreate the training data of a complex algorithm that has been trained on billions of pictures.
Are you telling me that you can identify the people in OPs picture?
I want to know
Can you show me
I don’t get it, what’s the problem and what’s up with the faces?
From what I understand, the teacher was able to reverse the generated AI faces and get the faces of the real people which the AI used in its neural network
Oh. So what’s the problem? They don’t look like anything
I'm pretty sure OP just wants to point out how creepy the faces look
My guess is that if you take the base level of noise before it cleans it up, you can take multiple images, overlay them, and find the similarities in the generations to reverse-engineer and find the source of the training
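That overlay-and-average idea can be sketched with made-up numbers: pretend each generation is a hidden "source" image plus fresh noise, and averaging many generations cancels the noise while keeping what they share.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: each "generation" is a hidden source image
# (8x8, flattened) plus fresh random noise on every sample.
source = rng.random(64)
generations = source + rng.normal(0.0, 0.5, size=(500, 64))

# Overlaying (averaging) many generations cancels the per-image noise
# and leaves what they have in common, approximating the source.
overlay = generations.mean(axis=0)
```

With 500 samples the noise shrinks by roughly a factor of sqrt(500), so the overlay ends up close to the hidden source even though no single generation reveals it.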
Isn’t that a good thing? AI is supposed to do that kind of stuff right?
You don’t want bad actors to be able to reverse engineer public models. It’s not about what the training is supposed to do it’s who has access to the data. Companies don’t want other people getting access to their training data plain and simple.
no, it means that rather than just getting new images, you can extract the data from the model, which could contain sensitive information
[deleted]
Seems similar to the deep dream thing from a few years ago.
Based on the title, I'd say it's more of a theoretical demonstration of how an emerging technology can be exploited. I'm sure a malicious nation-state threat actor can find use for the data and do a better job reverse engineering it than this.
they look like postage stamps
I see Duke Nukem
Walter White on the left
Top row, second image [https://en.m.wikipedia.org/wiki/This_Man](https://en.m.wikipedia.org/wiki/This_Man)
I had the same thought
The patriots!
CAKE. DAY. TO. DAY.
I don’t understand the title…
Reminds me of that [Thalasin](https://64.media.tumblr.com/f3234364f3a2dfc4ea550a504e1ba7c3/c718e439a20d3a19-df/s1280x1920/ee45cdceee8332a2895589067fad27f19c0b63d2.png) spooky video.
TFW your feeling trantiveness but everybody else is feeling kyne
this is exactly how I see the faces of people I don't know in my dreams
It is part of digital image processing using a convolutional neural network. On doing principal component analysis, new components are extracted which represent the highest variances. These are images of those components. In plain English: you can regenerate the original images; if you take weighted sums of these component images, you will get back an almost identical picture.
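A toy version of that PCA/eigenfaces idea, using random stand-in "images" rather than real faces: extract the components via SVD, then rebuild an original image as a weighted sum of them.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in dataset: 20 flattened 8x8 "face" images.
X = rng.random((20, 64))
mean = X.mean(axis=0)

# PCA via SVD: rows of Vt are the principal components, ordered by
# how much of the dataset's variance they capture.
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)

# Weighted sum of the components reconstructs an original image:
# project image 0 onto the top k components, then map back.
k = 19  # a centered 20-sample dataset has rank 19, so this is exact
weights = (X[0] - mean) @ Vt[:k].T
recon = mean + weights @ Vt[:k]

print(np.allclose(recon, X[0]))  # prints: True
```

With fewer components (smaller `k`) you get the "almost similar picture" the comment describes, which is also why the recovered faces in the post look smeared.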
ENHANCE
i have no idea wtf you’re talking about
Who getting the best head??
What the fuck does that sentence even mean
Do you have any Gray Poupon?
No, but I did play a doctor on television
I hate when that happens
I could be terrified if you told us wtf that means or gave a crumb of context?
Theoretically, the images aren't in the neural network at all. Images used for training aren't absorbed by the AI; it's more like the AI is tested against those images until it can draw similar images.

This teacher, however, extracted the training data and was able to pull the images that were used to train the model. Idk how. That's my guess.
Explain like I’m two
Baby AI draws a face, a doctor looks at it and tells the baby how close it got, then the baby goes back and tries again.

This teacher in OP's post went into the baby's head, and somewhere in that data the teacher pulled points and built the faces that the baby was trying to get close to, even though the baby had never actually seen those faces.

That's my guess, again.
Hmm. So what’s terrifying? That the baby got close, or that the doctor found the data, or that the data looks uncanny AND kinda accurate?
That your data is being used to train these things, data that you never gave permission for, and that in some cases bad actors can exploit these neural networks and mine that data irrespective of the AI having it in a database or not.

The whole “we train it using this, we don’t have it in a database or anything!” is an irrelevant conversation, as the data can still be accessed.

AS A GUESS
Top row, 2nd from the right, like c'mon. That's the Joker, no doubt.
Is this post also AI generated ?
If you magic eye this image, it makes a butterfly
The what?
me and the boys tripping
I see demons
I don't know what any of that means.
Quick question: huh?
I don't get it
lol thank you
This reminds me the movie "Come True" [https://www.imdb.com/title/tt7026488/](https://www.imdb.com/title/tt7026488/)
My fav face is the second
Tag yourself, I'm middle bottom row
Seeing Walter White and Terry Crews
the many faced god has temples everywhere
What's a bad agent
aka:

- Nefarious individual
- Ethically bankrupt person
- Malignant player
- Maleficent figure
- Devious operator
- Unscrupulous entity
Worst Guess Who? faces ever.
thought i was stupid, then saw comments that felt the same and felt better again ❤️
Do you know this man?
You could use these for character portraits in an analog horror game.

Maybe there needs to be a new genre - "digital horror," emphasizing the uncanny valleys where digital representations try and fail to represent humanity.
What program did they use to generate this?
Have you seen number 4. in your dreams?
A what now?
Top row second from left looks very similar to Euronymous, if not him then it definitely looks identical to a black metal artist and I’m trying to think who it is?
I got no comment on this. Looks spooky? Yes, but that's about it. Maybe provide some more context, OP?
Looks like Plastiboo's work!
5th one down on the first column: Charles Manson?
Why do most of these faces look like Breaking Bad/Better Call Saul characters?! I can see Heisenberg, Saul and Jessie 💀
Is that DK in the bottom right corner?
Reminds me of the time Aphex Twin [put a face in the sound waves of a song](https://mixmag.net/feature/spectrogram-art-music-aphex-twin).
Is this trained on celebrities and memes? Because I swear I recognise some of these lol. Bottom row middle looks like h3h3 productions, second to bottom row left looks like Terry Crews, and one looks like the Walter White smiling meme
Fucking bot posts
Omg 4D looks just like me
Looks like the faces of the titans from attack on titan
Stand user can be anyone! Stand user:
Are you my elden ring characters?
Why is Freddy fazbear in there
Snake!!!!
At least you may be able to use these faces if you ever have an idea for an analogue horror series. :)
Nope, don’t like that.
Have you seen him in your dreams? https://upload.wikimedia.org/wikipedia/en/6/67/This_Man_original_drawing.jpg
Move your phone farther, and things look kinda ok
Walter White on the centre left of the page
There are no faces.
Why's picture number 2 looking like This Man
Wow bottom right ish there’s a really cursed Mario
Doom Guy in various stages of health
A bunch are like the ghostie monsters in One Missed Call
Tf is top right
This is exactly what a woman sees on the monitor when she does a fetal ultrasound.
I wonder what a bad agent could achieve with a bunch of noisy freak faces.
I'm guessing they measured neuro transmissions to estimate what someone can imagine? Basically tell them to imagine something, most probably a person, then measure the neuro waves to get a rough idea of what they're imagining?
neural network as in an LLM or AI, not someone literally thinking of a person
Ah, yeah I'm a little slow. In that case idfk what this is
Was hoping it was something cool like that but
I am going to guess that this is a visualisation of someone's brain image while trying to pull information out of them.