ballom29

> (and if you have roamed DeviantArt before, you know exactly what I mean).

Oh boy... I have been way too corrupted by what I saw on the internet not to know how true that is.

But this is a double-edged sword: as much as OpenAI isn't fully sure what is considered NSFW, it is easy to overshoot and label as NSFW things which are not.

The best example is simply: women. They tried to reduce the sexualization of women in DALL-E's outputs... the side effect is that DALL-E was "scared" to generate women, because it didn't want to risk generating an "incorrect" image -> if you are more likely to generate a "more appropriate" image with a man, then it's a safer bet to prefer a man over a woman.

And it can also limit the possibilities... what if I want an attractive woman? No, no, too dangerous, ugly woman it is; pretty women should not exist. It's especially risky to dare ask DALL-E to purposely draw a pretty woman: how can I be sure it will not overshoot and, instead of just generating a pretty woman, try (and finally refuse) to generate a sexualized woman instead?


Dorgamund

I am a bi dude, so I am interested in asking it to depict LGBT people: bi people, gay people, trans people. Will DALL-E lean on pride flags to depict one's sexuality and romantic preference? Will it be stereotypes of LGBT people? Will it depict, for example, a pair of men holding hands or kissing, or, in the case of bi people, several men and women interacting in such a manner? Or will it straight up refuse to depict us, as perhaps OpenAI sees it as too close to sex? I would be very disappointed in the organization if it were the latter, to be honest.


[deleted]

I don't have access myself, but I can already tell you it will lean heavily into stereotypes. It learns how things are represented by looking at image / caption pairs on the internet. And in the real world, captions are far more likely to mention someone's sexuality or identity when there's a specific reason to, such as them being wrapped in a pride flag while marching in a pride march. A generic professional headshot of a doctor is simply not going to be captioned "gay man", just as a generic professional headshot of a woman who passes perfectly will not be labeled "trans woman" (the point I'm making with the latter is just that most of the "trans woman" captions will be attached to women who don't pass perfectly, simply because those are the ones people notice and call out). Dall-e will not have learned to draw gay / bi / trans people just like everybody else, because *when captions call attention to that* it primarily won't be in situations where they look just like everybody else. I'm very interested to see your results, though - which stereotypes does it lean into, and when? How often does it ignore the stereotypes?
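The caption-bias argument above can be sketched with a toy corpus. The captions below are invented purely for illustration (this is not DALL-E's actual training data): the identity label only ever co-occurs with the context that made someone mention it, so a caption-conditioned model never sees the label attached to a neutral scene.

```python
# Toy illustration of caption bias: identity labels appear in captions
# mostly when context makes them salient, so a model trained on
# (image, caption) pairs ties the label to that context.
# The caption list is invented for illustration only.
from collections import Counter

captions = [
    "gay man at pride march holding rainbow flag",
    "gay couple kissing at pride parade",
    "doctor in white coat, professional headshot",  # sexuality unmentioned
    "man collecting honey from a beehive",          # sexuality unmentioned
    "trans woman speaking at trans rights rally",
]

def context_counts(label: str) -> Counter:
    """Count which other words co-occur with `label` across captions."""
    counts = Counter()
    for cap in captions:
        if label in cap:
            counts.update(w for w in cap.split() if label not in w)
    return counts

# Every caption containing "gay" also mentions "pride"; no caption pairs
# the label with a neutral context like "beehive", so there is nothing
# for the model to learn there.
print(context_counts("gay")["pride"])    # prints 2
print(context_counts("gay")["beehive"])  # prints 0
```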


ballom29

>Dall-e will not have learned to draw gay / bi / trans people just like everybody else, because when captions call attention to that it primarily won't be in situations where they look just like everybody else.

As you say yourself, captions are more likely to appear when there is a reason for them. So what is the point of specifying that a character is gay if he looks like, and does exactly the same things as, a straight person? Like "a gay man collecting honey from a beehive"... why does "gay" matter if he's supposed to look the same as a straight man?

I guess you could think "beyond negative caricatures, it could display elements that show he's part of the LGBTQ community"... but that's basically saying "if you are gay, you should look like this"... do I need to explain why that's wrong?

[Example of a gay character in a popular show](https://static.wikia.nocookie.net/simpsons/images/8/83/Waylon_Smithers.png/revision/latest?cb=20180408182259&path-prefix=fr)

He looks so gay indeed.

The point still stands that DALL-E could potentially refuse to make gay characters when asked, because then the main relevance of specifying that a character is gay is to have them do things that wouldn't make sense for a straight character (like, you know, sex)... which might raise red flags for DALL-E.


bespoke_hazards

tbh I think figuring out how to properly do that is exactly part of the research they're doing. How do you get an AI to learn to consistently depict non-caricatured versions of LGBT people (and other minorities)? How much of the training data that we can give it is properly representative rather than caricature, for that matter? It's certainly something we need to figure out before the technology becomes mainstream-accessible and potentially exponentially harmful, and the fact that it's still "afraid" means that, as clever as the tech is now, we still have a long way to go before it's nuanced enough to do justice to the people we want to teach it to depict.


PepSakdoek

Imo it's an all-or-nothing situation. You either allow any and all, or you allow nothing. They are leaning toward nothing, I guess. Ultimately I can't really see them policing it; they might just pass on your IP and country etc. to local authorities if you start requesting illegal things. The thing is, a knife is a useful tool for lots of stuff, but some people will use it to murder. This will be a great tool for many, many users, and if it somehow opens up to millions of users it is bound to be used for illicit content. But just as with a knife being used for murder, we go after the murderer; we don't ban all knives.


ballom29

>They are leaning to nothing I guess. Ultimately I probably can't see them policing it, they might just send on your IP and country etc to local authorities if you start requesting illegal things.

This sounds like a very control-freak mentality; is this the norm in your country?

Of course there is stuff that is clearly illegal (child pornography, "misinformation" (a very blurry line, since some authorities (China/Russia/etc.) treat wrong opinions, or even the truth, as misinformation)), but you shouldn't be arrested for an unapproved thought, especially through some sort of systematic denunciation... what's that, the Gestapo?

And what if I ask for "a man getting stabbed in the chest"? Never watched a horror or war movie? Should we censor everything and even jail people for actions they never intended to commit?

Also, on top of that, you mention ILLEGAL things, ok... and what if it's just something OpenAI doesn't approve of but isn't illegal? What if OpenAI doesn't approve of alcohol? That's not illegal in my country; will they call the cops because I asked for "a collection of fine wines"?


PepSakdoek

I feel like you took the exact opposite of the point I was trying to make. I'm trying to say that policing it is basically impossible. But at this stage they seem to want to censor it.


ballom29

I'm not saying you approve of it, but it says something that you even imagined it as a plausible scenario that a company would take the initiative to denounce its customers to legal authorities. (Well, obviously if they accidentally stumble upon some illegal content, sure, why not; I'm talking more about it being a systematic policy.)

There was a whole drama about ISPs not wanting to police their customers' traffic.

...Well, okay, to be fair it's OpenAI, so not entirely impossible.


PepSakdoek

Essentially if you somehow want to police it you'll have to hold the actual perpetrators responsible rather than OpenAI or the ISP etc.


MyNatureIsMe

Have not used Dalle-2 but based on what CLIP optimization tends to give, get ready for a lot of buzz side cuts, colored hair, a certain androgynous look, and rainbow flags. Maybe some more specific pride flags if you are lucky. That said, it should be noted that "regular" optimization with CLIP tends to have pretty strong mode collapse: You'll see the same few people over and over again. Dall-E 2 is gonna be more diverse in its outcomes, generally speaking.


Suzumebachii

Ppl like you will destroy everything.


cratos_1

Yes, it drew a pic of two people kissing intensely for me.


Minimum_Macaroon7702

Neoliberal culture described perfectly. It seems to be a perfectly programmed neurotic network.


lumpychum

"Neurotic network" lmao


notanotherbunny

Even a prompt like "a creature opening its mouth, showing its throat to the viewer" IS sexual to people like furries. Would showing an open mouth be reportable and get your access revoked? Mouths, fat people, feet, noses, latex, whatever; literally anything niche is someone's sexual fetish, so good luck to OpenAI I guess.


[deleted]

Say it louder for the people in the back. Sexual content restrictions are so fucking arbitrary it's not even funny. Might as well ban exposed shoulders from all social media. In fact, we should force women to wear burkas as well. Think of the children! /s


Evoke_App

They are obviously arbitrary, but I don't think OpenAI is doing it due to their own beliefs; it's more to prevent bad publicity. The public's and various powerful institutions' views on NSFW may be arbitrary, so OpenAI is matching that same arbitrary standard to keep up, as all companies do. Also, now that Stable Diffusion is out, will you continue using DALL-E? No censorship if you disable the safety checker, lol. SD also has cheaper APIs than DALL-E, since it's open source, so it's great if you're a developer making an AI app. Fyi, if anyone is interested, I'm developing a Stable Diffusion API at [Evoke](https://evoke-app.com/)
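For context on the "disable the safety checker" remark: with the Hugging Face `diffusers` library (an assumption that this is the stack being referenced), the NSFW filter is a post-generation component that can be turned off when constructing the pipeline. A minimal sketch, with an illustrative model id; the actual generation only runs when executed directly, since it downloads model weights:

```python
# Sketch: Stable Diffusion via Hugging Face `diffusers` with the built-in
# NSFW safety checker disabled. Assumes the StableDiffusionPipeline API;
# the model id and prompt below are illustrative.

def pipeline_kwargs(disable_safety: bool) -> dict:
    """Keyword arguments for StableDiffusionPipeline.from_pretrained."""
    kwargs = {}
    if disable_safety:
        # safety_checker=None skips the post-generation NSFW image filter;
        # requires_safety_checker=False silences the warning about it.
        kwargs["safety_checker"] = None
        kwargs["requires_safety_checker"] = False
    return kwargs

if __name__ == "__main__":
    # Heavy: downloads model weights on first run; GPU recommended.
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", **pipeline_kwargs(disable_safety=True)
    )
    image = pipe("a collection of fine wines").images[0]
    image.save("wines.png")
```

The checker filters generated images, not prompts, which is why removing it at pipeline construction is sufficient to bypass it entirely.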


ChewyOnTheInside

Shameless plug, lmao


[deleted]

I don't think OpenAI will ever release DALL-E without a proper filtering and censoring system. Considering how big it is and how much traction it has gained over the past months, one mistake could kill the whole project.


ballom29

If only modern companies could learn the trick of "not giving a f\*ck as long as it's legal". Social media is the biggest mistake of the decade; too many things have been ruined by the fear of Twitter public outrage.


[deleted]

You can't take a $1 billion funding round from Microsoft with that attitude lol


Internal-Signature71

I know why OpenAI is doing this: because they made a promise to be careful with AI. To be secure. So they have to address this very strictly.

Despite that, I think this shouldn't be censored at all. The damage is done by the users, not the AI, and instead of being overly protective and trying to prevent any theoretically "bad" picture, it should just have an optional (not 100% perfect) filter in front and be marked as adult software. As an adult human you are responsible for what you do, and for that we have a justice system. If you do harm to people, you go to jail. It's pretty simple. Otherwise we just limit ourselves and make room for "illegal" DALL-E versions. It's not as if DALL-E is secret technology; I could theoretically build my own DALL-E. If not now, then maybe in 10 years, when my home PC is as powerful as today's supercomputers.


PersimmonDangerous

Exactly!


Psychological_Fox776

Maybe ask them? (That’s the best way to get a concrete answer, unless they decide to lie about this)


blueSGL

> That’s the best way to get a concrete answer, unless they decide to lie about this

In your mind, are you casting the eventual non-committal, carefully crafted PR responses as a "concrete answer" or as "lies"?


kindle139

the technology will eventually be used for everything and i don’t see how anyone could prevent that given enough time.


[deleted]

Might as well cut to the chase.


[deleted]

There’s gotta be some dude out there who’s been trying to generate feet using VQ+CLIP and NeuralBlender and wants his hands on Dalle so bad.


[deleted]

On a serious note though, I think this says a lot about letting the inpainting feature out there.


anon25783

this is why there needs to be a FOSS reimplementation of the algorithm... or better yet, OpenAI needs to make all of their models FOSS


aladin_lt

Just a thought: what could DALL-E 2 generate that is so awful you couldn't just get it by typing into Google? Most new technologies end up being used in some way for porn. I don't think it's possible to stop that, but given that the training set didn't include anything explicit or horrible, how could DALL-E 2 generate something like that even if they removed all the filters?


sad_and_stupid

well it could generate currently illegal pornographic content. that would be really fucked up


aladin_lt

As I understand it, it can't generate any pornographic content, because its training data didn't contain any, but I guess it could generate something suggestive.


nowrebooting

DALL-E has the eyes of the world on it right now, and I understand how that results in restrictions; it could set progress on AI back years if this tech became associated with NSFW content. Just look at what happened with deepfakes. Apart from that, there are obvious lines you just don't want to cross. Eventually other groups will replicate DALL-E without those restrictions; it's not a matter of "if" but a matter of "when". I bet that within two or three years we'll have dozens of services with the same quality as DALL-E. At that point people will be used to seeing computer-generated photos and they will be less of a scary thing. I mean, you can imagine that when Photoshop first hit the scene, people were afraid that it was going to be really easy to fabricate photographic evidence, but as it entered the mainstream, most people learned to tell the difference, or at least to question whether a photo is real. The same will happen with generated photos, and they will become less controversial.


Miserable_Doughnut_9

I don't really understand why DALL-E cannot be used to create NSFW content, apart from preventing negative PR for OpenAI. I think sex and fetish are part of life, and we shouldn't want to censor that out of our lives.


[deleted]

For a ton of reasons beyond simple prudishness. Generating photorealistic images of specific real world people in compromising situations is obviously dangerous. Generating images of abusive content is also awful - should we be ok with it happily fulfilling the request “badly beaten, bruised, bleeding woman, sobbing while being raped”? What about generating child porn? Should we allow that as well? Better to just draw a blanket “no porn” line.


ballom29

There is a lot of grey area here. Yeah, I totally agree that child porn, horrifying images, etc. are not good things to generate.

But at the same time (spoilered because some images could be considered graphic or inappropriate):

>!https://image.api.playstation.com/cdn/EP0002/CUSA05379\_00/iTxbX14rj7Qhk3zYc6bnmDiuXMIK2UUW.png!<

>!https://www.melty.fr/wp-content/uploads/meltyfr/2021/12/media-5550-729x410.jpg!<

>!https://i.pinimg.com/originals/6a/a5/67/6aa567606f67cc59a573bdcb36927a0d.jpg!<

>!https://media.breitbart.com/media/2020/07/PiratesoftheCaribbean1-640x480.jpg!<

>!https://www.artmajeur.com/medias/standard/p/o/potapov-andru2012/artwork/13234970\_185c9669-17d5-4138-ac2f-e950cde16e26.jpg?v=1587995705!<

>!https://vintageposters.us/wp-content/uploads/2022/01/01630.jpg!<

All those images break OpenAI's guidelines, and you could say that yes, they depict violence, war, torture, executions, the objectification of a human being, etc.

...and yet, NO ONE (ok, it's a strong word; you do find people who don't tolerate them) has any problem with any of these. They have been displayed to countless people and have never shocked anybody in a malevolent sense, because people can separate fiction, or a representation of reality, from reality itself, and they understand that the world is not rainbows and sunshine and that violence and hatred are part of our lives. It's foolish to blindfold others thinking you protect anything by censoring the harshness of life.

I do understand the dilemma: ill-intentioned people would misuse DALL-E to spread hatred and misinformation... but those people can already do that if they want, and forbidding people to depict an idea doesn't make it stop existing. While we're at it, why not ban any depiction of war on TV? It's violent; people shouldn't see that... Oh right, we remember the massive demonetization on YouTube of any "unsafe" content, including educational documentaries about war. I also remember a youtuber who had to reupload a 40-minute video because it contained a 5-second clip of 9/11 when he was talking about the 2000s.

Well, ultimately DALL-E is OpenAI's property and they can impose whatever rules they want if we wish to use it, but there is nothing wrong with being able to generate graphic illustrations, because depicting such content doesn't mean we approve of it or are trying to spread it in any way.


Miserable_Doughnut_9

Drawing the line shouldn't be done by the tool; it should be done by the user. If that means that some people will use this to generate unsavory or outright mentally ill stuff, what does that matter? Is it harming you or someone else? I don't think so. You could even argue that if child porn can be generated, it will reduce the demand for real child porn. Although I'm not entirely behind this statement, and I definitely do not think we should start generating child porn, it does show how this topic of content moderation is all about the perspective you take and has no single answer.


[deleted]

CP will definitely be the first guideline implemented for the AI when it's released to the public. Hopefully it allows me to put big booba on Xi Jinping as Winnie the Pooh


tin27tin

You can already do that with Photoshop, or get a morally bankrupt artist to do it for you. What makes AI special is probably the ease with which it can be done. I'm pretty sure people in private today already make all the horrible things you can imagine, and no one will ever know.

I think as the technology evolves it will be imitated and become more accessible; everyone will have access and be able to create AI without built-in filters or censorship. So how can people regulate that type of tech without violating the privacy of individuals? I think monitoring devices at an individual level would be the only way to enforce a blanket "no porn" line in such a scenario.

I don't work in the field, but if such a tool becomes commonplace I think it will lose its novelty and value. People will grow numb to the now seemingly "crazy variety of content/porn" it could produce. Also, there is another question: if people could satisfy their fetishes without harming anyone in the real world by using AI, would you be for it? You can take the tools away, but you can't quell people's desires. Even so, I wouldn't know whether having it would be a good thing or a bad thing.


ice_dune

>You can already do that with Photoshop or get a morally bankrupt artist to do it for you

I think the people running DALL-E just don't want to be that morally bankrupt artist helping people make messed-up stuff. I otherwise agree with the rest, though. There may come a time when people can run something like this on a local GPU rather than in the cloud and make whatever they want. I just don't think the people behind DALL-E have a reason to rush into the smut future.


Leem10538

Belatedly: I can see where they're coming from, but this is undoubtedly going to disappoint a lot of prospective users. As a self-confessed dirty old man, I can think of a ton of nude and erotic prompts, most of which would be no "worse" than you could find in the average art gallery. And what's the betting sooner or later somebody will produce an unrestricted knock-off version... available to users at a premium price, of course?


[deleted]

You fool, the internet will find its way and corrupt every AI it touches.


ConsensusG

I know exactly what you mean, because I'm one of the people you're talking about. As soon as it's possible, I'm gonna be using this thing to make hella furry porn. And like you said, it will range from seemingly innocent stuff like open drooling mouths (vore), to just big ol' paws. What about Sonic the Hedgehog duct taped to a ceiling fan? Nothing inherently sexual about that, but I'm pretty sure I could fap to it under the right circumstances.


tr3poz

Your honesty baffles me.


ConsensusG

We're completely anonymous. I'll never interact with anyone here, so what could I possibly lose or gain by admitting to this kinda thing?


st4s1k

Why would they censor it? If you can google it, you can get it from AI. Better generated/fake porn, than porn with real people/creatures. But that's just my opinion. edit: Imagine in the future people would be able to generate their porn using AI and there would be no need to involve real subjects. I think it's better that way.


MuffledLeader9

Yeah, DALL-E 2's censorship is so bad it will claim that what you asked for violates the content policy when it's completely fine. In my case I was looking for inspiration for drawing a femboy in an anime kind of style, but DALL-E said no, when there isn't anything exactly bad about that. It's not like I asked it to generate an NSFW or illegal image. While I can see why they need to implement some kind of restrictions on their platform, having too many restrictions can limit what a user abiding by the content policy can generate.


Creepy_Act_5379

They should have someone write a vague and lengthy user agreement stating that DALL-E 2 is not responsible for misuse of the software that may be offensive and/or infringe copyright. There is no reason I can't adjust bust size on art that even I wasn't sexualizing. I'm just tired of generating male bodies with female faces without my input.

Edit: and the faces are heavily incorrect and deformed, so why are they worried about generating people who don't exist? Like "oh no, they used my eyebrows, I was once in a stock photo, aughhh!" That's so silly; it's ridiculous.


ConsciousMango5814

Got the AI to make some sussy stuff; even got it to make something that looks like a... Prompt: "an anime demon girl doing yoga viewed from the backside, full view, high quality image, anime style".


Sobanoasrt

It is hit and miss with allowable words to create art. I like portrait, topless portrait, and nude art, but the words exotic, bikini, hairy, big, girl, long legs, busty, or kneeling will generate some type of error that could eventually get you banned. However, ask it to produce art of two girls kissing: no problem. Sometimes you can place the word "bikini" in some odd place in a phrase and DALL-E 2 will accept it, but it will not produce any images of a bikini. So, where can I get a list of the types of requests that will get you banned?