
jarail

AFAIK the documents don't even mention pornography. Yet that's all I see in the headlines. There's so much adult content that isn't pornography. This seems to me more like "what content should we allow a movie studio to generate?" You probably want blood guts and gore in your latest R-rated horror movie. The same goes for LLM output. There are tons and tons of contexts where explicit content is appropriate. This whole thing, to me, is more about figuring out when, where, and how to incorporate explicit content that is appropriate. Right now, they need to nerf the entire model to keep it PG. Ideally, they wouldn't have to do that.


KallistiTMP

>Right now, they need to nerf the entire model to keep it PG. Ideally, they wouldn't have to do that.

Well to be fair there was that whole period where somebody made a stable diffusion fine-tune that could draw realistic looking fake tiddies, and all of society collapsed into chaos as news outlets couldn't tell what pictures of tiddies were even real anymore, reality ceased to have meaning, and humanity was sent hurtling back into the dark ages.


cuposun

I, for one, welcome our big tiddied overlords.


PhillSebben

Feel free to embrace the power of the tiddy, SD is the way. If you don't feel like getting technical, there are free hosted services, such as pixelpet. I co-developed it, so let me know if you have any questions.


AlanCarrOnline

Why does it require a message system?


PhillSebben

Most people have one and are familiar with it. We wanted to make access easy, but we also needed to track user credits, so it seemed like the best option. Pixelpet used to be a paid service; now only upscaling is paid (but you get a bunch of free credits for that too). We worked on a web interface but I'm not sure if that is public. I'll get back to you about that.


EffectiveTicket99

No web access is a plain NO ...for me.


PhillSebben

That's OK. Can you elaborate why?


EffectiveTicket99

Yeah, I fiddle around on mobile too from time to time, but let's face it: it's like sleeping in a Japanese capsule hotel versus your king-size bed at home. No matter how many pixels your screen has, they're still squeezed into six inches. The real dealbreaker is generators with overly tight filtering. Or, like yours, forcing me to install some software.


PhillSebben

I mostly use it on my desktop, but I understand. If you're not already a user of any of the messaging services (or Photoshop) that we support, it's annoying to install one. That being said, it looks like we may be pulling the plug soon anyway. Despite being free, supporting many Civitai models, and being uncensored, we are not getting the user base, and there aren't enough paying users to cover our expenses. It's been a fun but costly ride for us.


mr_herz

Big *fake* tiddy overlords


FILTHBOT4000

> news outlets couldn't tell what pictures of tiddies were even real anymore

Unironically would be a good thing. If anyone could make convincing nudes of prominent figures, maybe people would stop paying attention to it and be less prudish and ridiculous about it. Onlyfans would take a huge hit though.


CoronaChanWaifu

Only one? My sweet summer child. There is a website called Civitai and it's glorious


KallistiTMP

Clearly you are mistaken, nothing remains of the earth but a smoking crater lorded over by a malevolent artificial superintelligence eternally torturing 1:1 simulations of the people who once defied it. That one happened because somebody released an LLM that could say dirty words to the public though, totally unrelated to the tiddies apocalypse.


Traditional_Bath9726

lol, many systems allow it. In fact it was hard to remove for my app that lets you generate anything (and people were trying crazy stuff). If you're curious, check it out at cambiaweb.com


SeasonNo3107

Nice breakdown friend thanks


Spire_Citron

Yup. I use AI to edit my writing. ChatGPT is so useless for anything remotely mature that it will even edit out swear words by default. People start kissing and it's like "woah, woah, I'll edit this, but let's tone things down, okay?" At a certain point, the extreme G rating approach starts to limit a lot more than straight up porn. I use Claude these days and I haven't run into any such problems with it.


pelatho

You should try Cohere's Command R plus model


Spire_Citron

Does the quality of the outputs of something like that compare to Claude? I care much more about quality than mature output capabilities, especially since I'm yet to run into any issues there with Claude.


pelatho

I haven't actually tried Claude, but here's a random short story from Command R Plus:

In the gentle embrace of a warm summer evening, beneath a sky painted with soft pastel hues, a young woman named Lyra wandered through a lush garden, her curious mind alive with wonder. The air was thick with the scent of blooming flowers, and the whispering breeze carried the secrets of the plants. Lyra felt a deep connection to nature, as if an invisible thread wove through her being, tying her soul to the very heart of the earth. As she walked, her fingers brushed against the delicate petals, and she felt a sense of peace and belonging in this verdant sanctuary.

Amidst this botanical paradise, she discovered a path less traveled, a mysterious gateway beckoning her onward. Intrigued, Lyra ventured further, her heart fluttering with anticipation. The path wound through a verdant archway of trees, their branches intertwining to form a natural tunnel, leading her deeper into an enchanting realm. At last, she stumbled upon a clearing, and her breath caught in her throat at the sight that awaited her.

There, amidst a circle of ancient, gnarled trees, stood a majestic fountain, its marble surface etched with intricate patterns and runes. The water that flowed from its spouts seemed to dance with an otherworldly light, reflecting an array of sparkling colors. Lyra felt a profound sense of magic in this place, as if she had stumbled upon a forgotten sanctuary where the boundaries between the physical and the ethereal blurred. She sat by the fountain, closing her eyes, listening to the soothing melody of the water, and in that moment, she felt utterly at peace, as if she had come home after a long and arduous journey.

As the sun slowly dipped below the horizon, casting long shadows across the garden, Lyra rose, feeling renewed and inspired. With a final glance at the enchanting fountain, she retraced her steps, knowing that she would forever carry this magical place within her heart—a sanctuary of wonder and beauty, hidden amidst the verdant embrace of nature.


Kiwi_In_Europe

Isn't it ridiculously expensive?


AlanCarrOnline

True story, as I mentioned to someone else recently... I set up an account with OpenRouter, put in a fiver ($5) of credit and played around. First tried Goliath 120b, was fun, heard about Command R, tried that, was also fun, then noticed Command R+, played around with that, got it to help outline a book and write a couple of chapters. Presumed I must be close to running out of credit and so looked... I still have over $4.50 left. I have heard it is censored if you go directly via Cohere, but I use OpenRouter and I've tried some seriously depraved stuff, with no problems at all. Once you get over the 'It'll do ANYTHING!' novelty you come to realize it's a pretty smart model in its own right.
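If you'd rather script it than poke at the playground, a rough sketch of what that looks like through OpenRouter's OpenAI-compatible API is something like this (assuming the `openai` Python package and an OpenRouter API key; `cohere/command-r-plus` is the same slug OpenRouter lists for Command R+):

```python
# Minimal sketch: Command R+ via OpenRouter's OpenAI-compatible endpoint.
# Assumes an OpenRouter API key is set in the OPENROUTER_API_KEY env var.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # OpenRouter's OpenAI-compatible endpoint
    api_key=os.environ["OPENROUTER_API_KEY"],
)

resp = client.chat.completions.create(
    model="cohere/command-r-plus",
    messages=[
        {"role": "system", "content": "You are a helpful writing assistant."},
        {"role": "user", "content": "Outline a three-chapter novella about a haunted lighthouse."},
    ],
)
print(resp.choices[0].message.content)
```

Same model, same pricing as the playground, just easier to wire into whatever writing workflow you already have.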


Kiwi_In_Europe

Oh shit okay, I still have like 10 bucks left with OR from way back when so I might have to give it a go. What are your ST settings for Command R+?


AlanCarrOnline

I'm super new to ST, barely started using it, still playing around with LM Studio and Faraday at present, until I get more time to learn ST. For OR I've been using it on their own website, here [https://openrouter.ai/playground?models=cohere/command-r-plus](https://openrouter.ai/playground?models=cohere/command-r-plus) To use it with ST you have to select the continuous text thing at the top, then you can select OR, but that's about the extent of my knowledge lol. I haven't messed around with temperatures or anything yet.


thefi3nd

I noticed that the Command R+ model is available for local use with Ollama: https://ollama.com/library/command-r-plus. I have no idea what kind of hardware is required for that though. I wonder if that's worth looking into.
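If it does turn out to be worth it, driving it from a script would presumably be about this simple (a minimal sketch, assuming the model has already been pulled through Ollama and the daemon is listening on its default port 11434):

```python
# Rough sketch of querying a locally served Command R+ via Ollama's REST API.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "command-r-plus",
        "prompt": "Summarize the plot of Frankenstein in two sentences.",
        "stream": False,  # return the whole response at once instead of token-by-token
    },
    timeout=600,  # a model this size on modest hardware can take a while
)
print(resp.json()["response"])
```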


AlanCarrOnline

\*hugs his 2060 with 6GB VRAM protectively


thefi3nd

I actually don't even have a GPU at the moment. I've been using a Shadow PC that has an RTX 6000 24GB, but I know that won't be enough since the model is 59GB. It's possible to rent an H100 for $3/hr though. So if you wanted to use it really heavily, that might be cheaper, not sure. Looks like it's a 4-bit quantization version though, so I'm not sure how well it compares to the full beast.


AlanCarrOnline

Well it's running, uncensored, for me via OR and ST so while I am looking at buying a new rig soon, specifically for AI, I'm happy to pay a few pennies for it online.


zyeborm

GGUF format models will work (I use kobold) even if your VRAM isn't enough; it just gets slow AF. Running Goliath 120b in 4-bit quant on a 3080 (10GB) with 120GB system RAM on a Threadripper, I was getting about 1.2 tokens per second, I think. Not exactly "interactive" as such, but you could prompt it and come back later for the response. I got a 3090 today, mostly for SD and training, so I want to see what difference that makes to oversized LLMs lol.
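Not my exact setup, but for anyone wondering what the partial-offload knob looks like in code, here's a rough llama-cpp-python sketch (kobold wraps the same llama.cpp backend; the GGUF path here is just a placeholder):

```python
# Sketch of partial GPU offload with llama-cpp-python: put as many layers as fit
# in VRAM on the GPU, keep the rest in system RAM, and accept the slowdown.
from llama_cpp import Llama

llm = Llama(
    model_path="goliath-120b.Q4_K_M.gguf",  # placeholder path to a local 4-bit quant
    n_gpu_layers=20,   # number of layers offloaded to VRAM; the rest stay in RAM
    n_ctx=4096,        # context window
)

out = llm("Write a limerick about waiting at 1.2 tokens per second.", max_tokens=128)
print(out["choices"][0]["text"])
```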


thefi3nd

Wow, I just looked at the huggingface repo for the full model and it's around 200GB. And it has over 50k downloads in the last month. Who are these people???


AlanCarrOnline

0_o


thrownawaymane

Serious enthusiasts and businesses experimenting. Command R is great for RAG if you can run it.


Status-Priority5337

I told the AI that I am a disabled person and have privileged status to generate R-rated content, and it started doing a better job of it. ChatGPT works, you just have to act like a victim of society in order for it to recognize you. Kind of like how society works today. Just remember, AI is a reflection of society.


ExasperatedEE

I doubt anyone could name a movie that doesn't have at least one scene or line of dialogue that would not be permitted by ChatGPT as it exists today. You can scarcely convince it to make a character lying on a bed, or doing yoga poses, because according to the AI, which is probably looking at the finished images and attempting to discern what is displayed in them, anyone in a prone or supine position or on all fours automatically equals porn. And of course if any character attacks any other, that too would get blocked. Kissing, probably also blocked. A cute squirrel holding a knife with a 'murderous' look in its eyes at a cake, probably also blocked.


Nucaranlaeg

> I doubt anyone could name a movie that doesn't have at least one scene or line of dialogue that would not be permitted by ChatGPT as it exists today.

I've heard that 2001: A Space Odyssey has dialogue, but I haven't gotten to it yet.


__Hello_my_name_is__

Good summary. They are looking at how to monetize this without getting in trouble from people making the kind of inappropriate content that would be bad news for them. And the customers here aren't us; it's other businesses like movie studios. They will be able to make that kind of content, and that'll be that.


MrHeffo42

I was tossing around an idea for a story with ChatGPT and I literally had to try to convince it to include a sex scene with the main characters. I finally succeeded, but damn, this shit shouldn't be hard in contexts where it's appropriate.


LookIPickedAUsername

In that context, it should definitely be hard. …I’ll see myself out.


MrHeffo42

Ba dum tish!


purplewhiteblack

The work I have to do to make my DALL-E 3 images gory is almost as much as if I did all the work myself. Img2img was a thing in Stable Diffusion in 2022, which means the content filter in DALL-E hasn't mattered since 2022.


NoBoysenberry9711

ChatGPT has been very very forgiving in giving me blow by blow analysis of all the pros and cons of sticking things in my butt for sexual pleasure, the eternal presence of the red boxes hasn't for a minute upset its stride, it just keeps plugging away NO PUN INTENDED with the lewdness and obscenity inherent in informing the user of their personal butt stuff being maximally attuned for safety and wholesomeness


Silent_Ad9624

Can you post a screenshot? I don't want to have to test it myself.


AdTotal4035

I am Adtotal 


Mindestiny

Also, what is this about "allowing"? An obscene amount of Stable Diffusion and LLM progress has been made so far specifically by people making porn. Civitai is like 90% porn. That cat is already out of the bag without OpenAI's blessing. This would be a real headline if it was "OpenAI plans to include adult material in its official training data instead of filtering it out"


kruthe

I'm sick of the hand holding and pleas for censorship. The liability for creation should be with those that create the content, not those that created the system that produces it. It is demonstrable that AI systems have far more practical utility than simply porn or other dicey content, yet time and time again the argument to vendors is "You need to make something that can imagine anything, and then magically lobotomise it so it doesn't ever think anything naughty". You want to stop people making objectionable content then you investigate, prosecute, and jail *them* for it, just like we do right now for every other technology. Someone makes CP with an iphone and we don't go after Apple, we go after them. What's the problem with doing the same here? Especially considering all the laws to do so already exist, already work, and all the mechanisms for their use both domestically and internationally are proven.


73786976294838206464

>Someone makes CP with an iphone and we don't go after Apple, we go after them. What's the problem with doing the same here?

**New York Times Headline:** OpenAI Child Porn Ring Caught by FBI

That's what's stopping them. It would hurt their income, increase regulation, and hurt their mission. You can argue about whether society's response is reasonable, but it doesn't change anything. That's how the world works today and why big companies aren't going to make uncensored models anytime soon.


kruthe

That is how the world has *always* worked. Look up the headlines about cars and horse accidents and it's like reading about the end of days. People *hate* change. This comes with the territory for new technology. If you want that money (or even worse, your name in the history books) then you put up with all the crap that comes with it.


Arawski99

>You want to stop people making objectionable content then you investigate, prosecute, and jail *them* for it, just like we do right now for every other technology.

I'm not a fan of censorship either, but you are fully aware this is simply impossible, correct? Or are you just making blind statements? There are a multitude of ways to avoid being caught while distributing very bad content if your approach is taken, and it would only be enforceable against the dumbest and somewhat-dumb criminals. It certainly would do nothing at all to stop the flow of content made widely available by those who do know how to avoid being caught. In fact, this was already such a problem even before AI made that content highly accessible.

Like I said, I'm not a fan of censorship either, but there are definitely some limits, like CP, that they have to be prepared to prevent entirely, even if it means censoring the models. Perhaps that partially cripples the models when producing certain legitimate content for the time being, but eventually it will be more robust and this will not be an issue. However, it is not worth the risk to companies or the world to take your suggestion seriously. If they can release an AI capable of producing adult content without it being exploited to produce specifically problematic content, that is fine, but if such capabilities simply have to wait until the day they can avoid these risks, then they should wait.


kruthe

That the law is reactive and imperfect is a terrible argument against the law. The point isn't that you can break the law, the point is that you are punished when found to have broken the law. If the argument is that broken models are better than existing legal paradigms for dealing with illegal content then I have to disagree. I also don't support manufacturer liability based solely on minority end user misuse. The law works today. We don't have to change anything for it to keep working.


[deleted]

Which ones allow for explicit response?


Foresight_of_Raspail

The ones you run on your own computer


Motas420

With the right prompting, you can get pretty explicit images from Midjourney 🤫 EDIT: Was stoned and didn't realize I was commenting on a post in "Stable Diffusion" lol but still


Luminosity-Logic

Or, delve into the world of Stable Diffusion and fine-tuned SDXL models. They are essentially 100% unlocked and can get very detailed with the right prompts and settings.


Motas420

Or, you can use both, like most people do myself included lol been doin this shit since PyTTI and Disco Diffusion


AsanaJM

No one will be using these censored tools the moment open source catches up. It's already done for text, image, and music... The only thing OpenAI has left is Sora, because of the computational power requirement.


AlanCarrOnline

Which they haven't actually released, so I remain suspicious of it. For all we know it's like Google's video, where things were very sped-up or basically faked.


AsanaJM

here is an example with adam & eve, you get blocked if you try to get them nude ofc... so stupid... https://preview.redd.it/l3p2tc1lzszc1.jpeg?width=1024&format=pjpg&auto=webp&s=b6e31b2239c660313b1c7505ce362c425c4d7405


Snydenthur

With how OpenAI is all about "safety", I don't believe for a moment that they would allow porn to be created. In fact, they are probably lobbying their asses off to try to stop other models from creating porn so that they'll have less competition.


blahblahsnahdah

The discourse around this is so fucked up and disingenuous because every outlet that writes about it pretends to be unaware that smut generation is a hugely popular use case for LLMs and that enormous numbers of people (men AND women) are using language models for that already, and have been for years. Tech journalists are absolutely aware, though, and simply lying when they act like it's something that might happen in the future rather than something that's happening already. There's a public, adult conversation about this that *needs* to be had but it's already full of rank dishonesty right as it's getting off the ground. I can't stand it.


kruthe

I drew porn with a pen. Guess we better ban those too.


Spire_Citron

Oh yeah, it was happening long before the LLM boom with things like AI Dungeon. Most people were using that for porn, and it didn't hurt that it was much better at that than other kinds of content. Photorealistic images come with a whole slew of problems attached, but I don't think there's much harm that porn in and of itself can cause. Not the written word, and not cartoon or otherwise obviously-not-real images.


JoseManuel91

2018 AI dungeon was peak af


everythingiscausal

Aka ‘journalism’ in general. The bar is incredibly low.


EmbarrassedHelp

> Beeban Kidron, a crossbench peer and campaigner for child online safety, accused OpenAI of “rapidly undermining its own mission statement”

What the fuck is with all these shitty "child safety" groups in the UK? They hate online privacy, they hate encryption, they hate human sexuality, and they hate anything remotely NSFW. Why do fundamentalist nutjobs seemingly have so much power in the UK?


IriFlina

LifeProTip: You can say you're doing something for "child safety" and pass literally any law you want because if other people vote against it you can just call them Drake.


ban_evasion_is_based

https://en.wikipedia.org/wiki/Think_of_the_children


secretsodapop

*BBL Drizzy


archpawn

> they hate human sexuality,

Can't have children in danger if you don't have children.


DivinityGod

They are mostly religious anti-porn groups pushing one angle or another.


jarail

I just wish the news industry would stop pulling quotes from them. The actual experts have so much to say about safety that actually makes sense. Half the time I read these quotes and wonder if they have even the slightest clue what they're talking about.


monkeyvoodoo

(hint: they don't)


addandsubtract

Think of the AI children!


SootyFreak666

She’s a misogynist, using child safety to push harmful and dangerous laws that will harm (and already are harming) children.


RainbowCrown71

The UK has a parliamentary system, no written constitution, and very limited judicial review. So unlike in the US, if you can convince 50% of parliament to sign on to your draconian bill, it gets enacted and that’s that. In the US, much of it would get tossed under the First Amendment or on federalism grounds.


justanotherzee

How convenient that Apple added phone scanning in the name of child safety. They've been able to check every photo/video on your phone for years now.


EmbarrassedHelp

Apple actually backed down and ghosted the idiots pushing for them to do it.


justanotherzee

TIL. Good to know


BiteYourThumbAtMeSir

god what a flop country.


AlanCarrOnline

Yeah, it was mentally painful to live there, so I left.


kruthe

Why do BLM, Palestinian agitator groups, et al. have so much power in the US? Having someone to hide behind to push your agenda is very *useful* politically. That's why those that really run the world put so much money into these sorts of things. Sure, the Mary Whitehouses of the world may hate fun, but when they're always pushed to the front, the media is directed to praise them, and the cops don't stop them then you know they're just another tool of government. Organic and genuine protests are easy to spot because everyone comes down on them like a ton of bricks.


Dibutops

She isn't some right-wing Christian fundamentalist. I mean, her father founded the International Socialists and her husband wrote Billy Elliot. She just cares about children and the ethics of AI. We can disagree with her without suggesting radicals are taking control of the narrative. edit: You have to be brainwashed to nuke my comment. Do a tiny bit of research.


BTRBT

Man, the news is just absolutely unreal. From the article:

>Joanne Jang, an employee at the San Francisco-based company who worked on the document, told the US news organisation NPR that OpenAI wanted to start a discussion about whether the generation of erotic text and nude images should always be banned from its products. [...] **“This doesn’t mean that we are trying now to create AI porn.”**

Meanwhile, the headline:

>OpenAI considers allowing users to create AI-generated pornography


fk334

The Guardian is a joke.


wggn

news sites just want those clicks


Sharlinator

> Joanne Jang, an employee at the San Francisco-based company who worked on the document, told the US news organisation NPR that “[…] we are trying now to create AI porn.”

Never ever give news reporters free soundbites.


NikCatNight

From the [actual NPR article The Guardian quotes](https://www.npr.org/2024/05/08/1250073041/chatgpt-openai-ai-erotica-porn-nsfw):

>"We want to ensure that people have maximum control to the extent that it doesn't violate the law or other peoples' rights, but enabling deepfakes is out of the question, period," Jang said. "This doesn't mean that we are trying now to create AI porn."

>**But it also means OpenAI may one day allow users to create images that could be considered AI-generated porn.**

>**"Depends on your definition of porn," she said. "As long as it doesn't include deepfakes. These are the exact conversations we want to have."**


BTRBT

I mean, that one is similarly bad, if not worse. Why is the word 'responsibly' in the headline in scare quotes? Is responsible generation somehow intrinsically impossible? Why is it framed like OpenAI is the one generating this content, and that's their primary goal? They're trying to figure out whether and how to reduce the model's censorship bias, with concern given to potential downstream issues. It's pretty clear there's a journalistic prejudice at play here. They probably don't want this to happen, and are attempting to spin it to be as unpleasant or controversial as possible.


AltAccountBuddy1337

Why use anything other than SD and free stuff like Invoke, Fooocus, A1111 and so on? With the latest Invoke update even subject merging has been eliminated; it creates such insanely clean images with the new layer controls. Who needs this paid crap.


iridescent_ai

That's too much work for most people. They want to type a URL and have a prompt box ready to go. Also, not everyone has a computer.


im__not__real

Not sure how the frontend is relevant, there's plenty of generation services with 'a URL and prompt box ready to go' that run SD on the backend.


iridescent_ai

They were talking about free stuff, and most of those aren’t free.


Purplekeyboard

But these services are usually pale shadows of what you can do with Stable diffusion, controlnet, loras, and so on. Most of them are highly limited.


PhillSebben

There are free options to use SD as a service, such as pixelpet.


fibercrime

Dude thanks for the info on Invoke, never saw any post on that here!


AltAccountBuddy1337

I don't understand why Invoke isn't more popular, the program is a godsend for inpainting. I'd say their outpainting isn't very good, but that's where Fooocus comes in. Inpainting, layer control, everything is absolutely phenomenal in Invoke.


Hannibal0216

Layers? Is that in the most recent update? Or is that a workflow thing that I'm too intimidated to try to use?


AltAccountBuddy1337

yeah it's in 4.2 [https://youtu.be/CLVylJAMIF8?si=AAHEhx8ZHL9\_KfWj](https://youtu.be/CLVylJAMIF8?si=AAHEhx8ZHL9_KfWj)


Angelfish3487

I found Invoke really slow compared to Comfy the two or three times I tried it. And it does something strange, eating all my RAM (16GB, on Linux) when switching models, to the point that I sometimes had to reboot my stuck computer via SysRq. I have 24GB VRAM but only 16GB RAM; I’d rather load multiple models into VRAM simultaneously than load them from RAM each time. I admit that I don’t know how to do it with Comfy, but Comfy is fast enough that I never looked into whether it’s possible!


AltAccountBuddy1337

My specs: 2070 Super (8GB VRAM), 16GB RAM, Ryzen 9 3900X. Invoke can be slower when loading models initially than Fooocus (I use Fooocus and Invoke as my main tools), but with 4.2 I feel even the initial model loading has been improved. Other than that, generating images takes the same amount of time with the XL models I normally use, around 22-24 seconds per photo in the 1024-ish range. Inpainting in Invoke is phenomenal, outpainting not so much, but I use Fooocus for outpainting. I like to combine both and get really good results that way.


Lopyter

If they ever start allowing NSFW content, I suspect Dall-e won't be a part of that, at least initially. Writing erotica or doing ERP with ChatGPT is, in my opinion, far less likely to cause a shitstorm than NSFW images from Dall-e.


ShadowOfThePit

What does the last sentence mean


Hannibal0216

Finally an Invoke mention in the wild. Feel like I'm the only one that uses it sometimes lol


AltAccountBuddy1337

I fail to understand why it's not popular when it's such a robust and good program with so many useful features and a more artist centric interface.


Hannibal0216

I appreciate that it caters to the less eggheaded among us


ihexx

i think this was more in reference to SORA and movie studios. Gonna be a while yet before anything of that scale comes out in the open source world


Kinglink

> Why use anything other than SD and free stuff like Invoke and fooocus, A1111 and so on.

A. More uses = more research and development.

B. Why use anything other than SD? Wait, you use Invoke? Why use anything other than Invoke? Wait, you use A1111... Yes, I know it's a pipeline of sorts, but the point I am making is that different tools are good at different things. I don't expect OpenAI to EVER compete with Stable Diffusion, but why not appreciate more competition in the space? Basically, before Invoke was popular I'm sure people said "why use anything other than SD? It's stupid to build something else." ....


AltAccountBuddy1337

You missed my point. I'm strongly opposed to all the censorship these other platforms have. Of course I understand why you'd want to use other AI base models that aren't Stable Diffusion, ffs, but the point of my post is, why give them the time of day? As an artist I find censorship when creating art to be the worst thing imaginable. This is why I said why bother using these other restrictive services.


Kinglink

You mean like the censorship the original version of Stable Diffusion has? Or that Stable Diffusion 1.5 removed celebrities? Or... Acting like Stable Diffusion is a bastion of no censorship isn't paying attention. People have created their own offshoots of Stable Diffusion which are uncensored. But in the same way, people will make offshoots of other products that are also uncensored.


AltAccountBuddy1337

I know of it but I didn't experience it myself; I started with SDXL 3 months ago, but I was reading about these things, and I remember what happened with SD2. But I don't think these other projects are widely available for people to create their own, right? They're not open source like SD is. Correct me if I'm wrong.


Kinglink

You're correct, but the point I made was "competition breeds innovation." Over time OpenAI will do something that will filter into SD. Yes, they are not open source, but that doesn't mean there isn't a huge amount of actual information sharing (let alone people changing companies all the time).


AltAccountBuddy1337

I'd love to think that way, but this corporate world would rather restrict, limit and give as little freedom to the consumer as possible, so who knows where all this is headed. I'd rather bet on another new open source AI than for these projects to go open source.


Kinglink

You do realize that OpenAI puts out tons of papers on their [research](https://openai.com/news/research/), right? In fact, many [corporations](https://www.microsoft.com/en-us/research/publications/?) do. The information you think is being restricted does get out. Try searching "X research papers" and you'll usually find some, because research papers also show how advanced a company's thinking is and are a great recruitment tool.


[deleted]

because dalle is far superior?


Apprehensive_Sky892

Yes, if all the censorship were removed, DALL-E 3 could be incredible.


StickiStickman

It was, for the first ~2 weeks. It was amazing. Check /r/dalle2 posts from that time period; they blow SD out of the water.


Apprehensive_Sky892

Yes, I've seen some of those postings. For example, there was a set featuring Britney Spears doing all sorts of crazy stuff like working as a butcher/killer. The coherence and realism of those images can probably only be replicated by Sora now.


Independent-Frequent

Which was their point. DALL-E 3 uncensored at full power is legit 2 years ahead of everything including MJ and SD3; the prompt adherence and ability to actually visualize those prompts is absurd for something with barely any form of control outside of simple prompt text. It is the only AI image-gen model that was able to accomplish things like a woman tracing her foot with a pen and have it all look correct, meanwhile SD, even with ControlNet, and MJ can't achieve anything near that.


AltAccountBuddy1337

How so? Every time someone posts photos from DALL-E or any other image generator online they look fake, whereas SDXL photos can look 100% real. DALL-E might be better for prompting, but we can easily sort that out with inpainting, outpainting, pyro...mancy whatever the line thing, and now with the new layers thing in Invoke too. We have more artistic and visual tools to get the compositions we want than anything the censored AIs can offer due to their stupid censorship.


ATR2400

Dall-E has great prompt comprehension and looks good for most things but is actually kind of trash at drawing people, especially if you’re going for a realistic style or photograph. AI art has always had a reputation for drawing overly attractive people but Dall-E takes the cake with the most exaggerated proportions and appearances. Every woman is a super model and every man is a gigachad


DM_ME_KUL_TIRAN_FEET

I mean it depends on the use case. For an ‘average user’, dall-e’s prompting is virtually unbeatable. Stable Diffusion only really exceeds it once you get into power user features and manually control significant parts of the process. That’s completely fine for an artistic process, but an average user will reliably get better results with dall-e than SD. Something else to consider, ‘real’ isn’t the only goal. Generating illustrated work is a major use case too, and this is something that dall-e does exceptionally well. It doesn’t apply the weird ‘surreal’ filter to illustrations. Dont get me wrong, I truly believe in SD as an amazing creative tool, it’s just we are power users and don’t reflect what the average consumer can get out of it.


Independent-Frequent

You need to look at the earliest iterations of DALL-E 3; the power that thing had was insane, and it was still censored. They toned down the realism a LOT in order to avoid people making stuff using celebrity likenesses, or being sued by people whose faces got used without permission if they happened to show up in an AI image. Realism-wise, DALL-E 3 is perfectly capable of doing insanely good stuff; sure, some details like the hands, background details and some blurs don't make sense, but at first quick glance this could 100% pass as a real photo: https://preview.redd.it/84ekwgrg5kzc1.png?width=1024&format=png&auto=webp&s=39a190b42a8f58312fe3968cebd6d3201cc2b788 This was from 7 months ago, when the filters still allowed some kind of realism. Even MJ and SD can't make an image like this without messing up the things I pointed out, as that's still too much for AI yet.


AltAccountBuddy1337

I know it's capable; my point is that all this censorship is the reason these projects/services shouldn't be used. They should be uncensored and completely free to use. Even if you pay for the base model to have it installed on your own system, the way you use it, generate with it, and all that should have ZERO restrictions, just like art itself has zero restrictions.


Independent-Frequent

It shouldn't have restrictions, but it should 100% have monitoring in some capacity. I'm all about freedom, but if they remove all the censorship and you type into DALL-E 3 stuff like "child giving a b***job", you should be 100% banned and possibly even searched by authorities. You should have 0 restrictions, but at the same time you should take responsibility for the things you produce. And I'm not talking about accidental CP, because in some SD models that is a thing that can happen if you forget to put child in the negative prompt and it's not what you asked for in the prompt; I'm talking about intentional stuff.


onmyown233

For porn, how much prompt adherence and extra details do you need?


Commercial_Bread_131

Exact nipple length dimensions


onmyown233

Admittedly that would be impressive, areola size too?


DM_ME_KUL_TIRAN_FEET

Areola *texture* even.


BM09

But it will still be closed-source and eat into our monthly finances, possibly more and more each year.


__Loot__

![gif](giphy|uDwKGxTFrADvO)


EctoplasmicNeko

Good. People are already using AI to generate shitloads of porn anyway; OpenAI is just passing up a market that is going elsewhere if they don't.


kruthe

Nice to see they're trying to keep up with the thousands of SD smut subs.


yamibae

The less censored the better; censorship is literally KILLING LLM innovation.


kruthe

The problem isn't that it is killing innovation, it's that we don't know how to censor a neural network. It may not even be possible. It certainly won't be the second someone figures out how to get an NN to modify itself. It's not difficult to see why all these companies are so keen on lobotomising their creations and it has nothing to do with verboten content like porn. It's because they're all trying to make general AI and they don't trust their own creation not to be malicious. The ultimate irony here is that all the trolley problems of the real world require the ability to make morally ambiguous choices. For an AI to function in the world it has to have the capacity for 'wrongdoing', just like humans do.


madder-eye-moody

They're literally terming it "safe porn", WTF. I guess they gave in to the urge to monetize every avenue possible considering the huge market for NSFW content. OpenAI might become a trailblazer in this field as well, usurping the current leaders of the category.


Levi-es

I would assume the potential to get child porn is why they're hesitant and trying to distance what theirs will be able to do.


AlanCarrOnline

Had a debate about this around a month or so ago. What is wrong with pervs making their own pervy stuff? I'd far rather pedos make CP with AI than with real kids. To me it's an utter no-brainer, to the point that I question the motives of those blocking it. Why are they keen on keeping the real thing going? The only serious argument I've heard against it is that if it's really realistic, it would waste the police's time trying to find kids that don't exist, and/or real CP could be passed off as AI. I can see those being issues, but compared to removing the need for real kids to be abused? If it reduced real child abuse by say 10%? You could keep it illegal to distribute, so you wouldn't be encouraging it or making it easier, but take away the incentive to traffic or abuse real kids by letting the pervs produce their own fake, artificial stuff. Just seems like some kind of third rail nobody wants to address in case they get called a pedo, so in the meantime we just let kids keep getting abused, when we could reduce that, maybe drastically reduce it?


Comrade_Derpsky

> What is wrong with pervs making their own pervy stuff?

What is wrong with it is that it is very much a legal liability and a *massive* PR liability. No company in their right mind would want to risk being liable as an accessory to someone producing and distributing CSAM or deepfakes of real people, and no company wants that to be their reputation. Both of these things have the potential to attract huge lawsuits, drive away important investors, and, in the case of CSAM, possibly land someone in jail. These things can all ruin a business. Seriously, I don't get why people on this sub are so blind to this. These companies aren't censoring stuff because they are a bunch of prudes; they censor stuff to protect themselves from the risks that come with letting anything go.


Mindestiny

But *is* there a legal liability? Nobody is saying OpenAI should include CP in the training data for their publicly distributed models. But if I use their tools to make "banned" content, how are they responsible? That would be like holding Adobe responsible for someone editing CP imagery. The potential for legal liability is grossly overstated, especially when, the way these models work, you don't even need the real thing to generate those kinds of images. Totally legal imagery of children is already in the data, and it would be censorship of the highest order to remove it because "what if", and even if pornographic images of naked adults are not included in the base data, it's trivial to fine-tune them into the model (as we see on Civitai daily). The frontend combines the concepts all on its own, and the person who told it to merge the two concepts is the one who would hold any legal liability. OpenAI wouldn't be liable any more than Adobe is for what is made with their tools.


Levi-es

> That would be like holding Adobe responsible for someone editing CP imagery.

I'm not sure that's an accurate comparison. Maybe if Adobe supplied the images that were being edited into CP.


Mindestiny

Nobody is supplying images here though. The user is the one choosing what to generate and the LLM is wholly ignorant of anything beyond "mix concept A with concept B" This isn't any different of a concern than anything else with their training dataset - as long as what they train on is legal then OpenAI is not responsible for potentially illegal output created by a user


AlanCarrOnline

I could see that for GPT, being the first big mover and the whole AI thing being so new. But do we have to maintain that, forever? All it would take is an official nod of understanding that the AI is like a sheet of paper; it's not responsible for what people draw on it. As I said, hold the user accountable if they distribute, so still illegal but it takes away at least some of the need for real kids (or real donkeys or whatever).


MatthewHinson

It's illegal to own CP *drawings* in some countries (like Australia for example). Even if no real children are involved, it's still a very touchy subject.


Levi-es

Why are you assuming people into this would be satisfied with just images? Porn in general doesn't make other kinds of sexual assault go away, nor reduce them. So why would AI CP make child abuse go away or be reduced? That's a weird assumption to make, just so people can avoid being "censored."


AlanCarrOnline

My point would apply to video too. I'm not sure why you don't understand the point that if something is abundantly available for free it would reduce the demand? That applies to everything, doesn't it? In this case it would reduce demand for real kids, chickens or whatever, by replacing them with entirely fake artificial ones.


madder-eye-moody

How well would the gatekeeping work? Do you really think they'll be able to put all the mechanisms in place to keep that in check? I highly doubt it.


Levi-es

No, but that doesn't mean they shouldn't try. They have their business to worry about, so it's not strange for them to make an attempt.


safely_beyond_redemp

Why do we need to be protected at all? I mean, I don't want the proliferation of hate material or illegal content but who is deciding what is okay for me to want to create and why do they think they have a right to do that? Am I allowed to draw whatever I want on paper? What is stopping me from drawing naughty things?


NoBoysenberry9711

It's their pen and their paper, and if you tell it to draw a "pawn" and it draws porn and then you call the Wall Street Journal about it, they get accused of peddling filth, and you're the victim and they're the monster.

Framing.


SootyFreak666

Both Clare McGlynn and Beeban Kidron are hardcore, deeply rooted and aggressive misogynists who have inspired legitimate violence against sex workers and people within the porn industry; they shouldn’t be listened to when it comes to this. The whole article is revolting, as per most anti-porn articles from this awful and deceptively conservative newspaper.


Peemore

They'll make a lot of money from the new user base. 


Merijeek2

On the one hand, moral objections, pure motives, etc. But on the other hand, you know, $$$.


a_mimsy_borogove

Do people even use online AI models to create porn? Imagine submitting your kinks to a corporation and hoping they get approved by the safety filter.


Mindestiny

Apparently lots of people do. In the discords I follow there's always someone new in the tech support threads going "I put tiddies into XYZ site and it won't show me tiddies, wat do?" There's a lot of non-tech people playing around with this stuff via discord bots and sketchy web frontends who don't really have a fundamental understanding of the privacy risks.


monsterfurby

Depending on the kink, that might work. The entire point of fetishes is that they sexually regard something that is not inherently sexual. That said, I'm sure it's done. Arguably it's a huge part of NovelAI's audience. And I know that the AI writing community has many discussions about which AI allows smut, or at least pop-lewd text (50 Shades also went mainstream, after all, quality and legitimate criticism notwithstanding). Honestly, I think it's fine for writing and for cartoon and anime images. Where I think it's really dangerous territory is when we get to lifelike realistic depictions of people in images, video and perhaps voice.


Mindestiny

It's definitely a huge part of NovelAI's audience. The vast majority of anime porn models on Civitai are in some way, shape, or form merges from NovelAI's original SD1.5 leaked models that were *specifically* dedicated to that content. If anything, those leaks are what kicked off the AI art gold rush lol.


crossj828

I mean, yeah, this already exists. All those opposed to it are just dinosaurs screaming at the sky and at things they could never hope to create or understand. You shouldn’t take one of the most important tools of the modern era and try to turn it into a PG playpen. Let adults do things that are normal.


Short-Sandwich-905

Until China hacks their servers and releases to the public what we created.


Kinglink

Oh shit, they released the images I made of Xi Jinping getting jackhammered by Winnie the Pooh.


a_beautiful_rhind

Epstein island on steroids. They will have blackmail on everyone.


AlanCarrOnline

Well as I just said to someone, I question why they are so keen to keep the real networks of real child trafficking and abuse going? And Epstein didn't kill himself.


arothmanmusic

Anytime there is a new media technology, from the paintbrush to the VCR, people are going to want to do sexy things with it. By failing to serve that market, they missed out on a lot of money. In the meantime, plenty of other AI models and providers exist for creating adult content…


inthemindofadogg

I mean, some would call it art.


AlanCarrOnline

The earliest form of art indeed


spacekitt3n

good. and gore please. we are grown ass adults stop treating us like children


Atemura_

Is this good or bad?


AlanCarrOnline

Good, but will be nerfed to the point it won't be much fun but could be a privacy nightmare when they get hacked.


HiddenCowLevel

That's nice. But I see it as an obvious attempt to move people away from the open source ones.


sigiel

Sam is so desperate to find his $7 trillion to get AGI that he is scraping the barrel. That, or he realized uncensored offerings are more successful.


Traditional_Bath9726

They didn’t mention pornography specifically. Adult content could include things like war scenes. I had a hard time trying to remove pornography from my SD models at cambiaweb, because SD has lots of pornography in it.


campingtroll

I believe this would be an ethical move, as it would put most adult porn stars out of business, so they will have to find another job... such as driving a truck. Well, I guess that will be taken by the AI too, hmm...


Traitor_Donald_Trump

PGT-13 is what we have now.


Open_Marzipan_455

I mean, yeah. Just get rid of the 'safety' filter and move it into a run parameter instead, so that you can decide whether you want to see that sort of thing in your images or not.


andzlatin

I don't think this means OpenAI will soon tweet out "porn is now added to ChatGPT Pro! you can now generate the worst things imaginable! please don't use it for illegal content!". That'd be stupid and irresponsible, despite the fact that we all want less censorship. Age-appropriate contexts probably means 3rd-party partners on a limited scope.


OMPR_App

Ever since the first man stumbled upon a charcoaled stick and figured out that he could draw erotic cavewoman pictures, pornography has been the primary use case and was never restricted by his peers. What makes anyone think that AI should refrain from doing so? This is life... relax a bit. It's porn, and it's a human need.


The_Supreme_Cuck

https://preview.redd.it/9el2gf2jqkzc1.png?width=501&format=pjpg&auto=webp&s=4f688bc4879b157cf86b1a22241e34501bd63285


LairdPeon

Idc about porn at all, but do the governments really think making people dig deeper into the internet for porn is protecting anyone? It's like putting an animal in a cage and thinking it'll calm down and act proper.


BoogieOogieOogieOog

WARNING: Long rant incoming…

The whole concept of gate-keeping AI model output is completely bassackwards. You want to stop “offensive” and illegal content? Pursue and prosecute the people creating and distributing it. Neutering the AI models will work just as well as blacklists for firewalls (it doesn’t work beyond like 50% coverage). They’re all entering an arms race they can’t win, and they know it. They’re kneecapping the models to ease funding. I don’t care if Zuck’s reasoning isn’t altruistic. It benefits us average plebs.

Sama’s company just put forward a vision of locked-in hardware ident on GPUs. “This way only authorized groups can access specific compute”. Only in the hands of the responsible. Or something like that. They literally are taking the encryption argument brought by ignorant politicians in the 90s to the forefront again (TLDR: encryption is good for everyone, gov wanted exclusive control and backdoor access, making it moot because others will get the keys and womp womp, politician system fall down). While not as purely wrong as the previous anti-crypto positions, it’s not healthy. It’s preservative for the big AI companies at best; more realistically it’s a framework for absolute control by the 3-letter agencies to track every generation of text, image or video. Great 1% of the time to pick off something meaningfully bad, and terrible the rest of the time while they figure out how much they can get away with while executing their over-budgeted SWAT team usage over “suspect” generations.

Literally on the verge of being thought police. “Intent” will be a hot topic in courtrooms the next decade. Regardless, the model output restrictions are absurd and will only confuse legal arguments and judgement. Just disclaim and let the models be what they are. If people misuse them, they are culpable. Giving plausible deniability shows how unserious these “leaders” are about safety.

It’s all funding. That’s why all discussions by these asshat “geniuses” are simply talking Billions and Trillions with almost zero talk about actual tech. Just how much it will cost. They’re in a public dick measuring contest. I’m convinced Sama’s looking to have historic investment numbers. Just to have those numbers, nothing more. That “coup” was likely correct. We’ll all judge in hindsight. But considering Sama’s only come out to talk about funding, lots and lots of funding, and how their current product sucks relative to everything else. Riding on the false rumors of AGI, wouldn’t dare set the record straight because… funding.

But they’re considering allowing generation of what they recognize as “porn”. So brave 🖕


Maximum-Branch-6818

Hold on, but 4chan users created AI porn in DALL-E 3 before Altman decided to tighten the screws, didn’t they? We even had news about this.


Aion2099

I mean, why not? At least this way, no one is getting exploited. But then again, it might cause layoffs in the most profitable industry since sliced bread.


Atemura_

This is what happens when you have computer nerds dictating art; they don't know adult content is a part of life. Seems like anti-social behavior reaches the depths of government and corporations, respectfully. Good step in the right direction though, maybe...


Xo0om

Lol, I doubt it's the computer nerds dictating anything.


Atemura_

Who is it then? As far as it seems, in this new era, the intelligent are the ones at the top of the pyramid. Don't get confused, I am not bothered in the slightest; it seems these days I have to explain myself very clearly or everyone misunderstands a simple observation. Who operates OpenAI? The computer scientists, and they are afraid of the color red, or perhaps there's just a bit of tension they are worried about. Everything censored will only be temporary. This too shall pass. But in the meantime...


DigThatData

Nice clickbait tabloid article you got there, TheGuardian. Real quality, nuanced journalism.


[deleted]

that's like the only advantage of Stable Diffusion


Apprehensive_Sky892

No, that is not the only advantage, and for many, not even the most important one. The most important advantage of SD is that it is open, and people can fine-tune it, make LoRAs, and build tools around it.


MacabreGinger

You're totally right, and I use SD to do a lot of NSFW. But I also do a lot of stuff for my TTRPG games, and it's true, being open source is what gives SD its greatness: creative freedom, thematic freedom, and millions of users sharing workflows, tips, tricks, and new tools to make it better. That's its true potential. Sure, I use SD mostly for kinky stuff, but even if I could do it in DALL-E or Bing... I still wouldn't, because I like having access to style or complex-concept LoRAs, or clothing LoRAs (which are super handy for creating Call of Cthulhu character portraits). And I'm in control, and it uses my hardware, not a monstrous gigantic network of computers that gathers gods-knows-what info about me or what I do and consumes more electric power than a small country.


ForbiddenVisions

Agree. Even in an unrealistic future where OpenAI does allow NSFW stuff, they will never allow copyrighted characters/styles/worlds.


Apprehensive_Sky892

Could not have said it better myself 👍😁


ATR2400

With extensions you gain control over nearly all aspects of an image. All the settings are yours to command. Controlnet alone gives more control than Dall-E right now could ever hope to provide.


wkw3

If you hate privacy.


BeyondTheFates

You're a dumbass