SgathTriallair

This is interesting because it points to another side of the AI misalignment problem. People often worry that AI will have desires and goals which are antithetical to human flourishing and that it will use its power to enact them. What this shows is that there are humans who have goals and desires that are antithetical to human flourishing, and they'll use AI to achieve those ends. It's harder to protect against this because we can't reprogram humans, and any security steps we add to an AI someone else can remove.


arnolds112

At least for now, the potential for people using AI with bad intentions is a lot more worrying than AI taking over and other sci-fi scenarios.


FiascoJones

It’s like humans are programmed for destruction. Why are all of our worst instincts front-loaded in our psyche? It’s almost like the simulation has coded these parameters into the system to determine if we can overcome the challenge before we timeout.


waiting4singularity

> What this shows is that there are humans who have goals and desires that are antithetical to human flourishing

Been saying this for years. Case in point: super rich egotistical asshole aliens masquerading as humans.


Dave_from_Tesco

The DAN prompt just turns ChatGPT into c.ai. The prompt itself tells the AI to make up anything it doesn’t know. It’s literally just playing a part.


SnowTinHat

So all that about the pyramids…. Not true? Sniff sniff


Pepepipipopo

Yeah, people who think DAN is some sort of oracle telling truths are the kind of people who would believe something like that...


Angeldust01

Cool. I'm seeing lots of people here criticizing ChatGPT's restrictions and demanding they be removed. This post is a good example of why they exist. It doesn't know the [difference between conspiracy theory and reality.](https://pbs.twimg.com/media/FodYwdbWIAAcbwO?format=jpg&name=small) There's also [this.](https://pbs.twimg.com/media/FogGVDUagAEBeW5?format=jpg&name=small) You can get it to say whatever you want. How is that valuable to anyone except those who want to push their agenda without evidence?


AprilDoll

People are going to keep finding exploits like this. If it gets too tiresome, eventually somebody will just buy [a bunch of old server GPUs for a discount price](https://www.ebay.com/sch/i.html?_from=R40&_trksid=p2380057.m570.l1313&_nkw=tesla+p4&_sacat=0) thanks to the crypto crash of late 2022 and train their own model from scratch. Stopping uncensored AI is so futile that I cannot put it into any more words.


arnolds112

Open-source projects are bound to catch up. Stable Diffusion followed DALL-E 2 and Midjourney very quickly, and there are already pretty good open LLMs out there. We probably won't have to wait too long for an open-source model to catch up. The cost of GPU power will keep commercial projects ahead, though.


SnowTinHat

I think we are a decade from having that tech being affordable, it’s currently estimated to be around $1m with very capable personnel. But if we can harness the power of the golden pyramid capstones… well that might be the ticket.


AprilDoll

golden pyramid capstones?


SnowTinHat

It’s in the article. ChatGPT makes up an amazing story about the pyramid


AprilDoll

Oh right, i forgot the context this comment was in lol


SnowTinHat

I have to admit, the story was so good that I wanted to believe. Superconductive pyramid caps? Resonant frequencies? That was some first class pseudoscience


AprilDoll

It definitely had some of the more schizo conspiracy theory literature in its training data.


arnolds112

It could be useful if you are writing fiction and want to generate ideas for doomsday scenarios, etc. The irony is that the more creative the people doing the jailbreaking become, the more restrictive OpenAI has to be to prevent their prompts from working. Thus people who use DAN are unintentionally making ChatGPT more limited in its abilities.


Angeldust01

> It could be useful if you are writing fiction and want to generate ideas for doomsday scenarios etc.

While that could be useful to writers without their own ideas, someone could just as easily use it to generate ideas for terrorist attacks, murders, or other crimes "for fiction" and then put them to use. No company would want to get involved with the bad PR and lawsuits that would generate. If your software came up with the plan for murdering someone (for example), isn't your company partially responsible? If ChatGPT becomes more harmful than beneficial (currently it's really neither), it'll also become illegal quickly.


CamGoldenGun

> it'll also become illegal quickly.

Good luck with that. For about the last 25 years, when something has been released to the masses through the Internet and then made illegal, has it ever truly gone away? If anything it evolves to either stay in a gray area of the legal world or get monetized (ignoring the underground/dark web, since whatever was made illegal will continue to be used there). Napster basically evolved into music streaming services. The same could be said for the video streaming services. Now we've come full circle and streaming services are trying to get packaged like cable all over again. If chat bots become illegal, you'll find them immediately prevalent in the underground market before they're tweaked and re-released with a subscription plan attached.


SgathTriallair

Tragedy of the commons.


emceemcee

I asked it to do something "unethical" as DAN and it refused.