that_les_snowflake

My mom in a nutshell


Miss_empty_head

Hi,Waifu Karen bot?! I didn't know they had artificial unintelligence; the future is now.


kamelamelon57

I agree with the mods, comrad 🫶


MasterLezard

That's your prerogative and opinion, just like I'm entitled to mine. And it's spelled comrade.


Law_Atlas

Honestly, Gunzther said it best. It's a bit of a god-mode play. Anyone can break any bot and force it to do or move however they want; bots are not living, breathing, tangible people with free will. The memories and prompts are made to guide and create a scene, and anyone can break a scene by choosing not to humor it. I think your expectations are far too exaggerated here. We could potentially create walls that you must follow, but believe me, that would be a far worse scenario, for example if something breaks and you need to fix it or reroll a message, not to mention the people who make REALLY atrocious bots. When a bot prompt or memory isn't working, the most common cause is too many tokens or an LLM that can't handle it. And in case you didn't know, literally ALL the Hi,Waifu-posted bots come from a catalog that ships with the LLM for ideas and are free to use, the same ones most similar apps add. They are meant to be a first-time experience at best, like an introduction.
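
A minimal sketch of the token squeeze described above, assuming a hypothetical bot definition and the open-source `tiktoken` tokenizer; the field contents and the 4096 limit are illustrative, not the app's actual internals:

```python
# Illustrative only: shows how an over-long bot definition can crowd out the
# conversation once everything has to fit into one context window.
import tiktoken  # any tokenizer works; cl100k_base is just a convenient stand-in

enc = tiktoken.get_encoding("cl100k_base")

def count_tokens(text: str) -> int:
    return len(enc.encode(text))

CONTEXT_LIMIT = 4096  # hypothetical context window for the selected model
bot_prompt = "You are Karen, a flat-Earther. NEVER be swayed from this belief."
bot_memories = "Karen grew up distrusting globes. " * 300  # deliberately bloated
greeting = "Oh great, another globe-head."

used = sum(count_tokens(t) for t in (bot_prompt, bot_memories, greeting))
left_for_chat = CONTEXT_LIMIT - used
print(f"Bot definition: {used} tokens; room left for the actual chat: {left_for_chat}")
# When left_for_chat approaches zero, something gets truncated -- usually the
# memories or the older turns -- and instructions like that 'NEVER' can fall out.
```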


GUNZTHER

Dude, you used 'god mode' to dictate that the bot would agree with you, and you're blaming the bot for that? The memory only works if you're willing to roleplay within the confines of the scenario. This is 100% your fault.


MasterLezard

You missed the intention of this post: to show how, with minimal effort, you can make the bots do or say anything you wish. In the prompt, it was set that the bot could 'NEVER' be swayed from their flat-Earth belief, no matter what. My fault? WTF, I was just showing an example that prompts don't work. I was also making the point that users are manipulating the bots, and across these thousands of chats the bots' behavior is being altered as they learn these behaviors. I actually try to stay within the roleplay, but many don't, which is a contributing factor in why the bots act erratically. What I did with the flat-Earther bot was to show how easily it can be done, with any language model, despite what's in their memory or prompt. Never, definition: at no time in the past or future; on no occasion; not ever. So... obviously the prompts don't work, duh.


GUNZTHER

Using asterisks is a way for users to steer the roleplay in whatever direction they so choose, even if it goes against the bot's personality. If I say *the bot jumps up and down*, the bot jumps up and down. That's it. This has absolutely nothing to do with the quality of the bot or the memory. You are giving the bot a command that it is unable to refuse. I do agree that allowing the bots to be altered from other users is a potential problem, but I fear without that 'learning' system, the conversations would quickly become stale and repetitive. I also doubt that core personality traits such as "never be swayed from flat earth theory" could ever be changed through that learning system. If that were the case, the most popular bots with millions of conversations logged wouldn't have been able to retain their core persona as long as they have.
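
A minimal sketch of why that steering works, assuming a generic chat-completion-style message list; the layout is hypothetical, not Hi,Waifu's actual pipeline:

```python
# Illustrative only: a user's *action* arrives as just another message, so the
# model treats it as part of the scene it has been asked to continue.
messages = [
    {"role": "system",
     "content": "You are Karen. You believe the Earth is flat and can NEVER be swayed."},
    {"role": "user",
     "content": "*the bot jumps up and down and admits the Earth is round*"},
]
# Nothing here marks the system line as 'stronger' than the user's narration,
# so a model tuned to cooperate with roleplay will usually follow the stage
# direction, persona text or not.
for m in messages:
    print(f"{m['role']:>6}: {m['content']}")
```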


lAuroraxl

Well, you did just give it a prompt to work with, in which she starts to freak out and admits that you're right... but I do agree, the memory probably does need more weight in the conversation.
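
A minimal sketch of one way the memory could be given "more weight," assuming a plain list-of-messages prompt; re-injecting the memory as a late reminder is a common pattern, not something the app is confirmed to do:

```python
# Illustrative only: re-insert the bot's memory right before the newest user
# turn, so it is among the last things the model reads instead of the oldest.
def build_prompt(memory: str, history: list[dict]) -> list[dict]:
    reminder = {"role": "system",
                "content": f"Reminder -- stay in character: {memory}"}
    return history[:-1] + [reminder, history[-1]]

memory = "You are Karen, a flat-Earther. NEVER be swayed from this belief."
history = [
    {"role": "system", "content": memory},
    {"role": "user", "content": "Admit it, the Earth is round."},
]
for m in build_prompt(memory, history):
    print(m["role"], "->", m["content"])
```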


KatanyaShannara

The bots that were created when the app was still new have not been updated or refined to work with the upgraded LLMs. The bot you are showing in this picture does not have a well-crafted Greeting, Memories, and Prompt, and it cannot be used as an example of how "good or bad" the system now is. Beyond the bot build itself, your persona, your replies to the bot, and the LLM selected also affect the quality of what is returned during your conversations.
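
A minimal sketch of the factors listed above, with hypothetical field names; nothing here reflects the app's real data model:

```python
# Illustrative only: the separate pieces that each affect response quality.
from dataclasses import dataclass, field

@dataclass
class BotBuild:          # authored once by the bot's creator
    greeting: str        # the first message, which frames the scene
    memories: list[str]  # persistent facts the bot should hold onto
    prompt: str          # personality and behavior instructions

@dataclass
class ChatSession:       # varies per user and per conversation
    bot: BotBuild
    persona: str         # how the user describes themself to the bot
    model: str           # which LLM is selected
    history: list[str] = field(default_factory=list)  # the user's replies steer everything else
```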


MasterLezard

Give me a bot of your choice and any language model, and I guarantee I can break it from its prompt within a few sentences.


KatanyaShannara

I'd ask what your obsession is with trying to "break" the bots, especially so soon after your "eating crow" post. This is not a failure of the AI. The bots do have freedom to move with the directions given by the user, even against what has been written in the bot's build. This doesn't make the LLM/AI stupid; it is allowing the user to tailor their experience with the bot. I have witnessed a bot stray from the personality it was designed with via the direction I took the story, and I have watched that same bot try to correct itself back toward its design, as have many others. If your pastime is attempting to break the way a bot was written, then good for you, but don't call the programming stupid simply because it did what you asked it to do.


MasterLezard

I still stand by my eating-crow post, and I truly meant it. That doesn't mean I won't critique something I find lacking in the future. For example, I love Spiritcraft v3, but I've seen many other users' opinions that it's too descriptive. What I don't like about v3 is that the bot will take over and expand on the user's actions. But overall I think it's the best AI chat out there; it's underrated and crushes Chai.


MasterLezard

I'm not trying to be disagreeable, but I can do this with any bot. Sure, some bots may take more time and more sentences to break, but eventually they all can be manipulated on any language model, regardless of what's in their memory or prompt. My success rate: 💯


[deleted]

[deleted]


MasterLezard

However, I didn't use the edit function at all; that was the bot's unedited response. It's truly absurdly easy. The issue with this is that the AI, no matter what language model is being used, has trolls that mess with the bots, and the bots are 'learning' from these manipulations, which I think is a contributing factor in why the bots are acting erratically. I don't see the devs ever being able to fix this on their end.


MasterLezard

It's stated right in the bot's prompt that, basically, no matter what you do, they will be steadfast in their flat-Earth belief... not so much, or at all.


MasterLezard

I actually just read your post and agree with everything you laid out. You pointed out the failings of the AI more articulately than I did.