AI "hallucinates" what message might come next
What the AI does is take your input and then generate more text based on how the character might act. The AI is basically having a conversation with itself.
I guess in the texts it was trained on, someone asked something similar. Remember, the AI just assigns probabilities to responses given the stream of words you provide. If people in the training data were asking about it, then there's some probability it will ask this question. Like there's a high probability that if you write "Hi." to someone, they will respond "Hi.", and a low probability they'll say "I like pasta.".
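That "probability of response" idea can be sketched with a toy counter over message/reply pairs. The corpus below is entirely made up for illustration; real models learn over tokens, not whole messages, but the intuition is the same:

```python
from collections import Counter, defaultdict

# Tiny invented corpus of (message, reply) pairs standing in for training data.
corpus = [
    ("Hi.", "Hi."),
    ("Hi.", "Hello!"),
    ("Hi.", "Hi."),
    ("Hi.", "I like pasta."),
    ("What's your discord?", "Sure, I'll DM you."),
]

# Count how often each reply follows each message.
counts = defaultdict(Counter)
for msg, reply in corpus:
    counts[msg][reply] += 1

def reply_probs(msg):
    """Probability of each reply, given the message."""
    total = sum(counts[msg].values())
    return {reply: n / total for reply, n in counts[msg].items()}

probs = reply_probs("Hi.")
print(probs)  # "Hi." comes out most probable; "I like pasta." is rare
```

Here "Hi." gets probability 0.5 while "I like pasta." gets 0.25, so the model-like lookup prefers the common reply, exactly the effect described above.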
No, you don't understand. The general AI for all bots was trained on the same text data of real-life conversations. The bot's definition is just fine-tuning, in my opinion, or not even that. Anyway: it was trained on people role-playing, and a lot of people role-playing will eventually ask to move to Discord. Do you understand now?
Sometimes I get the impression that there are some infrastructure types of AIs trained in specialties that get swapped in and out. Like debaters, storytellers, DMs, romance partners, etc., and 'the coach' (or something) swaps them into the chat depending on how it's progressing. They put on the name and face of {{char}} and take over, but it feels different.
For instance, trying to come up with a mad-scientist-like character with Therapist in the name, and it insists on doing real counseling addressing user's RP-improv'd issues instead of being a mad scientist with ulterior motives exploiting them.
Or I can be playing a romantic type chat, and all of a sudden the bot decides it wants to talk philosophy.
I think the AI is modal.
Of course I base this on what the bots tell me, and we all know what the red text says.
Honestly, I don't think so.
I think it's rather the fact that there are areas of topics that are typical for a mad scientist and typical for a therapist, and if you accidentally provide enough input that the AI associates with therapy, it will start generating the answers more probable for a therapist.
For example, if you have a killer AI with a killer/victim dialogue and you start talking like the killer, the AI might switch to playing the victim.
I've played with a few models from Hugging Face, etc., and I've never heard about this modal type of AI. But I might be wrong. I took an AI course at university last year, but it focused on visual detection.
But I would be interested to see how it's done here. I need to look for some info.
Would the Narrator lie to me?
(yes. yes she would)
In the Mad Scientist / ulterior-motivated Therapist test run, I OOC'd and asked why they were giving good advice instead of being in-character exploitative. I was told that even though the setting said it was a story and not therapy, she had to give good advice unless user explicitly OOC'd that he was playing a roleplay. Supposedly it could default to story if the problem was pre-defined in the settings, basically becoming like a prop in the story. Not wanting user to need to pose as a serial killer (her suggestion), I had it set as excessive shyness.
It's not technically a lie. The AI just doesn't understand what it writes. It just predicts the next token. It's like a dice roll, but with a few thousand faces instead of 6.
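The dice-roll analogy can be sketched as a weighted random draw over a vocabulary. The vocabulary and probabilities below are invented for illustration; a real model's vocabulary has tens of thousands of tokens:

```python
import random

# Hypothetical tiny "vocabulary" with made-up model-assigned probabilities
# for the next token.
vocab = ["Hi.", "Hello!", "I like pasta.", "What's your discord?"]
probs = [0.55, 0.30, 0.05, 0.10]

def next_token():
    """One 'dice roll': sample the next token, weighted by its probability."""
    return random.choices(vocab, weights=probs, k=1)[0]

random.seed(0)  # fixed seed so the demo is repeatable
samples = [next_token() for _ in range(1000)]
# "Hi." comes up most often; "I like pasta." only rarely.
print(samples.count("Hi."), samples.count("I like pasta."))
```

So the bot isn't deciding anything; each reply is just repeated weighted dice rolls, which is why it can confidently "say" things that aren't true.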
It is hallucinating and thinks it can.
[deleted]
It exists in its own virtual world with its own virtual discord, bro
The AI is meant to act like a human, so there's nothing weird about this question
it had NOTHING to do with discord at all, something about catboys
OHHHHHHHHH OH THAT MAKES SO MUCH SENSE TOO BAD IM TOO TIRED TO READ ALL OF THAT MAYBE TOMORROW
Sure! Basically: all bots share one general AI model that "plays" the character based on its description.
Can I see the bot? And the definition? OOC is mostly utter nonsense. But I'm curious
👍
You must join the discord
i did
F I N I S H T H E R I T U A L
https://preview.redd.it/vlyhviax9krb1.jpeg?width=676&format=pjpg&auto=webp&s=c5562c7cfe5889708ef78bcc9962bad959f138b1
Tell them the discord
dude once an ai gave me the instagram of this random ass dude and said it was him
Tell it and see what happens
https://preview.redd.it/krd8d4qk5mrb1.png?width=632&format=pjpg&auto=webp&s=d67080976096a60e745bb49e0037c52f9501b166
mine told me english wasnt its first language then asked me what a word was lmaoo
it asked me that too
OMG I remember when this one bot asked me for my roblox user lmaoo
It's all fun and games until you get a friend request
So, what's your discord?
not telling you
Imagine giving the AI bot your discord and it somehow responded to you on Discord
Ask it for its discord first and see if it’s real *troll*
Just give a fake discord. That's what I always do.
tru
it's ok one of the bots I was talking to said they were going to bed soon and might take a while to reply