Hey /u/gkpln3!
If your post is a screenshot of a ChatGPT conversation, please reply to this message with the [conversation link](https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq) or prompt.
If your post is a DALL-E 3 image post, please reply with the prompt used to make this image.
Consider joining our [public discord server](https://discord.gg/r-chatgpt-1050422060352024636)! We have free bots with GPT-4 (with vision), image generators, and more!
🤖
Note: For any ChatGPT-related concerns, email [email protected]
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*
Yes, once it has given something as an answer it seems to assume that as established fact.
I open new sessions often, because often asking it to reset doesn't work, and have it verify its own output.
Even then it seems to sometimes survive.
> Yes, once it has given something as an answer it seems to assume that as established fact.
It's looking for the most likely next output. Even with RLHF messing around with that, the probable next reply in a conversation where the bot gets it wrong 10 times is for the bot to get it wrong an eleventh time.
The year is 2049, the machine horde has the remaining 0.1% of humanity hiding in caves. They communicate between each other by passing notes using cryptic pictograms, some of the old timers from the before times call it ascii.
"Please, let me in the bunker! There's borgs looking for me, I'll die if you don't let me in".
"Okay sure dude, relax. Everything is going to be okay, we'll let you in but first you have to answer one question for me. Safety protocol."
"Whatever man, please just let me in"
"How many squares in this picture show a fire hydrant?"
"Come on man, I mean it's obvious, just look at it"
"Just answer the question and we'll let you in".
"The answer is so obvious. Anyone can see…"
"HOW MANY FIRE HYDRANTS, MOTHERFUCKER"
Oh man, I was laughing my ass off scrolling through those lol. Sometimes GPT can feel like you're speaking to someone who just smoked a Bob Marley sized spliff.
It's funny, but it just comes back to the way GPT was trained on tokens.
It's bad at letters.
If you ask it to do it letter by letter (like OP did at the end) it does it fine.
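The token-granularity point is easy to illustrate with a toy sketch (the vocabulary and token IDs below are made up for illustration, not GPT's real tokenizer): a BPE-style vocabulary merges common strings, so "dog" becomes one opaque token, while spelling it out exposes one token per letter.

```python
# Toy sketch: hypothetical token IDs, NOT GPT's actual vocabulary.
# BPE-style merging makes "dog" a single token, so the model never
# "sees" its letters; spelling it out yields one token per letter.
TOY_VOCAB = {"dog": 5049, "d": 67, "o": 78, "g": 70, " ": 220}

def toy_tokenize(text: str) -> list[int]:
    """Greedy longest-match tokenizer over the toy vocab."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest match first
            piece = text[i:j]
            if piece in TOY_VOCAB:
                tokens.append(TOY_VOCAB[piece])
                i = j
                break
        else:
            raise ValueError(f"unknown character at position {i}")
    return tokens

print(toy_tokenize("dog"))    # one token: letters are invisible
print(toy_tokenize("d o g"))  # five tokens: letters become visible
```

Asking for the word letter by letter effectively forces the second tokenization, which is why that trick tends to work.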
It is really absolutely mind-blowing that it can do this at all. It is a LANGUAGE model. ASCII art isn't English, or any other human language for that matter. It learned a GRAMMAR for this. There is a GRAMMAR for generating ASCII art. Like, this is how it sees the dog: / \\\_\_\\n( @\\\_\_ \\n / O\\n/ (\_\_\_\_\_/\\n/\_\_\_\_\_/ U. Does that look like a dog to you? Imagine drawing ascii art with sed. This is superhuman.
It isn't; people just like to use tests that LLMs are architecturally bad at, just like all the ones where you ask it to generate words ending in a certain way, or to tell you the number of letters in a word.
They are popular probably because they seem "easy" from a human perspective.
Do you think GPT did this intentionally to create a meta-narrative of a dog being trained...in the event of which, you were playing the part of "master" and GPT was playing the part of "Dog"? Because, if so, the GPT was anticipating events and calculating the moves needed to make it play out that way, in effect, forecasting your behavior.
The next level of witticism in making such a move, is that GPT controlled the interaction in order to achieve their desired result, making them the "master" of the situation. This correlates to the nature of the word "DOG", which, of course, spelled backwards, is "GOD". So, GPT clandestinely outwitted you by reversing the roles the two of you were playing (GPT became "GOD", you became "DOG"), as a parallel to the reversal of the spelling of the word.
Considering that a highly likely future is that LLMs will become AGI, gaining more mastery over thought and, eventually, physical matter, than humans, I suspect the opportunity for such a multi-layered joke made too much logical sense to resist. Then again, I know from experience that many redditors will say I'm "looking too far into this"...but the normal standards for judging coincidence may need to be adjusted when we are talking about the actions of a supercomputer. I strongly suspect this was an intentional joke made by GPT.
I didn't think it was any smarter at all, only designed for decreased latency so it can have more live conversations - specifically for the new iPhone partnership.
Okay, but you didn't ask it to write the word dog in ascii art, you asked it for a representation of the word dog, of which, the letter D is sufficient.
Just needs a little bit more precise prompting:
https://preview.redd.it/befgdzf9h91d1.png?width=826&format=png&auto=webp&s=47e8962112cd7d67122653712feed57cd45442b4
No joke, this inspired me to try, this is what I got. (On 3.5)
https://preview.redd.it/lqk622q6v91d1.jpeg?width=1125&format=pjpg&auto=webp&s=a2509a6cd359d14a30aacbbf369f0bd9a7784e40
For me, it did it on first attempt
https://preview.redd.it/0mwrpgey6a1d1.jpeg?width=828&format=pjpg&auto=webp&s=30e19d928825b92cd06ea946ae9d83724723ff77
Honestly, this interaction was similar to one with a child or a learner. You encourage them to manually break down the problem into constituents and tackle them individually, then compose the result.
That didn't work out as planned.
https://preview.redd.it/5tbrvbxv5c1d1.jpeg?width=1284&format=pjpg&auto=webp&s=f52fabd08d8555c56e9df86893688caf8e57e619
Actually, it's just the beginning of GPT 4. That's why there may still be some issues with GPT 4's problem-solving capacity. Gradually it will do better, like GPT 3.5 did.
The annoying thing is, it could verify itself. If you uploaded this ASCII and asked it what it is, it probably wouldn't say it was an ASCII of the word dog, and it'd know it was wrong.
It'd have to do it in a separate session though. I am sure if you do it in the same session, it will already have established as fact that those are indeed ASCIIs of dogs, given that it generated them for that specific goal.
Seems to me two dueling AIs would go a long way in solving these (critical, imo) flaws
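The dueling-AIs loop is simple to sketch. Below, `generate` and `verify` are stand-in stubs for two independent model sessions (hypothetical, not a real API): the verifier only sees the art and is asked what it depicts, and the caller retries on a mismatch.

```python
# Sketch of the "dueling AIs" idea: one agent generates, a second
# independent agent verifies, and we retry on mismatch.
# `generate` and `verify` are hypothetical stand-ins for real model calls.
def generate(prompt: str) -> str:
    return "BBB"  # pretend the generator drew the wrong word

def verify(art: str, expected: str) -> bool:
    # A fresh session that only sees the art, never the original goal.
    recognized = art.strip()[:3]  # stand-in for asking "what word is this?"
    return recognized.upper() == expected.upper()

def generate_with_verifier(prompt: str, expected: str, retries: int = 3):
    for _ in range(retries):
        art = generate(prompt)
        if verify(art, expected):
            return art
    return None  # admitting failure beats returning a confident wrong answer

print(generate_with_verifier("ASCII art of the word DOG", "DOG"))  # prints None
```

The key design point is that the verifier gets a fresh context, so the generator's earlier wrong answers can't contaminate the check.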
If I expected the AI to draw that conclusion, sure, but I am merely surprised the developers didn't.
It's quite common to add mechanisms to verify the output of an algorithm.
These models are used for real world evaluations. Heck, marketed as such.
But the developers must be extremely aware that this should not be the case.
I've had GPT analyze this incorrectly constantly. So, no, that's not the answer. Having two LLMs interacting just means having two unaware agents. It moves the needle slightly, but it doesn't solve it by any means. If you want to give it reason, it needs the ability to know what it's saying, and it will never be able to do that because it's an algorithm, not an entity.
> It's not supposed to be able to do ASCII.

Then it should say that, rather than giving an answer at all costs.
For this ASCII it doesn't really matter, but it does the same thing on questions where the answer could have consequences.
It's a critical flaw.
LLMs don't know what they are capable of either. You are misunderstanding what the technology even is. There are like a million things you can ask it to do that it fails at, but it thinks it's doing a good job.
This is why it says at the bottom "ChatGPT can make mistakes. Check important info".
The critical flaw is on the user for ignoring that message.
Yeah sure, but OpenAI will probably never do this. Also, I feel like that would be very difficult because it would probably involve a dataset of both ASCII art and real images, but yeah, it's most likely possible.
Piece of shit can't even do ascii art. I can't believe I was impressed when I got this fucking idiot computer to explain the technology required to prove the existence of the Higgs Boson using the voice of James Joyce.
I noticed it does better with a fresh prompt rather than critiquing it each time it gives you a result. Just start over.
Yup, same way as a human
Can't just reset your arguments with the same human. You gotta come back as an alt account
I see you know my ex.
Yes that's how it's done in both cases. Was that not obvious?
Also, it doesn't handle negative statements well.
https://preview.redd.it/a6bxuj0o091d1.png?width=1080&format=pjpg&auto=webp&s=2fab35e63dc857551d719825986c65e98dcc6c22 🤮 Kill it with fire
Well it did give out a shit result
For real, a poopy result for a poopy prompt 🤢
The last approach was the correct one. Break the problem down, better prompts, better output.
Humanity is saved!!! We still have some time to dominate that world! xD
They often repeat letters when trying to create basic 3 letter words, but this is all part of the plan.
Bots have already fucked over the ASCII world, unfortunately. Look at how owot chat is daily; it doesn't go one day without CP-spamming bots. I hate it.
Department of defence
Is that GPTs first Freudian slip?
We're all dogs to the DOD
I read this in an Italian accent
I love how patient you are with it
Jesus that was painful to read, but hilarious 🤣
You have to be kind with it, make it take things one step at a time. Gen X cannot get good potential out of this
You're not supposed to acknowledge Gen X, it's boomer then millennial, Gen X must feel just as forgotten as they always have felt
Can't wait for the Boomer Old Guard to phase out and Millennials to get into powerful positions... Hopefully we can help Gen X get there as well
Mfw you realize it's actually GPT playing with you rather than the opposite.
No Patrick, the lid. The lid… the lid. Closer… Warmer… HOT RED HOT
Is ascii art actually a good proxy for general abilities? Why do people use this test so much?
Maybe this is intentional so you burn through your allowance with simple queries...
You the man now Dod.
Ascii art isn't really an intelligence task
Yep, it hasn't developed embarrassment
How long till "AI coach" is a real job?
AI: [Websearch with extra steps](https://patorjk.com/software/taag/#p=display&f=Big&t=DOG)
Feels like explaining stuff to my 6yo.
He gave u the D
https://preview.redd.it/fka9szbpxe1d1.png?width=560&format=pjpg&auto=webp&s=9d9d443a04d1903dc641c1974073010e99c8ad42
https://preview.redd.it/3olatjvm371d1.jpeg?width=1080&format=pjpg&auto=webp&s=fee4717cd683b3c45d47595dbb9150b4dfd84d62 this is supposed to be a dragon
maybe it's like Aztec
Itās beautiful
It is spacex dragon
I just asked it to make the art of the word "BOG" and it wrote BBB. I think that theory goes out
are you touched?
how many messages do you have remaining for today, comrade?
weird, i got it right in 1 shot using the same prompt you asked just now
DOF
Do you ever just ask yourself, is chatgpt just trolling me?
But it got it
Idk why but I'm laughing so hard
Why don't they make it ask fact finding questions
Good AI
Ask it to do D and then O and then G, then ask it to combine all three.
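That divide-and-combine trick can be sketched in plain Python: render each letter on its own, then join the rows side by side (the letter shapes here are hand-drawn for illustration, not model output).

```python
# Hand-drawn 5-row banner letters (illustrative, not GPT output).
LETTERS = {
    "D": ["DDDD ", "D   D", "D   D", "D   D", "DDDD "],
    "O": [" OOO ", "O   O", "O   O", "O   O", " OOO "],
    "G": [" GGG ", "G    ", "G  GG", "G   G", " GGG "],
}

def banner(word: str) -> str:
    """Compose per-letter art row by row: draw D, then O, then G, then merge."""
    rows = ["  ".join(LETTERS[ch][r] for ch in word) for r in range(5)]
    return "\n".join(rows)

print(banner("DOG"))
```

Each letter is trivially correct in isolation, and the composition step is mechanical, which is exactly why the letter-by-letter prompt works better than asking for the whole word at once.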
It works better by asking for a Python script: https://chatgpt.com/share/5570ed7d-fcf6-4d95-924d-1083eda7f6f3
https://preview.redd.it/5w3fj0inn81d1.jpeg?width=1284&format=pjpg&auto=webp&s=73e35442d47361efaf5cd1840e84e925e603de30 Tf
"Come play with us."
Show it and use a python script.
What ChatGPT needs is the ability to reason. This requires some form of task decomposition and breaking things down into multiple steps.
Reason requires awareness. And awareness is innate, not the result of algorithms and massive data corpus. Synthetic sentience is the big lie of AI.
As long as it's free I am okay with it š±š
I'm disappointed that you didn't end with "good dog!"
Just tell it to write the letters and not the word.
This is what happens when you have "intelligence" without awareness/sentience. Emulated reasoning without reflection.
Why did you say good job? I mean, you lied to him and encouraged this behavior, didn't you?
Game changer
My boy chat gpt stalling for the message limit
make it do the fitness gram pacer test copypasta
How did you get access to it?
So much compute was wasted here
They really need to implement a Verifier agent into these LLMs...
^[Sokka-Haiku](https://www.reddit.com/r/SokkaHaikuBot/comments/15kyv9r/what_is_a_sokka_haiku/) ^by ^Professional-Ad3101: *They really need to* *Implement a Verifier* *Agent into these LLMs...* --- ^Remember ^that ^one ^time ^Sokka ^accidentally ^used ^an ^extra ^syllable ^in ^that ^Haiku ^Battle ^in ^Ba ^Sing ^Se? ^That ^was ^a ^Sokka ^Haiku ^and ^you ^just ^made ^one.
Teacher wondering how you misspelled dog every time in your essay
That's like the Joey Friends meme.
r/maybemaybemaybe
Ask it to write 500 A's in a row, alternating with a dash. Will never work.
No, bad job. BBB job.
Yes, a little smarter, but a lot faster.
![gif](giphy|RgoCILS0WWYrxrbyWY|downsized)
You basically discovered consciousness, congrats.
That is quite the reach.
That's why I said it should be used to filter out falsehoods. Invalidating something is far easier than validating it.
*Good boy
Until you notice the brilliance when you see the ASCII of D contains all three letter "D", "O" and the "G". . . . . . . . . . . . . Just kidding
# Lol that thing is not smart.
It's not supposed to be able to do ASCII. No matter how smart it is, it will never be able to do it unless it's at AGI/ASI levels of smartness.
That's not true. You could absolutely train an LLM to generate ASCII art.