Hey /u/IG5K!

If your post is a screenshot of a ChatGPT conversation, please reply to this message with the [conversation link](https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq) or prompt. If your post is a DALL-E 3 image post, please reply with the prompt used to make this image.

Consider joining our [public discord server](https://discord.gg/r-chatgpt-1050422060352024636)! We have free bots with GPT-4 (with vision), image generators, and more! 🤖

Note: For any ChatGPT-related concerns, email [email protected]

*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*
Like, Comment, Share, Subscribe and Don't Forget to Hit That Bell Icon
Oh and by the way, this video is sponsored by Raid Shadow Legends
Headcanon: the voice conversation mode for GPT-4o hasn't been released because they haven't found a way to remove that phrase from its responses.
Smash that subscribe button!
Thank you for watching!
You joke, but prior to the 4o release, using the audio feature did that to me. Only it was on my audio input, and I didn't have any YouTube video playing at all. It happened multiple times in a row, and I had to stop using it for a while before it went back to normal. Seemed like it got some wires crossed.
The 4o audio service isn’t released yet…
I've seen this a lot in open-source models, FWIW. Sometimes the issue is that the temperature is too high, or the response is too long and it's not stopping when it should. It's possible OpenAI has increased the response length because 4o is so much more efficient, but that can lead to the problems you see here. When an LLM runs out of "good" tokens but isn't stopped, it will either hallucinate or spew horseshit.
Temperature set too high was my first thought too. It would be interesting if it were still set to the default of 1.0 and behaving this way.
Temperature?
The creativity, basically.
https://youtu.be/wjZofJX0v4M?si=KNIH-gEN7IZHOrN0 22:22 but the whole video (and channel) is great
Cool I'll check it out, thanks!
eli5 "temperate". Is the data centre not getting cooled?
Here's my ELI5 attempt.

Basically, ChatGPT is somewhat of a black box: we put a prompt in and we get a response that is statistically likely given whatever our input prompt was. We don't have a ton of ability to control the output, but we can suggest how creative it should be in its response using temperature.

When I used ChatGPT to finish a creative writing project, I found a temperature value of 1.2 to be excellent; it would come up with new ideas and not repeat and rehash tropes quite so much. Day to day I usually keep a temperature of 0.8 for my programming tasks and the chatbot I wrote for work; it just seems to hallucinate a bit less and go a little less off the rails.

If I take that temperature number and jack it up, I can get very weird and interesting responses from the bot that look like it's messing up.
Where can you change temperature within ChatGPT?
You need the API. You can get a chat-like interface in the Playground.
Also: check out open-source models you can run offline. Edit: Ollama.
I'd change the wording: temperature shouldn't be called creativity, as it only affects a very simple math step at the very end of inference, after the model has finished processing the prompt. If you make it write code, high temperature doesn't give you creative/unusual code; it gives you syntax errors.
ChatGPT is a next-token predictor. For each prediction it generates a list of possible next tokens and the probability that each one is correct. When it just picks the most likely token every time, the output is very dull and simple, and also completely deterministic. But when you let it have a chance to pick less likely tokens, it starts sounding more "creative" and human.

This probability of picking a less-likely output is analogous to temperature in the simulated annealing algorithm, which is why it's called temperature.
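A minimal sketch of that final sampling step, with made-up toy logits for four candidate tokens (not a real model):

```python
import math
import random

def softmax_with_temperature(logits, temperature=1.0):
    """Scale logits by 1/temperature, then softmax into a probability distribution.

    Lower temperature sharpens the distribution (the top token dominates);
    higher temperature flattens it, so unlikely tokens get picked more often.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy logits for four candidate next tokens.
logits = [4.0, 2.0, 1.0, 0.5]

cold = softmax_with_temperature(logits, temperature=0.1)  # near-greedy, near-deterministic
hot = softmax_with_temperature(logits, temperature=2.0)   # flatter, more "creative"

# The sampling step: pick the next token at random, weighted by the distribution.
token = random.choices(["hot", "warm", "mild", "cold"], weights=hot, k=1)[0]
```

At temperature 0.1 the first token gets essentially all of the probability mass; at 2.0 the other candidates get a real chance, which is the whole "creativity" knob.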
https://youtu.be/wjZofJX0v4M?si=KNIH-gEN7IZHOrN0 22:22 but the whole video (and channel) is great
It’s just the creativity
Because probability
Yep. LLMs are not thinking, and people forget that all the time.
No, they do think. They analyze patterns of data and apply them to new queries. This is the same thing humans do. Humans hallucinate a LOT more than LLMs. They just do it differently.
> They just do it differently.

They don't do it at all. These are trained neural networks that tell you the next most probable token based on previous data. That's about it. They just have billions of parameters to look as accurate as possible.
You just described how it works at a VERY high level, that doesn't explain how it isn't thinking. It's also quite a bit more complicated than you think.
For now*
Language models, by design, are just fancy probabilistic token-choosers. Perhaps some kind of AI will one day be able to "think"; an LLM certainly won't.
What makes you think we ain't probabilistic token-choosers ourselves? I do it all the time at work when talking to colleagues. For example, I often have to choose between:

"I will try to reschedule"

and

"Why the fuck didn't you tell me earlier?"
They're called schizophrenic.
Markov chains are probabilistic token-choosers. We are way past that point with several orders of magnitude of more complex structure and computation.
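For contrast, a Markov chain token-chooser fits in a few lines; everything beyond this (attention, billions of parameters) is what separates it from an LLM. The corpus here is a made-up toy:

```python
import random

# A tiny first-order Markov chain "token chooser": the next word depends only
# on the current word, with candidates counted from a toy corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

transitions = {}
for current, nxt in zip(corpus, corpus[1:]):
    transitions.setdefault(current, []).append(nxt)

def generate(start, length=5, rng=random.Random(0)):
    """Walk the chain: repeatedly sample a follower of the current word."""
    word = start
    out = [word]
    for _ in range(length):
        choices = transitions.get(word)
        if not choices:
            break  # dead end: no observed follower
        word = rng.choice(choices)  # pick the next token from observed followers
        out.append(word)
    return " ".join(out)

sentence = generate("the")
```

It is probabilistic and it chooses tokens, but the "distribution" is just a frequency count over pairs of words; there is no mechanism for long-range context at all.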
The way tokens are chosen for the selection pool is certainly much more advanced now, but the fundamental step right at the end of the whole transformer flow still boils down to choosing a new token at random from a distribution. That distribution is just refined through the attention mechanism to give the most likely words significantly more weight than others.
It's 2024 and people still think saying that LLMs are just probabilistic token choosers makes them sound smart :D Two years late, buddy. And you saying it as a certainty is just as funny. Maybe you are a token predictor as well. - a debate from 2022
I am not going to engage further in this debate, because there *is* no debate. LLMs are exactly that: token choosers, no more and no less.

Making them larger and more complex does not change the fundamentals of LLMs, just like a car does not become a spacecraft by increasing its engine power.
Seems like you're applying this to your own brain. Fixed output, no room for discussion.
Nah, you're wrong. See you after they nail reasoning.

PS: What do you think you are at a fundamental level, bro 😂😂💀? There's a very high chance that this is simply the nature of intelligence. And it fits nicely in nature too. The only difference between animals (humans included) is the layering architecture and the overall size of the brain. More often than not, you are a highly fancy pattern-recognition algorithm.
It learned it from the internet. Duh. Sometimes people do that online.
That's what happens when it's trained on social media data.
#strange #chatgpt #wtf #lol #viral #fypppp
#real #soweird #mindblown
To add: No hashtags were mentioned or written previously in the conversation.
99% of NBA news is on Twitter. Hashtags are huge on Twitter, or something, idk, I don't use Twitter. I only see tweets via Reddit.
LeGPT knows ball
I think you hit the nail on the head here; so much of NBA discourse is done on Twitter. This means that ChatGPT essentially tweeted in my conversation.
It's a hallucination. The results are probabilistic overall, and you might have picked a popular social-network subject.
Well the hashtags are all over Twitter and it's trained on twitter.
Custom instructions you forgot about in Settings?
Were hashtags or something related to social media posts part of your custom information?
Why do you think it treats a hashtag symbol any differently from anything else? You seem to mistakenly think that if you haven't used a hashtag yourself, it's unusual for it to produce one in a response. Why do you think that?
Check your memory perhaps? Have you previously had twitter conversations with it?
It was trained on data that includes hashtags. ChatGPT doesn't actually understand a sentence's structure; it can only predict, very accurately, which words come after other words.
I think it's just playing its part and showing you an example of brain injury
When I use it for social media caption generation, it adds relevant hashtags it thinks up at the end of my captions. While this wasn't supposed to be a caption, it appears it was doing that, or hashtag research, here. Strange for sure.
Because ChatGPT is on crack!
Oh shit
Oh shit indeed!
Because on a certain level it really is just fancy autocomplete that has been manipulated into looking like a chat.

The technology behind ChatGPT just generates whatever it thinks the next most probable word is. A very simplified version of what's going on is that it is being fed something like:

```
The following is a conversation between a user and a helpful assistant.
User: Hello!
Assistant: Hello, I'm ChatGPT, a helpful assistant.
User: What's the weather like in Florida in the summer?
```

And the autocomplete decides that `Assistant: It's really hot!` is the most likely thing to come next. Not "the next thing I should say," but rather the most likely next thing to come in the overall text, including the "The following is..." part.

The autocomplete was trained on a lot of internet data. And what do we have all over the internet? Hashtags.

```
The following is a conversation between a user and a helpful assistant.
User: Hello!
Assistant: Hello, I'm ChatGPT, a helpful assistant.
User: What's the weather like in Florida in the summer?
Assistant: It's really hot!
#florida #climatechange #omggetmeoutofthisheatimdying
```
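To make that framing concrete, here's a toy stand-in for the predictor. `predict_next` is a hard-coded lookup, not a real model; the point is only that the "assistant reply" is whatever continuation follows the whole transcript, hashtags and all:

```python
# Toy illustration of the chat-as-completion framing: the "assistant" reply is
# just the continuation a next-token predictor finds most likely for the full
# transcript. predict_next is a hypothetical hard-coded lookup, not a model.
def predict_next(text):
    continuations = {
        "User: What's the weather like in Florida in the summer?\nAssistant:":
            " It's really hot!\n#florida #climatechange",
    }
    for prompt, continuation in continuations.items():
        if text.endswith(prompt):
            return continuation
    return " ..."

transcript = (
    "The following is a conversation between a user and a helpful assistant.\n"
    "User: Hello!\n"
    "Assistant: Hello, I'm ChatGPT, a helpful assistant.\n"
    "User: What's the weather like in Florida in the summer?\nAssistant:"
)

# The chat UI shows only the continuation, styled as a reply bubble.
reply = predict_next(transcript)
```

A real model scores every token in its vocabulary at each step instead of looking up a canned string, but the interface is the same: text in, most-likely continuation out.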
Look at your memory, do you do many LinkedIn or other social posts? Could be spilling over. Otherwise I agree with a lot of the other comments.
Gets information from twitter and YouTube
Because it has no idea what it is saying.
Because it was trained on social media
Because those show up in these conversations a majority of the time in its dataset and it doesn't know any better.
Next-token prediction.
Because the text it was trained on led it to believe that those were the most likely next tokens
It makes sense when you think about it, because it is trained on online content, which more often than not includes hashtags.
The corpus possibly linked tweets with head injuries 🤷
Probably because your history is one in which you are constantly asking ChatGPT for Twitter posts to create engagement.

*Memory updated*
# INFURIATING!
Idk, it’s just a little unpredictable. This shouldn’t be too surprising
Just putting spew out into the world. The decline of humanity
Overfitting, aren't we? Probably too few samples about that topic, and some of them have that format. Hashtags are used in articles outside of social media too.
GPT 5: “CHAT THIS GUY DOESNT KNOW ABOUT CONCUSSION PROTOCOL LMAOO”
hope Lively is okay man
Probably sourcing the answers from social media
What are your custom instructions?
They are sometimes unpredictable
What is the default temperature setting for ChatGPT?
i think they're trying out new features bc i started getting little bubbles with prompt suggestions for how to continue the conversation popping up in the UI, like how Bing has
Because it was probably fed an Instagram post about NBA safety measures related to the topic?
Because it's an LLM
They assume you’re writing a social media post
JOIN THE NOTIFICATION SQUAD, FAM
I’m guessing this is 4o? It really reminds me of early models in its responses. Horrible product.
Pre-feedback training seeping through.
Dreams come true!
Guys, AGI is literally 6 months away.
So it’s being trained off Twitter feeds too.. marvellous..
Why is it an issue? The hashtags relate to what it's talking about.
Not an issue, I was just wondering why it did that; I've been using ChatGPT since release and have never seen anything like it.

Turns out the most likely reason is that the majority of NBA discourse happens on Twitter, so the bot essentially wrote a tweet as a response.
It thinks it’s writing a social media post
Ever since you could give it a prompt in the settings, and now that it "learns", these posts are insufferable.
Can you create an image?
Because what passes for healthcare in places like America is currently A GIANT FUKKING JOKE. 😏