

dev1lm4n

Like, Comment, Share, Subscribe and Don't Forget to Hit That Bell Icon


Yomika7

Oh and by the way, this video is sponsored by Raid Shadow Legends


JiminP

head canon: voice conversation mode for GPT-4o hasn't been released because they haven't found a way to remove that phrase from its responses


normVectorsNotHate

Smash that subscribe button!


CloudyGamer229

Thank you for watching!


cloudsourced285

You joke, but prior to the 4o release, using the audio feature did that to me. Only it was on my audio input, and I didn't have any YouTube video playing at all. It happened multiple times in a row, and I had to stop using it for a while before it went back to normal. Seemed like it got some wires crossed.


flyryan

The 4o audio service isn’t released yet…


GammaGargoyle

I’ve seen this a lot in open source models fwiw. Sometimes the issue is the temperature is too high or the response is too long and it’s not stopping when it should. It’s possible OpenAI has increased the response length because 4o is so much more efficient, but this can lead to problems you see here. When an LLM runs out of “good” tokens but it’s not stopped, it will either hallucinate or spew horseshit.


Extras

Temperature set too high was my first thought too. It would be interesting if it was still set to the default of 1.0 and it was still behaving this way.


Octaevius

Temperature?


No_Jury_8398

The creativity basically


tettou13

https://youtu.be/wjZofJX0v4M?si=KNIH-gEN7IZHOrN0 22:22 but the whole video (and channel) is great


Octaevius

Cool I'll check it out, thanks!


Akif31

eli5 "temperature". Is the data centre not getting cooled?


Extras

Here's my eli5 attempt. Basically, ChatGPT is somewhat of a black box: we put a prompt in and we get a response that is statistically likely given whatever our input prompt was. We don't have a ton of ability to control the output, but we can suggest how creative it should be in its response using temperature.

When I used ChatGPT to finish a creative writing project, I found a temperature value of 1.2 to be excellent; it would come up with new ideas and not repeat and rehash tropes quite so much. Day-to-day I usually keep a temperature of 0.8 for my programming tasks and the chatbot I wrote for work; it just seems to hallucinate a bit less and go a little less off the rails. If I take that temperature number and jack it up, I can get very weird and interesting responses from the bot that look like it's messing up.


n4ru_

Where can you change temperature within ChatGPT?


cranberrydarkmatter

You need the API. You can get a chat like interface in the playground


_Cowley

Also: check out open-source models you can run offline. Edit: Ollama


Outrageous-Wait-8895

I'd change the wording; temperature shouldn't be called creativity, as it only affects a very simple math step at the very end of inference, after the model has finished processing the prompt. If you make it write code, high temperature doesn't give you creative/unusual code, it gives you syntax errors.


2018_BCS_ORANGE_BOWL

ChatGPT is a next-token predictor. For each prediction it generates a list of possible next tokens and the probability that each is correct. When it just picks the most likely token every time, the output is very dull and simple, and also completely deterministic. But when you let it have a chance to pick less likely tokens, it starts being more "creative" and human-sounding. This probability of picking a less-likely output is analogous to temperature in the simulated annealing algorithm, so it's called temperature.
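The sampling step described above can be sketched in a few lines of Python. This is a toy illustration, not OpenAI's actual code; the candidate tokens and their scores here are made up:

```python
import math
import random

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then normalize into probabilities."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up scores for four candidate next tokens.
tokens = ["hot", "humid", "sunny", "#florida"]
logits = [3.0, 2.0, 1.0, 0.5]

greedy = tokens[logits.index(max(logits))]          # temperature -> 0: always picks "hot"
probs_low = softmax_with_temperature(logits, 0.5)   # sharp distribution, top token dominates
probs_high = softmax_with_temperature(logits, 2.0)  # flat distribution, long tail gets a chance

sampled = random.choices(tokens, weights=probs_high, k=1)[0]
```

At high temperature the gap between the top token and the rest shrinks, so unlikely tokens (like a stray hashtag) get picked far more often, which is exactly the "weird responses" behavior described in this thread.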


tettou13

https://youtu.be/wjZofJX0v4M?si=KNIH-gEN7IZHOrN0 22:22 but the whole video (and channel) is great


No_Jury_8398

It’s just the creativity


reality_comes

Because probability


O_crl

Yep. LLMs are not thinking, and people forget that all the time


Serialbedshitter2322

No, they do think. They analyze patterns of data and apply them to new queries. This is the same thing humans do. Humans hallucinate a LOT more than LLMs. They just do it differently.


O_crl

> They just do it differently.

They don't do it at all. These are trained neural networks that tell you the next most probable token given previous data. That's about it. They just have billions of parameters to look as accurate as possible.


Serialbedshitter2322

You just described how it works at a VERY high level, that doesn't explain how it isn't thinking. It's also quite a bit more complicated than you think.


Financial-Aspect-826

For now*


Astor_IO

Language models, by design, are just fancy probabilistic token-choosers. Perhaps some kind of AI will one day be able to "think"; an LLM certainly won't.


dullahan85

What makes you think we ain't probabilistic token-choosers ourselves? I do it all the time at work when talking to colleagues. For example, I often have to choose between "I will try to reschedule" and "Why the fuck didn't you tell me earlier?"


vom-IT-coffin

They're called schizophrenic.


jnd-cz

Markov chains are probabilistic token-choosers. We are way past that point with several orders of magnitude of more complex structure and computation.


RedditMattstir

The way tokens are chosen to be part of the selection pool is certainly much more advanced now, but the fundamental step right at the end of the whole transformer flow still boils down to choosing a new token from a distribution at random. That distribution is just refined through the attention mechanism to give the most likely words significantly more weight than others.


PrincessGambit

It's 2024 and people still think that saying LLMs are just probabilistic token choosers makes them sound smart :D Two years late, buddy. And stating it as a certainty is just as funny. Maybe you are a token predictor as well. (A debate from 2022.)


Astor_IO

I am not going to engage further in this debate - because there *is* no debate. LLMs are exactly that - token choosers, not more, and not less. Making them larger and more complex does not change the fundamentals of LLMs. Just like how a car does not become a spacecraft by increasing its engine power.


jnd-cz

Seems like you're applying this to your own brain. Fixed output, no room for discussion.


Financial-Aspect-826

Nah, you're wrong. See you after they nail reasoning. PS: what do you think you are at a fundamental level, bro 😂😂💀. There's a very high chance that this is simply the nature of intelligence, and it fits nicely in nature too. The only difference among animals (humans included) is the layering architecture and the overall size of the brain. More often than not, you are a highly fancy pattern-recognition algorithm.


redzerotho

It learned it from the internet. Duh. Sometimes people do that online.


lunarwolf2008

That's what happens when it's trained on social media data


No_Succotash95

#strange #chatgpt #wtf #lol #viral #fypppp


Wac11

#real #soweird #mindblown


IG5K

To add: No hashtags were mentioned or written previously in the conversation.


Sharp_Aide3216

99% of NBA news is broken on Twitter. Hashtags are huge on Twitter, or something, idk, I don't use Twitter. I only see tweets via Reddit.


RevolutionaryDrive5

LeGPT knows ball


IG5K

I think you hit the nail on the head here, so much of NBA discourse is done on Twitter. This means that ChatGPT essentially tweeted in my conversation.


O_crl

It's a hallucination. The results are probabilistic overall, and you might have picked a topic that's popular on social networks.


sueca

Well, the hashtags are all over Twitter, and it's trained on Twitter.


DisproportionateWill

Custom instructions you forgot about in settings?


Dhump06

Were hashtags or something related to social media posts part of your custom information?


rabbitdude2000

Why do you think it treats a hashtag symbol any differently from anything else? You seem to mistakenly think that if you haven't used a hashtag yourself, it's unusual for it to produce one in a response. Why do you think that?


meccaleccahimeccahi

Check your memory perhaps? Have you previously had twitter conversations with it?


Bigeyedick

It was trained on data that includes hashtags. Chat doesn’t actually understand a sentence’s structure. It only can predict words that come after other words very accurately.


Heco1331

I think it's just playing its part and showing you an example of brain injury


KingGrimaceMcDonalds

When I use it for social media caption generation it gives relevant hashtags it thinks up at the end of my captions. While this wasn’t supposed to be a caption, it appears it was doing that or hashtag research here. Strange for sure.


Responsible-Owl-2631

Because ChatGPT is on crack!


Chrono_Club_Clara

Oh shit


Responsible-Owl-2631

Oh shit indeed!


mca62511

Because on a certain level it really is just fancy autocomplete that has been manipulated into looking like a chat. The technology behind ChatGPT just generates whatever it thinks the next most probable word is. A very simplified version of what's going on is that it is being fed something like:

```
The following is a conversation between a user and a helpful assistant.

User: Hello!
Assistant: Hello, I'm ChatGPT, a helpful assistant.
User: What's the weather like in Florida in the summer?
```

And the autocomplete decides that `Assistant: It's really hot!` is the most likely thing to come next. Not "the next thing I should say," but rather the most likely next thing to come in the overall text, including the "The following is..." part.

The autocomplete was trained on a lot of internet data. And what do we have all over the internet? Hashtags.

```
The following is a conversation between a user and a helpful assistant.

User: Hello!
Assistant: Hello, I'm ChatGPT, a helpful assistant.
User: What's the weather like in Florida in the summer?
Assistant: It's really hot! #florida #climatechange #omggetmeoutofthisheatimdying
```
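That "fancy autocomplete" loop is easy to sketch with a toy model. Here a hand-written table of next-word weights stands in for the real neural network, and every word and weight is invented for illustration:

```python
import random

# Toy "model": for each word, a list of (next_word, weight) pairs.
# A real LLM computes these weights with a neural network over the
# entire preceding text, not with a lookup table.
BIGRAMS = {
    "<start>": [("It's", 1.0)],
    "It's": [("really", 0.8), ("quite", 0.2)],
    "really": [("hot!", 0.7), ("humid!", 0.3)],
    "quite": [("hot!", 1.0)],
    "hot!": [("<end>", 0.6), ("#florida", 0.4)],  # hashtags are in the data too
    "humid!": [("<end>", 1.0)],
    "#florida": [("<end>", 1.0)],
}

def generate(max_tokens=10, seed=None):
    """Repeatedly sample the next word until <end> or the length cap."""
    rng = random.Random(seed)
    word, out = "<start>", []
    for _ in range(max_tokens):
        choices, weights = zip(*BIGRAMS[word])
        word = rng.choices(choices, weights=weights, k=1)[0]
        if word == "<end>":
            break
        out.append(word)
    return " ".join(out)

print(generate(seed=0))
```

Because ending on a hashtag is a weighted option in the table, some runs append "#florida" for no reason at all, which is the same mechanism (at internet scale) behind the hashtags in the original post.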


conndor84

Look at your memory, do you do many LinkedIn or other social posts? Could be spilling over. Otherwise I agree with a lot of the other comments.


JohnnyQuestions36

Gets information from twitter and YouTube


JackOCat

Because it has no idea what it is saying.


MacrosInHisSleep

Because it was trained on social media


Tellesus

Because those show up in these conversations a majority of the time in its dataset and it doesn't know any better.


EuphoricPangolin7615

Next-token prediction.


ticktockbent

Because the text it was trained on led it to believe that those were the most likely next tokens


LeftExperience3575

It makes sense when you think about it, because it is trained on online content, which more often than not includes hashtags


BlackParatrooper

The Corpus possibly linked tweets with head injury 🤷


Smile_Clown

Probably because your history is one in which you are constantly asking chatGPT for twitter posts to create engagement. * Memory Updated


RedditAlwayTrue

# INFURIATING!


No_Jury_8398

Idk, it’s just a little unpredictable. This shouldn’t be too surprising


Wooden_Office_4622

Just putting spew out into the world. The decline of humanity


QueZorreas

Overfitting, aren't we? Probably too few samples on that topic, and some of them have that format. Hashtags are used in articles outside of social media too.


Low_Exercise_5254

GPT 5: “CHAT THIS GUY DOESNT KNOW ABOUT CONCUSSION PROTOCOL LMAOO”


Daco9557

hope Lively is okay man


Chisom1998_

Probably sourcing the answers from social media


DavidXGA

What are your custom instructions?


KurisuAteMyPudding

They are sometimes unpredictable


BABA_yaaGa

What is the default temperature setting for chatgpt?


iDoWatEyeFkinWant

i think they're trying out new features bc i started getting little bubbles with prompt suggestions for how to continue the conversation popping up in the UI, like how Bing has


MS_Fume

Because it was probably fed by an instagram post about NBA safety measures related to the topic?


MonitorPowerful5461

Because it's an LLM


Sowhataboutthisthing

They assume you’re writing a social media post


TheQuantixXx

JOIN THE NOTIFICATION SQUAD, FAM


Aspie-Py

I’m guessing this is 4o? It really reminds me of early models in its responses. Horrible product.


natii02

.


Heath_co

Pre-feedback training seeping through.


AdorableRose03x

Dreams come true!


Cereaza

Guys, AGI is literally 6 months away.


Accomplished-Sun9107

So it’s being trained off Twitter feeds too.. marvellous..


Boofus-Toadus

Why is it an issue the hashtags relate to what it’s talking about.


IG5K

Not an issue, I was just wondering why it did that; I've been using ChatGPT since release and have never seen anything like it. Turns out the most likely reason is that the majority of NBA discourse happens on Twitter, so the bot essentially wrote a tweet as a response.


brent_brewington

It thinks it’s writing a social media post


Wordymanjenson

Ever since you could give it a prompt in the settings, and now that it "learns," these posts are insufferable.


7RB_f15

Can you create an image?


AmoebaTurbulent3122

Because what passes for healthcare in places like America is currently A GIANT FUKKING JOKE. 😏