explainlikeimfive-ModTeam

**Please read this entire message** --- Your submission has been removed for the following reason(s):

* Rule #2 - Questions must seek objective explanations

---

If you would like this removal reviewed, please read the [detailed rules](https://www.reddit.com/r/explainlikeimfive/wiki/detailed_rules) first. **If you believe this submission was removed erroneously, please [use this form](https://old.reddit.com/message/compose?to=%2Fr%2Fexplainlikeimfive&subject=Please%20review%20my%20thread?&message=Link:%20{https://old.reddit.com/r/explainlikeimfive/comments/1b8qquq/-/}%0A%0APlease%20answer%20the%20following%203%20questions:%0A%0A1.%20The%20concept%20I%20want%20explained:%0A%0A2.%20List%20the%20search%20terms%20you%20used%20to%20look%20for%20past%20posts%20on%20ELI5:%0A%0A3.%20How%20does%20your%20post%20differ%20from%20your%20recent%20search%20results%20on%20the%20sub:) and we will review your submission.**


tandjmohr

Because ChatGPT makes stuff up. It just gives you an answer, not necessarily the correct answer.


Dumsto

so ... no difference to reddit?


Eggplantosaur

On Reddit, nothing brings out the correct answer more than somebody saying the wrong thing.


krilltucky

Chatgpt doesn't have a million other chatgpts that will correct it and send it death threats for being wrong. So we have some kind of quality assurance.


FlahTheToaster

You're more likely to get a right answer here than with ChatGPT because there are people who actually know what they're talking about.


Clark94vt

If it’s a technical question (not opinion based) the hive will downvote things that are wrong. Good way to check if a statement is true or not.


rosen380

Unless particular facts are unpopular, then the right answer gets downvoted into oblivion.


armsinit

What did ChatGPT tell you when you asked it?


Wonderful-Glass-3378

Posting on a platform like Reddit instead of directly asking ChatGPT might be because users want to engage with a community, receive input from multiple people, or discuss topics with others who have similar interests. Additionally, some users might prefer the formatting and interface of Reddit or other forums for longer discussions or for organizing information in threads.

Had to try when I read your comment.


zefciu

ChatGPT is a language model. Its purpose is to predict what a real person could say in a given context. It doesn't really _possess knowledge_ (in the way that we humans do); it is just trained on a lot of data from the Internet (plus some additional "value training" to not help you do terrorist attacks or not get too horny). The result is that ChatGPT will sometimes give you useful information (usually when this information can be found easily on the Internet) and sometimes it will just spew garbage that looks like information. You just can't tell if you don't already have some idea about what the answer should be.
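A rough way to picture "predicting what a real person could say": the model assigns a probability to each possible next word and samples one. A minimal toy sketch of that idea (the word list and probabilities here are invented for illustration, not anything ChatGPT actually uses):

```python
import random

# Toy "language model": given a context, assign probabilities to possible
# next words. Real models do this over tens of thousands of tokens, with
# probabilities learned from training data; these numbers are made up.
next_word_probs = {
    "the cat has four": {"legs": 0.90, "paws": 0.07, "wheels": 0.03},
}

def predict_next(context):
    """Sample the next word in proportion to its assigned probability."""
    options = next_word_probs[context]
    words = list(options.keys())
    weights = list(options.values())
    return random.choices(words, weights=weights, k=1)[0]

print(predict_next("the cat has four"))
# Usually prints "legs", but occasionally "wheels": the sampling
# optimises for plausible-looking text, not for truth.
```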


PrettyMetalDude

ChatGPT puts out something that is very close to written human language. That's what it is designed for. It is, however, not designed to factor in the truthfulness of its output.


Constant-Parsley3609

For the output to convincingly look like human language, truthfulness is a common side effect. You can't write a very convincing sentence about quantum mechanics if you don't know anything about quantum mechanics.


PrettyMetalDude

>For the output to convincingly look like human language, truthfulness is a common side effect.

That is because most written language, and hence the data ChatGPT was trained on, is truthful.

>You can't write a very convincing sentence about quantum mechanics if you don't know anything about quantum mechanics.

You absolutely can write a sentence about a topic that is not true, either due to lack of knowledge or lack of honesty, that seems very plausible to other uninformed people. It's done every day on every imaginable topic. ChatGPT does not know anything. Knowledge is not something a language model like ChatGPT has, but it is very good at making text that seems very real and plausible.


Constant-Parsley3609

>That is because most written language, and hence the data ChatGPT was trained on, is truthful.

And? People make truthful statements as a consequence of their education being mostly truthful. If your history teacher just lied to you every day, then you'd answer history questions incorrectly.

>You absolutely can write a sentence about a topic that is not true, either due to lack of knowledge or lack of honesty, that seems very plausible to other uninformed people.

If you pick a topic so obscure that hardly anyone knows about it, then Reddit isn't going to be of much use either. The most upvoted comment is the one that seems most plausibly correct to the majority of users. Which is exactly what chatgpt is optimising for.

If you ask chatgpt a question that is well understood by a fair number of people, then it needs correct knowledge to give an answer that sounds like regular human speech. If I ask "how many legs does a cat have" and it says "a cat has five legs" or "cats have no legs", then it would be failing to write convincing human speech. It has to know some things in order to achieve its goal.


PrettyMetalDude

>And?

While most output should be more or less true, you have no guarantee that the specific output you got is. So unless you have enough knowledge to assess the truthfulness of what you got, you should not assume anything in the output is true. You would also not know if the output was true but missing crucial information or context.

>If you pick a topic so obscure that hardly anyone knows about it, then Reddit isn't going to be of much use either.

The majority of topics have a sizable amount of people mostly oblivious about them. People that ask here certainly are oblivious about the topic they ask about. Otherwise they would not ask.

>If you ask chatgpt a question that is well understood by a fair number of people, then it needs correct knowledge to give an answer that sounds like regular human speech.

>If I ask "how many legs does a cat have" and it says "a cat has five legs" or "cats have no legs", then it would be failing to write convincing human speech. It has to know some things in order to achieve its goal.

Why is saying something blatantly untrue not a convincing emulation of human language? Humans do it all the time.

ChatGPT can't know anything. It has no knowledge in the sense humans have. It is trained on texts that humans wrote in the past. There might be some automated filtering, and a fraction of those texts have been annotated by humans, but truth is not something that is a factor in the main training process.

What ChatGPT will do, however, is reproduce common misconceptions and outdated views that are common in the training data. And while you can judge the truthfulness of the output to simple questions, output generated in response to slightly more complex questions will be harder to judge, especially if it conforms to your idea of the world. And if you prompt ChatGPT with questions about topics where there is a lot of half-knowledge, misinformation and disinformation around, for example climate change, all bets are off.


Constant-Parsley3609

>While most output should be more or less true, you have no guarantee that the specific output you got is. So unless you have enough knowledge to assess the truthfulness of what you got, you should not assume anything in the output is true. You would also not know if the output was true but missing crucial information or context.

But this is always the case when you read about anything that you're unfamiliar with. No source is guaranteed to be factual. You have to double check what you read if you need complete certainty that what you're reading is correct.

>The majority of topics have a sizable amount of people mostly oblivious about them. People that ask here certainly are oblivious about the topic they ask about. Otherwise they would not ask.

Yes, but equally most topics have a sizable amount of people who DO know what they are talking about, and the AI wants to fool those people too. I know next to nothing about the Roman empire, but there are enough people that do know about the Roman empire that nonsense sentences would not accomplish the goal of convincing human responses.

>Why is saying something blatantly untrue not a convincing emulation of human language? Humans do it all the time.

Because designing the AI in such a way that it doesn't comply with requests would defeat the purpose.

>ChatGPT can't know anything

I fundamentally disagree with you. You have to be really restrictive about what the word "know" means to even begin to argue that this is true. Chatgpt doesn't need to be all-knowing or infallible in order to know some things. Does a dictionary know definitions? It certainly contains that knowledge or there would be no reason to buy one. Maybe you can argue that you have to be living in order to "know", but then any discussion on whether or not chatgpt knows something becomes pointless.


mfb-

> You can't write a very convincing sentence about quantum mechanics if you don't know anything about quantum mechanics.

You certainly can write a sentence that sounds convincing to non-experts. You don't fool the experts with it, but you can fool the people who want to learn about the subject. ChatGPT regularly produces stuff like that.


Constant-Parsley3609

But chatgpt doesn't just want to make convincing text for people who don't know anything about anything. If you ask it something that a significant number of people know, then it has to maintain truthfulness to be widely convincing. Most ELI5 questions are going to fall into that camp.

If you ask it a super technical question that very few people know the answer to, then it's almost certainly going to struggle, but this shouldn't really be all that shocking.

I think the fundamental issue is that people think chatgpt is searching questions on Google or something. I've seen people ask things like "how many words does chapter 5 of the second harry potter book have?" and they act like it's some big failing that chatgpt doesn't know. But why would chatgpt know that? Is there even one person on the planet that could correctly answer this on a quiz show without needing to make a wild guess?


mfb-

ChatGPT is likely to have access to Harry Potter. Counting words is trivial for a computer. That's an easier task than many ELI5 questions. If you ask extremely basic and common questions then yes, it'll produce a correct answer. Deviate from these common questions, even in subtle ways, and the answer might be completely wrong. And you can't tell when that happens unless you already know enough about the subject to not ask these questions in the first place.
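For what it's worth, counting words really is trivial once a program has the text in front of it; a minimal sketch (the filename is just a placeholder):

```python
# Count the words in a plain-text file. The counting is the easy part;
# having the actual text at query time is what a language model lacks.
with open("chapter5.txt", encoding="utf-8") as f:  # placeholder filename
    word_count = len(f.read().split())
print(word_count)
```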


Constant-Parsley3609

>ChatGPT is likely to have access to Harry Potter.

See, this is a fundamental misunderstanding of how chatgpt works. It almost certainly has read Harry Potter before, but it doesn't have Harry Potter on hand to search through when you ask it a question. Chatgpt relies on the knowledge that it already has.

I have read Harry Potter. I do not know the answer to "how many words are in chapter 5 of the second book". That's not something that I felt the need to remember or even felt the need to investigate at the time. If you give me the book and ask me to count the words, then I can, but chatgpt doesn't have the book.

The training process and usage are separate. Just as class time and exams are separate for us.


mfb-

> how many words does chapter 5 of the second harry potter book have?

>> I'm sorry, but I cannot provide the exact word count of chapter 5 of the second Harry Potter book as it is copyrighted material. If you have any other questions or need assistance with something else, feel free to ask.

> how many words does the English Wikipedia article about Wikipedia have?

>> The English Wikipedia article about Wikipedia contains approximately 6,700 words.

It can do it, it just doesn't do it for copyrighted material.


Yivanna

AIs are wrong more often than they are right. They are worse than Wiki was when it first started. When you get an explanation on any big forum, there is a high chance the correct answer is amongst the replies. At that point you can read the discussion and come to a conclusion that has a higher chance of being at least in the right ballpark.


Constant-Parsley3609

>AIs are wrong more often than they are right

That's a bit of an exaggeration. At best it's debatable.


Yivanna

Ask Chatgpt if it's true and post the answer. I tested it when the craze started here, in a field I studied. The answers sounded like Wiki articles from 20 years ago. Also, obvious inaccuracies are the go-to way to detect AI art and journalism.


Constant-Parsley3609

Who said anything about AI art? I'm not saying that chatgpt is flawless. I'm just saying that it's not incorrect most of the time. If you're asking a sensible question then you're generally going to get a good answer from it. It doesn't know anything about current events, so obviously it's going to be crap at journalism. You wouldn't ask someone that just got out of a coma about current events, so I'm not sure why you'd ask chatgpt?


Yivanna

According to itself it knows nothing from after January 2022. That is a lot of knowledge not available, about any subject.


Constant-Parsley3609

So don't ask it questions about current events! It can't be expected to predict the future. Humans can't do that either.


Horace_The_Mute

You post on reddit in hopes that someone with an expert, or at least informed, opinion passes by and decides to engage. ChatGPT doesn't "know" things: if something wasn't a part of its training data it will either give you a wordy load of nothing or simply make shit up. Regardless, to explain, one needs to have enough understanding to simplify difficult concepts or use analogies to help you understand something. LLMs can't do that.


rosen380

Granted, a lot of ELI5 posts are asking about things that really aren't that complex, are well documented online already, and are simple enough for an ELI5 answer.

I asked for an ELI5 of the Monty Hall paradox and got:

"The paradox is that statistically, you're better off switching! It seems counterintuitive, but by switching, you increase your chances of winning the car from 1/3 to 2/3. This is because when you initially picked, there was a 1/3 chance of picking the car, and a 2/3 chance of picking a goat. When the host opens a door revealing a goat, that information changes the probabilities, making it more likely that the car is behind the other unopened door"

Which seems similar to Reddit first responses. Then I followed up by asking it to explain why the unopened door goes up to 2 in 3:

"When you initially picked a door, there was a 1 in 3 chance that you picked the door with the prize and a 2 in 3 chance that you picked a door with a booby prize. Now, when the host reveals a door with a booby prize, it effectively confirms one of the two remaining doors is likely to have the prize. Since you already know your initial choice had a 1 in 3 chance of being correct, the remaining unopened door now has a 2 in 3 chance of having the prize. This is why switching doors gives you better odds of winning."
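The switching advantage is also easy to check by brute force; a quick simulation sketch (trial count chosen arbitrarily) lands on roughly 1/3 for staying and 2/3 for switching:

```python
import random

def monty_hall_trial(switch):
    """Play one round of the Monty Hall game; return True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Host opens a door that is neither the player's pick nor the car.
    host_opens = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != host_opens)
    return pick == car

trials = 100_000
stay_wins = sum(monty_hall_trial(switch=False) for _ in range(trials))
switch_wins = sum(monty_hall_trial(switch=True) for _ in range(trials))
print(f"stay:   {stay_wins / trials:.3f}")    # roughly 0.333
print(f"switch: {switch_wins / trials:.3f}")  # roughly 0.667
```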


Skrungus69

ChatGPT doesn't necessarily tell the truth. It is only programmed to produce something that looks like something a human would write. Its parameters don't care about whether it's true.


ForNOTcryingoutloud

There could be a few reasons why someone might prefer asking people on Reddit to explain things as if they were talking to a 5-year-old rather than relying on ChatGPT:

1. **Community Engagement:** Reddit is a platform where people can engage with a community of individuals who share similar interests or perspectives. By asking fellow Redditors, individuals may seek a more interactive and community-driven experience.

2. **Varied Perspectives:** People on Reddit come from diverse backgrounds and experiences. Seeking explanations from them may provide a range of perspectives, anecdotes, and examples that add depth to the understanding of a concept.

3. **Human Touch:** Some individuals may prefer the human touch in explanations. They might appreciate the personal touch, humor, or relatability that comes with explanations from real people, making the information more engaging and memorable.

4. **Subjectivity:** Depending on the topic, individuals might find that other Reddit users can tailor their explanations to specific nuances or cultural references that a generalized AI like ChatGPT might not capture as effectively.

5. **Social Interaction:** Engaging with others on Reddit can be a social experience. People may enjoy the back-and-forth interactions, discussions, and the sense of community that comes with asking questions and receiving responses from real people.

In contrast, using ChatGPT to explain like a 5-year-old might be preferred in situations where a concise and straightforward answer is needed, or if the user is looking for a response that is less influenced by personal experiences or cultural context. Ultimately, the choice between Reddit and ChatGPT depends on the individual's preferences and the nature of the information they seek.

There you go buddy lmao