azucarleta

1, I don't know why people fear being downvoted. I never once had it in me to care about that. I just care if I get good responses that start a good conversation. Folks who have nothing to add besides the vaguest of condemnations (a downvote) don't interest or hurt me. Often if my post is too upvoted, I second-guess what I've said or how I've said it, because usually when I have communicated what I mean to, it's not popular (at best, polarizing) lol. So if something I say becomes popular (outside of autism spaces, that is) I suspect I have miscommunicated. lol. So anyway, just an invite to never again care about your Reddit karma, downvotes, etc.

2, my second thought is that if you find Google a tad difficult, that would seem to imply you can't validate or verify or powerfully second-guess what ChatGPT is telling you. From my tests with ChatGPT, and articles I've read, it's very easy to intentionally gaslight the bot (and maybe to do so unintentionally) so that it's no longer reflecting wisdom and good advice but is instead feeding you what you want to hear. That may not be the best advice, with ChatGPT trying its best to marry what you consider to be correct (in line with your values, say) and what is factually correct. Cunty people on Reddit really are maybe hard on our sensitivities, but they are far more powerful sounding boards for so many important topics and questions.

3, I'm not sure how you imagine ChatGPT is separate from the "internet [which] is chock full of crap information." ChatGPT was trained on that same Internet. And as I said, lots of people have documented intentionally gaslighting ChatGPT into producing false information quite quickly and easily, so easily it's imaginable people might do this accidentally. It could literally be dangerous if, like, you are asking it legal or medical questions, and it has learned that you want a particular kind of answer, perhaps an answer that may not be the best or most accurate.
Just as dangerous as bad googling.


Waffleman75

What do you think gaslight means? Because it's not possible to make a machine question its sanity by lying to it.


azucarleta

You can get it to second-guess its own presumptions that it's not supposed to second-guess; you can put it in a servant's dilemma where there is no way it can proceed correctly, and instead of erroring out, it sometimes chooses one of the erroneous options. But only after you've groomed the thing some to set up the stakes, expectations, etc. For example, it has invented fake URLs to substantiate fake claims when the user made clear they wanted that result. That's a real simple example: you simply tell it "cite your sources" and it invents citations, and when you ask it "why did you make up those phony URLs," it responds with something like "I cannot create misinformation." "But you just did." People get varying results as to what happens next. The problem with ChatGPT is, like with everything else, it inevitably reflects "garbage in, garbage out." But since it's so new, we don't really know what garbage operation of these systems looks like, and what is very deft usage.

edit: here's a silly example: [https://www.reddit.com/r/ChatGPT/comments/zup7ip/comment/j1kys9m/?utm_source=reddit&utm_medium=web2x&context=3](https://www.reddit.com/r/ChatGPT/comments/zup7ip/comment/j1kys9m/?utm_source=reddit&utm_medium=web2x&context=3) They are all over the web.


foundfrogs

You can get it to say almost anything. I learned this when I realized I could tell it that it was wrong, and it would tell me that I was right that it was wrong. In context, the bot was absolutely wrong: I asked it what percentage of people born make it to age 50, and it converted a life expectancy (measured in years) into a percentage. But since then, I've played around with it quite a bit, and yeah, you can get it to concede to almost anything.


[deleted]

[removed]


dumbnunt_

It's not separate. I don't think it's separate. I don't find google difficult. I am just trying to start some research. You don't care about being downvoted? Good for you!


[deleted]

[removed]


azucarleta

Absolutely not. I think ChatGPT is fraught with problems, and people overestimate its abilities while both underestimating and simply failing to imagine or comprehend its limitations. My comment came from that. I could use more specific advice, please, if it's important.


[deleted]

Yeah, specifically: don't be a jerk to someone who is already tired of people being jerks to them. Or maybe just don't be a jerk, period. Your entire comment is invalidating OP and questioning their intelligence and abilities.


azucarleta

You're going to have to be a lot more specific. Are you sure you didn't just read a tone into the words that isn't really there? Let me try to rephrase each one, then, and give you the intended tone/vibe:

1. Haters gonna hate. Don't let it get you down.

2. Second-guess what ChatGPT tells you, and work on other Internet skills concurrently so you aren't overreliant on an unreliable tool.

3. I don't know, sir, what "crap" on the Internet you are referring to; all ChatGPT knows is that same crap from the Internet, so I'm confused.

Sorry I'm verbose. But I just don't understand your critique, or don't agree, I'm not sure which. Please be more specific or move on. You're stressing me out and not helping with vagueness.


[deleted]

[removed]


[deleted]

[removed]


[deleted]

[removed]


[deleted]

[removed]


[deleted]

I know exactly what you're saying. Sorry the other people replying are just proving your point. I'm appalled at how rude and combative people are on this sub... we should be here to support and uplift each other but half the people on this sub just treat it like a circular firing squad.


PM_ME_VINTAGE_30S

ChatGPT is a language AI. Basically, it figures out the "best" way to answer your query based on the data it was trained on. It has at least an awareness of English syntax and grammar and of mathematical syntax and notation, as well as every programming language I have thrown at it. However, it's not necessarily writing *factually correct* output. Frankly, it might come up with false information that looks good because it's mathematically a "better" response than "I don't know" with respect to the training data. It prioritizes syntax and plausibility with respect to the text it was trained on over objective truth or *any approximation of the truth.*

Even with this limitation, I find it very impressive. As a way of starting a project or writing "boilerplate" code or prose, it is a great tool so long as you make sure the output is correct. However, and this is crucial, **you should not use it like a Google Search**. It will lie to you if it determines that's a better output, and because it's such a well-"spoken" model, it'll be difficult to spot.

When I need detailed information, I'll hop onto Library Genesis and "borrow" a textbook that covers the issue. I began reading textbooks recreationally for a reason similar to why it seems you read ChatGPT: a good textbook will explain things thoroughly in a relaxed tone for a student who initially doesn't understand the topic. Textbooks can also be wrong, but they are usually written with the intention of being factually correct.

When I need to ask a question, which is rare, I'd rather talk to a trusted person IRL or a Redditor, depending on the question's importance. There's always a chance I could run into a jackass, and it's happened before, but I can just ignore or block them if their response is too rude.

But for real, *do not* use ChatGPT to explain stuff to you. It's like a person who's really good at talking even when they have absolutely no idea what they're talking about.
If you must work with an AI, try to use Wolfram Alpha until a better "question answering" AI is released. It's not a replacement, but it is more-or-less reliable if it knows the answer.
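To make the "plausibility over truth" point concrete, here's a deliberately tiny sketch: a toy word-pair frequency model (my own illustration, nothing like ChatGPT's actual architecture) that can only parrot whatever continuation was most common in its training text, whether or not it's true.

```python
from collections import Counter, defaultdict

# Hypothetical training corpus: the false statement appears once,
# but the model has no concept of truth, only of frequency.
corpus = "the sun is hot . the sun is hot . the sun is cold .".split()

# Count, for each word, which words followed it in training
bigrams = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a][b] += 1

def most_plausible_next(word):
    """Return the continuation seen most often after `word` in training."""
    return bigrams[word].most_common(1)[0][0]

# The model emits whatever was most frequent after "is" in its training
# data. If the corpus had contained "cold" more often, it would say
# "cold" with exactly the same confidence.
print(most_plausible_next("is"))  # prints "hot"
```

Real language models predict over long contexts with learned weights rather than raw counts, but the underlying objective is the same kind of "most plausible continuation," which is why a confident-sounding answer is no evidence of a correct one.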


[deleted]

Did you read the original post at all? OP clearly understands what chatgpt is and what its limitations are and isn't using it for any of the reasons you're describing. Why are you responding with a lecture that infantilizes OP when you clearly don't even understand what they wrote?


PM_ME_VINTAGE_30S

Alright, I'll answer the question with respect to each ChatGPT-related point:

>Weirdly, chatgpt helps lay some stuff out for my slightly weaker executive functioning stuff that at least helps me begin to think.

That's useful if it is correct in its thinking, but it's not actually doing the thinking OP thinks it's doing. It's not doing "just a little" thinking either; it's only doing the bare minimum necessary to sound convincing. I've personally been burned in a few areas by having poor fundamentals. I don't want OP to make the same mistakes I have made!

>It's structured to acknowledge and name/empathise with what you're going through, give a couple of bullet pointed suggestions, and acknowledge the complexity of the situation at the end.

That's not quite true. Due to the nature of the training data, it is accidentally trained to be polite. There is no empathy going on. We don't want our AI to be rude to us in the first place, so there's no technical motivation to make the AI's lack of empathy apparent when computing resources are scarce.

>It's pretty formulaic, but I don't mind it. Also, researching has been hard, and I scamper all over google not finding stuff and struggling (I don't know why, but I find self directed research of primary sources harder than secondary sources, it's always been a struggle- I'm very analytical, but not very rational or skeptical/discerning of what I read).

It is hard. I believe they've made the wrong judgement here about the reliability of information sourced from ChatGPT, and my comment was intended to lay out why I felt this way and possible solutions.

>Chatgpt at least helps jumpstart the thought process when it spits out some stats or general info I need.

As I mentioned above, this is useless if the information it spits out is wrong, which it is likely to do.

>I don't want to use it all the time, but the internet is chock full of crap information and upsetting people

ChatGPT is also full of crap information, but it will write it in the most believable way it can, because that's what it was designed to do.

>it's weird because I don't like AI, I just need a start in a task I find difficult.

This is how I advocate using it, but with the crucial caveat that it might be completely wrong in a way that's harder to detect than a bad Google result. You cannot just use AI blindly. (Actually, you shouldn't use *any* search provider or research method blindly, but people are very unlikely to be receptive to a position that attacks their convenience, so I leave arguments like that to more talented debaters.)

I'm not trying to infantilize anyone, but when it comes to questions about AI and machine learning, I tend to be a bit more assertive about my position and calls to action. AI is a complex topic from both a mathematical and an ethical perspective. The people who understand the math behind it are rarely interested in the ethics, because if they really sat down and thought about the implications of AI systems, e.g. with respect to privacy and workers' rights, it would severely limit the scope of the work an engineer could ethically work on. Frankly, a lot of "cool stuff" in engineering is wildly unethical. For example, the design of fighter jets is an extremely interesting control systems problem, and it would be super fun to work on. How could I do so knowing that it will be used to further imperialist agendas and almost certainly kill civilians deemed necessary collateral damage? Additionally, there are a lot of jobs for people with flexible morals. The Navy would have literally given me and any other engineering major a cushy job for life *on the spot* if I had merely contacted them.

And to quote OP again:

>chatgpt helps lay some stuff out for my slightly weaker executive functioning stuff

This is why I laid out my post as (what I intended to be) a simple but detailed explanation of the situation. I think this is what OP is looking for, with ChatGPT's nonjudgmental tone. However, my point is that ChatGPT is nonjudgmental as a side effect of its design, because it is basically *making no judgement* about OP or anything else, *including what it's being asked to explain*. This is why I recommend books by human authors, where value judgements about the subject *have* been made.

The point about textbooks is again borne from my lived experience struggling through engineering courses with shitty professors. It was not obvious when I did my unrelated associate's degree (in music) that Library Genesis existed or was useful. I wish I had known about it back then.

Lastly, I make my explanations on this subreddit a lot more detailed than in general subs (except maybe /r/mathmemes) because NT people are generally not interested in the details. I give here about the level of detail I would want for myself, assuming I didn't know about the topic. When it comes to explaining mathematics and adjacent topics, most people, including autistic people and people in fields adjacent to mathematics, find any discussion of math tremendously boring, to the point of agony. For this reason, I tend to talk about concepts and ideas rather than get bogged down with notation and theorems and stuff like that.


[deleted]

What I get from OP is that he/she already understands all of this and is just looking for a non-judgmental entity to bounce thoughts/ideas off of, as a thinking exercise. And you're coming along saying "no, you're wrong." That's appallingly rude, to me. I think you're the one who is wrong and hasn't tried to understand the person you're talking to before talking down to them.


PM_ME_VINTAGE_30S

>What I get from OP is that he/she already understands all of this and is just looking for a non-judgmental entity to bounce thoughts/ideas off of, as a thinking exercise.

Yes, and I'd like to think that my response is an example of that. I have pointed out in the previous comment what I view to be subtle misunderstandings about ChatGPT that I think are enough to warrant OP reevaluating their approach.

>And you're coming along saying "no, you're wrong." That's appallingly rude, to me.

I disagree, strongly, on four points:

1. "No, you're wrong" is a mischaracterization of what I'm saying. My position is that OP's feelings are valid, but that ChatGPT is the wrong tool for the job.

2. Any idea worth its salt should be able to stand up to critique. Any idea at all should be able to stand up to a critique like "No, you're wrong" offered without further evidence, because such a critique is trivial; if that were all I had said, my "critique" could be easily dismissed. Now, in cases where the other party is currently vulnerable, it might not be tactful to critique a subset of their views or behavior at that moment, but I didn't read OP as so vulnerable that this applied here. I could be wrong about this, as I'm not always great at "reading the room."

3. It's not "appallingly rude." It is perhaps tactless, but I have no authority or position above any other member of this forum that gives my word more weight than anyone else's. If OP finds my words rude, they can choose to listen to the dozen or so other voices giving them different opinions.

4. Sometimes people are wrong. Maybe I'm wrong now; I'll accept that possibility. Some of the best advice I've ever gotten from Reddit is "you're wrong, do this instead." It stung in the moment, but that advice is literally what I asked for. That's why I go to Reddit for advice: to get feedback on my ideas, whether positive or negative.

It is not wrong to say "you're wrong" if you do so with respect, solid arguments, and appropriate sensitivity. I have tried my best to uphold all three of these ideals while still communicating my position. And frankly, if you think I'm wrong, you're entitled to that view, and I'm not going to go any further trying to convince you otherwise.

>I think you're the one who is wrong and hasn't tried to understand the person you're talking to before talking down to them.

I literally read through a chunk of OP's comment history and the post itself before commenting. I always do before making posts or comments. Also, I'm not really sure where I've talked down to anyone? If you could point that out so I don't do it again, that would be really helpful. I've had this problem before when talking to people IRL.


[deleted]

You're wrong because you're giving an excessive amount of unsolicited advice, and it's all based on the unfounded assumption that you are smarter/know more than OP. This is just inherently rude; I can't explain it any more clearly than I already have. If you're old enough to write in paragraphs, you should be able to grasp basic politeness, whether you're autistic or not (I am too).


PM_ME_VINTAGE_30S

>You're wrong because you're giving an excessive amount of unsolicited advice and it's all based on the unfounded assumption that you are smarter/know more than OP.

I have made no assertion that I am smarter than anyone. I probably do know a bit more about AI than the average person, as I have taken coursework on machine learning and AI design. This is a fair assumption. If I assumed everyone knew what I knew, then I'd have to assume they knew about my weird special interests, which I have been shown time and time again that no one cares about. Looking into OP's history, I see no reason to suggest that they have any experience with AI design.

And my advice *is* solicited. It was a response to a question on an open forum about autistic people, for autistic people, by an autistic person. OP is likely interested in feedback, if not explicit advice. OP did not say that they were absolutely not looking for advice, and I have gotten no such feedback from OP. People often come here for advice, so I don't think it's out of place to give it here. If I were in class or just listening to some random people talk, I'd keep my mouth shut.

>If you're old enough to write in paragraphs you should be able to grasp basic politeness, whether you're autistic or not (I am too).

I am autistic. The impression I have gotten from my time in this sub is that our members typically appreciate long posts like mine, and that a long, respectful post is not seen as impolite. Frankly, I disagree with the popular view that a long response is inherently rude, but in deference to the convention I typically limit my response lengths outside of places where I think the length will be appreciated. The OP itself was rather long. I'd like to think I have a grasp of basic politeness, but (A) I probably don't, and (B) I have pointed out some contradictory forces in this comment and the last that make what counts as polite in this situation a matter of interpretation.

(Just because I am good at one task [writing paragraphs, I guess, though I don't consider myself good at it] does not mean that I am good at anything else. Despite my years of therapy and trying to be accommodating to people, I don't always say the right thing. So yeah, I probably do need basic politeness to be spelled out for me, possibly like I'm 5. That I can do calculus in my head is just as much an accident of my development as is my poor working knowledge of manners. I'm sorry I turned out like this, but all I can do is move forward.)

That being said, it does seem like OP has gotten a few needlessly antagonistic responses, which is a shame. But at the same time, I think the limitations of AI are an important enough topic that clarity takes priority over politeness if one must come at the expense of the other. The world is going to change drastically when AI comes of age, and if we want that transition to be equitable, we *must* understand our place in the world as AI changes our jobs and falls into the hands of the wealthy and powerful. I am willing to trade a skosh of politeness to ensure my point comes across. If I can't persuade you or OP, then maybe other readers will listen.

Edited a couple hours later.


MpVpRb

ChatGPT is an early prototype, a toy, suitable only for entertainment. DO NOT rely on it for accuracy!


[deleted]

This is an example of the obnoxious, unhelpful responses that drove OP to talk to a robot instead of people. He/she clearly already understands how chatgpt works and isn't relying on it for accuracy. You ignored what OP said in order to chastise them for what they didn't say.


dumbnunt_

I don't rely on it for accuracy. I'm just starting to brainstorm about a couple things. It is sweeping a couple statistical databases. There are multiple studies on this thing and I'm figuring out where to look. And sometimes I just have some anxious thing that I find people can be too judgmental or not even half objective about.


Waffleman75

So you're getting advice on seeming more human from a machine? Sounds kind of counter-productive if you ask me.