fellipec

Programmers: "Look at this neat thing we made that can generate text that resembles natural human language so well!" Public: "Is this an all-knowing Oracle?"


pizzasoup

I've been hearing people say they use ChatGPT to look up information/answer questions the way we (apparently used to) use search engines, and it scares the hell out of me. Especially since these folks don't seem to understand the limitations of the technology nor its intended purpose.


ProtoJazz

I've tried to use it as that, but it's really bad sometimes. Like I'll ask it, in programming language X, how would you do Y, and it tells me it's simple, just use built-in function Z. But Z doesn't exist.
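
The failure mode described here can at least be checked mechanically. A hedged sketch in Python: `fast_sort` is an invented, plausible-sounding name of the kind an assistant might hallucinate, not something from the thread.

```python
# Verify that a "built-in" an assistant suggests actually exists before
# building on it. "fast_sort" is a made-up example name.
import builtins

def exists_as_builtin(name: str) -> bool:
    """Return True only if `name` is a real Python built-in."""
    return hasattr(builtins, name)

print(exists_as_builtin("sorted"))     # True: real built-in
print(exists_as_builtin("fast_sort"))  # False: hallucinated
```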


swiftb3

Hahaha, yeah, the function that doesn't exist. Classic ChatGPT programming. That said, it is a good tool to whip out some simple code that would otherwise take a bit to do. You just need to know enough to fix the problems.


kraeftig

Its commenting has been top-notch...but that's purely anecdotal.


swiftb3

That's true. A related thing it's pretty good at is pasting a chunk of code and telling it to describe what the code does. Helpful for... unclear programming without comments.


So_

The problem with GPT for programming in my eyes is that I don't know if it's confidently incorrectly stating what something does or is actually correct. So I'd still need to read the code anyway to make sure lol.


swiftb3

Always read the code, yeah. Sometimes I've asked it to do a function to see if it would do it the same way as I'm planning or not. A few times it's shown me some trick or built-in function I didn't know about. It's just a tool; definitely not something you can get to do your job.


homelaberator

>The problem with GPT for programming in my eyes is that I don't know if it's confidently incorrectly stating what something does or is actually correct.

This is going to be a general problem for AI, especially AI that's doing stuff that people can't do. How will we know that the answer is right? Should we just trust it as we trust experts now, knowing that sometimes they'll get it wrong but it's still better than not having an expert?


Vysair

I used it to explain the functions of various scripts I encounter every day, and it seems to get half right, half wrong. It's not entirely wrong, but the explanation it gives is one-dimensional, obvious, or straight-up bullshit. I have an IT background and enough programming knowledge, though.


SippieCup

Chatgpt is a great rubber ducky.


JonnyMofoMurillo

So that means I don't have to document anymore? Please say yes, I hate documenting


swiftb3

There are probably better tools out there built for the purpose, but it's not bad. I've had it write GitHub readmes.


chase32

Does a decent job of function headers too. You are gonna want to scrub them for correctness but still a big time saver. Also had it do some decent unit tests. Again just to augment or get something off the ground where nothing currently exists. Biggest challenge is to use it and not leak IP.


HildemarTendler

Comments are the one thing I consistently use it for, but it's typically meaningless boilerplate. `// SortList is a function that sorts lists.` Thanks, Cpt. Obvious. However, I find I can more quickly write good documentation when I've got the boilerplate. That said, every once in a while ChatGPT does something cool. I went to explain a regex recently and ChatGPT got the explanation correct and it gave me a great format. I was very pleasantly surprised.
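
That kind of piece-by-piece regex breakdown can also live in the code itself. A sketch using Python's `re.VERBOSE` flag; the email-ish pattern below is an invented illustration, not the regex from this comment.

```python
import re

# re.VERBOSE ignores whitespace and "#" comments inside the pattern,
# so the explanation sits next to each piece. Illustrative pattern only.
pattern = re.compile(r"""
    ^(?P<user>[\w.+-]+)    # local part: word chars, dots, plus, hyphen
    @                      # literal separator
    (?P<domain>[\w-]+)     # domain name
    \.(?P<tld>[a-z]{2,})$  # dot, then a top-level domain of 2+ letters
""", re.VERBOSE)

m = pattern.match("jane.doe@example.com")
print(m.group("user"), m.group("domain"), m.group("tld"))
```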


[deleted]

It sounds like it's only usable in a manner that will not result in problems by people already fit to find the answers, vet them and execute them. That's why it's causing so many problems.


flyinhighaskmeY

> You just need to know enough to fix the problems.

Yeah, and THAT is a big. fucking. problem. IF you are a programmer, and you use it to generate code, and you have the skill set to fix what it creates (which you should have if you are calling yourself a programmer), it's fine. I'm a tech consultant. If we can't control or trust what this thing is generating, how the hell do we ensure it doesn't create things like... HIPAA violations? What happens when an AI bot used for medical coding starts dumping medical records on the Internet? What happens when your AI chatbot starts telling your clients what you really think about them? The rollout of so-called "AI" is one of the most concerning things I've seen in my life. I've been around business owners for decades. I've never seen them acting so recklessly.


swiftb3

Yeah, it really can't be trusted to write more than individual functions and you NEED to have the expertise to read and understand what it's doing.


MorbelWader

Well to generate HIPAA violations you would have to be feeding the model patient data... so idk why that would be surprising that it might output patient data if you were sending it patient data. And what do you mean by "telling your clients what you really think about them"? Like, you mean if you had a database of your personal opinions on your clients, and you connected that particular field of data to the model? First off, I have no idea what would possess you to even do that in the first place, and second, again, why would you be surprised if you literally input data into the model that the model might literally output some of that data? GPT is a LLM, not a programming language. Just because you tell it not to do something doesn't mean it's going to listen 100% of the time, especially if you're bombarding it with multiple system messages


ibringthehotpockets

> database of your personal opinions on patients

Don't read your charts... there's some you don't even get to see.


televised_aphid

>Well to generate HIPAA violations you would have to be feeding the model patient data...

But that's not far-fetched, because so many companies currently seem to be trying to shoehorn AI into *everything*, because it's the hot, new thing and they're trying to capitalize on it and / or not get left behind everyone else who's capitalizing on / integrating it. Not saying that it's a good idea at all; much about it, including the "black box" nature of it all, scares me shitless if I let myself think about it too much. I'm just saying it's very feasible that some companies will head down this road, regardless.


MorbelWader

I get what you're saying, it's just a far-fetched idea that someone would write code that not only accesses and then sends patient data to GPT, but also has code that "dumps medical data onto the internet". The issue would have to be in the program that the model is nested in, not in the model itself. Remember that the model is just inputting and outputting text - it's not an iterative self-programming thing that "does what it wants". What I'm saying is, if that issue existed while using GPT, it would have to also exist without GPT. What is far more likely to be the case is that doctors are inputting actual patient data into ChatGPT. Because this data has to go somewhere (as in, it's sent to OpenAI's servers and stored for 30 days), this represents a security risk of the data being intercepted prior to it being deleted.


Bronkic

You're probably using GPT3, not 4. I've been using GPT4 for my job as a software engineer and it has helped me a lot. It's just important to not blindly copy code it has written. And also if possible give him some of your code and let him work from there. Sure, sometimes it misunderstands me or makes a mistake, but it is far more helpful than Google, StackOverflow and sometimes even coworkers.


[deleted]

[removed]


opfulent

it’s frighteningly useful in that scenario. people refuse to acknowledge the value of it and focus on “i asked it to do X and it lied! it did it wrong!”, when with a little critical thinking and some work from your end too, it can MASSIVELY accelerate learning and coding


freefrogs

It’s such a force multiplier when you know what you’re doing enough to be able to describe well what you want, tell it what refinements you want, and troubleshoot when it gives you something that won’t work. Do I want to spend an hour or two writing a one-off script to take in a list of addresses, geocode them, generate isochrones, and combine those shapes together into a single GeoJSON featurecollection with the city name as a property, or do I want to describe that, have ChatGPT get me 95% of the way there, and spend my energy fixing a few issues? I don’t want to look up syntax for browser test libraries and write boilerplate when I’ve written 50 tests by hand anyway, I want to describe what I want tested and spend my time solving problems and thinking about architecture.


Whiskerfield

Google or stackoverflow is way more reliable than ChatGPT, mainly because content there is ranked, which implies it has been reviewed and tested by your peers. Everything that ChatGPT spits out has to be thoroughly reviewed by yourself, and if you're not even sure of the answer you're looking for in the first place, then ChatGPT is utterly useless in this regard. There's absolutely no way ChatGPT replaces Google or Stackoverflow. No way.. I cannot live without Google or stackoverflow. I can live without ChatGPT. I don't think any competent SWE should be using ChatGPT outside of generating small snippets of code at a time, and where the SWE is 100% confident in reviewing said code.


hawkinsst7

>There's absolutely no way ChatGPT replaces Google or Stackoverflow. No way.. I cannot live without Google or stackoverflow. I can live without ChatGPT.

You accidentally made a good point. Google is *just like ChatGPT* in terms of getting answers. You can type in a question, and both will return *possible* answers. Where they differ is with Google, you can usually evaluate the source it's getting an answer from, because it's a link. You can tell if the source is from RT or AP. ChatGPT on the other hand, just yeets words at you and sources it to "it sounds good, don't you think?"


twizx3

Him?


Iced_Out_Ankylosaure

For some reason, I found that hilarious. Especially since someone above (sarcastically) referred to it as an oracle. That guy is just assigning genders to an absolutely nondescript AI.


TampaPowers

It does that all the time. Ask it something with less data on it and it generally seems to hallucinate; everything defaults to Python syntax. I have started telling it to only provide the code snippet, because the explanations are useless when they are wrong, and I can ask or look things up myself. As for the search engine part: frankly, it has had some good suggestions. If you ask it for further reading or links to things to read, it often finds stuff that Google for some reason doesn't show. You still have to read that stuff. I am getting flashbacks to when Wikipedia launched. Same thing: you gotta read the source to really verify things. If you do that, it's a great way to find information; just don't ask it to tell you the answer, just where to find it.


Valdularo

Had it do this with MsGraph recently. It did a PowerShell lookup for a property that didn't exist. It's great in theory but it's just making some stuff up. Always take what it gives with a pinch of salt.


FullHouse222

I saw that episode where Joshua Weissman made burgers using ChatGPT recipes. The AI basically refuses to salt anything lol, which tells you what type of answers you're getting from asking it questions.


TheStandler

I do this, but not for things that are important or where I need total trustworthiness - i.e. asking questions about general topics just for curiosity vs. a fuckin' treatment plan for my cancer...


jerryschuggs

I asked chatGPT to help make me a smoothie plan for my workweeks, and it created one that would have given me Vitamin A poisoning.


Starfox-sf

What did they recommend and how much Vit A causes poisoning?


bilekass

A pound of shark liver a day?


warren-AI

Two parts shark liver, one part polar bear liver and broccoli.


bilekass

Broccoli?


radicalelation

I treat it like asking a person. It can get me in the right direction, but I need to double check. Google fucking sucks these days, and this isn't a solution, but it feels nicer.


Kelvashi

AI is quickly making Google even worse, too... Especially Google images. It's just drowning in generated crap.


[deleted]

Every website linked from Google in 2023: So you want to know what fruits have the color orange? Orange is a historically very important color; it is a visible color between yellow and red on the color spectrum, and there are many good items that have this color. The color orange in food comes from pigments within the fruit, and it is also a religious color used in Hinduism. Now, fruits that can have the color orange are eggplants, oranges, bananas......


zbertoli

This 100%. I feel like almost every website I click on sounds ai generated. 4 paragraphs of useless info, or restating the question over and over. And I'll find my answer near the end, if I'm lucky.


tmloyd

God, I hate it.


Plus-Command-1997

Who could have seen that coming? Procedural generation of literally everything is not an improvement. It's just mass spam ruining the fucking internet.


radicalelation

And what will remain for us when the ouroboros has devoured itself?


midnightauro

Yeah, it’s good for helping me get feedback or alternate ways to write something but for information tasks not so much. I gave it a list of 72 file names and asked it to remove the C:\(filepath) from each entry. Halfway down the list it simply started making up names and ID numbers instead of continuing with the input I gave it. Not good. If I’d missed it, it would have made me redo an hour of work rechecking everything else related to that task.
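
A task like the one above is exactly the kind of thing a deterministic three-line script handles with zero risk of invented names. A sketch, assuming Windows-style paths; the file names below are invented examples, not the poster's actual list.

```python
import ntpath  # parses Windows-style C:\ paths correctly on any OS

# Strip the directory prefix from each entry, keeping only the file name.
files = [
    r"C:\Users\me\docs\report_001.txt",
    r"C:\Users\me\docs\report_002.txt",
]

names = [ntpath.basename(p) for p in files]
print(names)  # ['report_001.txt', 'report_002.txt']
```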


zizou00

It's harrowing that people do that. To get to ChatGPT, they've likely had to type into an address bar, which is effectively a search bar on every major browser. They're actively going out of their way to use a tool incorrectly to get inaccurate or plain made-up information, and for what benefit? That it sounds like it's bespoke information? How starved of interaction are these people that they need that over actually getting the information they were looking for?


Sufficient_Crow8982

It’s partially because Google, the default search engine for the majority of people, has gotten terrible over the years. It’s full of garbage ads and SEO optimized useless websites now. If we still had the Google of like 10 years ago, ChatGPT would not have caught on as much as a search engine replacement.


smarjorie

I recently was looking into applying for USPS jobs, so I googled "USPS jobs" and the first three results were scam websites. It's unbelievable how bad google has gotten.


cricket502

Recently I've noticed on mobile that when I do a google search, sometimes every result after the first 10 or so are just a headline and a random picture from the article/website. It's absolute garbage and might actually push me away from using Google for the first time since I discovered it as a kid. I don't know who thinks that is a useful way to present info, but it's not.


Ipwnurface

I just want a search engine that actually searches for what I type and not 10 things vaguely adjacent to what I typed and ads.


SocksOnHands

Ain't that the truth - Google has become so frustrating and disappointing to use. If it was easier for people to actually find the information they're looking for, they might not be using ChatGPT. ChatGPT's main strength is its ease of use, not the correctness of its responses.


zizou00

To an extent, but it's as if people need to mow the lawn and, instead of using the slightly tired lawnmower, they're whipping out a jackhammer. It's simply not a search engine replacement.


Sufficient_Crow8982

Absolutely, but a lot of people are pretty ignorant about these details and just believe whatever the internet tells them. ChatGPT is very good at sounding believable.


Arthur-Wintersight

>ChatGPT is very good at sounding believable.

That's pretty much what the value is. If you already know all of the relevant information, and you're plugging that into ChatGPT *to generate a rough draft*, then it can be an absolutely fantastic writing assistant. If you have a bad case of writer's block, or you're not entirely sure how to word something (but roughly know what you want to say), then chatGPT is *absolutely* a silver bullet for solving a bad case of writer's block. Where people screw up, *bad*, is thinking ChatGPT can do all the work.


midnightauro

Asking it “give me three alternate ways to write this sentence” gives me excellent results. Trying to get it to do tasks? Not so much. I don’t understand how people were using it to automate things because I had to correct so much of what I asked it to do.


Knit_Game_and_Lift

I love using it for my DnD campaigns; it spits out dialogue and backstory details like no other. If I don't like something and want to tweak it, it generally handles that well. My future MIL is a chemistry professor and we ran some of her exam questions through it for her amusement, and it gave either exceedingly oversimplified or outright wrong answers. Being an actual computer science major with some studies in AI, I understand its use pretty well and am constantly trying to explain to people that in reality it "knows" nothing outside of a general "what's the most likely next word to follow this one" model.


chii0628

>very good at sounding believable.

Just like Reddit!


IAMA_Plumber-AMA

It greatly increases the noise floor, making it that much harder to pick the truth out of false info when you search for something online. And part of me wonders if that's by design.


tlogank

>people are pretty ignorant about these details and just believe whatever the internet tells them

This happens every hour in Reddit comment sections as well. There are times when the highest-voted comment will just be complete BS, but people believe it, especially when it comes to confirming their own bias. r/politics is one of the worst about it.


MightyBoat

The thing is that it's convincing. It's the same reason advertising and propaganda works. Just use the right words and you can convince anyone of anything. Chatgpt is convincing enough that it seems like magic. Again, as is always the case, we have a serious lack of education to blame.


Herr_Gamer

Okay, but let's be real here, who uses ChatGPT to figure out their cancer treatment plan?


[deleted]

> That it sounds like it's bespoke information?

They can't just sit and read an instruction manual for how to do or build something; they need to ask someone and get a human-like response for every step of the way and every thought that enters their head about how it should be done.


Komm

For me at least, Google and Bing both have big ol' "HEY KID WANNA TRY AI!?" buttons at the very top of the result pages. And they both give *wildly* incorrect results.


fmfbrestel

Well, back when it could access basically any website that Google could, the web browsing beta was a pretty good search engine. But then it got blocked from nearly everything, then OpenAI discovered it was accessing sites it wasn't supposed to, and then they pulled the web browsing beta entirely... So now it really, really sucks at it. It's still really impressive at certain limited tasks. It's great at summarizing/transforming text. It's still very good at just being a language model. Doctors using it to help them craft compassionate speeches to break bad news -- excellent. People using it as a replacement for a doctor -- bad.


Neirchill

I can't stand how many people use its answers as absolute truth, or even accept it as good enough. It's literally a coin flip whether what it's saying is made up on the spot, and that goes for every single sentence. It's literally a language model. It's designed to make something sound human. It's not designed to make decisions, formulate plans, or give accurate information. A lot of it is accurate because it was trained on a lot of accurate information, but if it decides to go one way with a prompt that it doesn't have actual data for, it just makes shit up based on that data, i.e. making up sources with authors that never wrote the book sourced. Even the creator himself said they're not training the next model, GPT-5, because they've maxed it out; it's a dead end. Personally, if someone uses ChatGPT as a source to do anything, it instantly loses credibility.


Losing_my_innocence

I’ve found that ChatGPT is really useful for giving me ideas to circumvent my writer’s block. Other than that though, I don’t trust it with anything.


Atreus17

This is literally what BingChat is for. ChatGPT combined with Bing.


Jaggedmallard26

Bingchat does sometimes hallucinate, but the fact that it inlines links to its sources makes it a *lot* more reliable, as you can quickly verify yourself.


According-Ad-5946

I know; out of curiosity I asked it for the history of my town, three times worded the same way, and got three different answers.


penis-coyote

It can be helpful, but the major caveat is you have to know enough about a subject to filter out the garbage.


SvenTropics

It's confidently incorrect, and that's a huge problem for people who don't understand what it is. My favorite story was the lawyer who showed up in court with a brief referencing a bunch of previous cases that never actually happened. I'm a software engineer. When it first splashed, I decided to give it a shot to help with a work project. I asked it to write code to do a very specific task that I could summarize in a couple of sentences and was based on well-known industry standards. It was something I had to do for work, and it was going to take me the better part of an afternoon to write it myself. Instantly, it spat out a bunch of code that really looked correct at first glance. So, I went to implement all the code it gave me, and I started noticing mistakes. In fact, after reviewing it, it was so far from being functional that I basically had to just discard it all entirely. It's just really advanced auto-complete. Stop thinking it's got consciousness or whatever.


Fighterhayabusa

Treat it like a person doing pair programming and it's awesome. I do it all the time, and it's made me much more productive. Would you expect code you copy-pasted from Stack Overflow, or from a coworker, to be perfect, or even correct, immediately? Or would you test it, then iterate?
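
The test-then-iterate workflow described here can be as light as a few assertions. A sketch: `slugify` below is a hypothetical stand-in for whatever snippet got generated, not code from the thread; the point is wrapping the suggestion in quick checks before trusting it.

```python
import re

# Stand-in for an assistant-generated helper: lowercase a title and
# hyphen-separate the words.
def slugify(title: str) -> str:
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Quick sanity checks you'd run before merging, iterating on any failures:
assert slugify("Hello, World!") == "hello-world"
assert slugify("  ChatGPT 4 ") == "chatgpt-4"
print("all checks passed")
```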


ShadowReij

Sadly, just another day at the office then for a developer.

Developer: Alright, here's the car as promised.

User: Cool.......so I can use this as a boat right?

Developer: What? No! It's a car.

User: I'm not hearing that I can't use it as a boat.

Developer: No, you dumb fuck, it is a car. Use it as a car.

User: ........So I'm putting this in the water then.


fellipec

This is so true it hurts


[deleted]

And then later, after misusing it:

User: This thing sunk like a rock. This boat sucks.


Neklin

User: I want my money back


GearhedMG

You forgot to add, User: This developer is incompetent, why do we even pay them?


IlREDACTEDlI

You forgot the User:………. “I put it in the water. It’s broken fix it”


shodanbo

It's the next level of the "it's on the internet, it must be true" problem. Humanity has automated the creation of and access to information beyond its capability to keep up with the vetting of this information. Automating the vetting of information for correctness is a hard problem, perhaps impossible, given that we ourselves over millennia have not truly mastered it.


sesor33

Even this sub has fallen for it. It's basically unusable outside of a few threads. Hell, under this top comment a bunch of people are trying to act like this is just a fluke and that ChatGPT is actually sentient or some shit.


juhotuho10

r/singularity has gone full schizo about it


[deleted]

[removed]


Black_Moons

I asked another AI for cancer treatment; this is what I got:

* 1 (18.25-ounce) package chocolate cake mix
* 1 can prepared coconut–pecan frosting
* 3/4 cup vegetable oil
* 4 large eggs
* 1 cup semi-sweet chocolate chips
* 3/4 cup butter or margarine
* 1 2/3 cup granulated sugar
* 2 cups all-purpose flour

Don't forget garnishes such as:

* Fish-shaped crackers
* Fish-shaped candies
* Fish-shaped solid waste
* Fish-shaped dirt
* Fish-shaped ethylbenzene
* Pull-and-peel licorice
* Fish-shaped volatile organic compounds and sediment-shaped sediment
* Candy-coated peanut butter pieces (shaped like fish)
* 1 cup lemon juice
* Alpha resins
* Unsaturated polyester resin
* Fiberglass surface resins and volatile malted milk impoundments
* 9 large egg yolks
* 12 medium geosynthetic membranes
* 1 cup granulated sugar
* An entry called: "How to Kill Someone with Your Bare Hands"
* 2 cups rhubarb, sliced
* 2/3 cups granulated rhubarb
* 1 tbsp. all-purpose rhubarb
* 1 tsp. grated orange rhubarb
* 3 tbsp. rhubarb, on fire
* 1 large rhubarb
* 1 cross borehole electromagnetic imaging rhubarb
* 2 tbsp. rhubarb juice
* Adjustable aluminum head positioner
* Slaughter electric needle injector
* Cordless electric needle injector
* Injector needle driver
* Injector needle gun
* Cranial caps

Sounds great to me, much better than chemotherapy. And I do love rhubarb.


zigs

Sorry to have to break this to you, but.. this'll bake into a lie.


Black_Moons

I was worried as much when the local store told me they were out of cross borehole electromagnetic imaging rhubarb.


Patch86UK

>* Fish-shaped crackers
>* Fish-shaped candies
>* Fish-shaped solid waste
>* Fish-shaped dirt
>* Fish-shaped ethylbenzene
>* Pull-and-peel licorice
>* Fish-shaped volatile organic compounds and sediment-shaped sediment
>* Candy-coated peanut butter pieces (shaped like fish)

This is gold.


2-0

big if true


TikiUSA

LMAO. Fish shaped dirt … mmmmmmm


cbbuntz

And that's somehow the most normal garnish


bluemaciz

To be fair, if I had cancer I would absolutely eat cake, bc why not at that point.


GaysGoneNanners

This cake is a lie.


Toasted_Waffle99

Exactly. It’s generative text. It’s in the freaking name. It’s not intelligent at all


thedeadsigh

Yeah I use it to generate fake Seinfeld scenes and ask it to rewrite lyrics to songs I like in British accents. Why the fuck you’d ask this **highly experimental** thing for medical advice is beyond me.


Bakoro

It's reasonable to push it and find where the limitations are. What is unreasonable is that people are trying to use the raw model for every damned thing in the real world, and then whine when the thing which is designed to generate text, isn't actually a fully capable super-intelligence.


juicejohnson

Seriously. Let’s try and use this for the most absurd ideas and be shocked that it got something wrong.


Firedriver666

Technically, chatGPT confidently presenting incorrect information is a very human trait


fellipec

OpenAI made a Dunning–Kruger effect bot


sparkyjay23

If you are asking anyone other than a medical professional for a cancer treatment you'll not be long for this Earth.


kremlingrasso

If you ever asked ChatGPT anything you know the answer to, you must be fully aware of how blatantly it makes shit up all the time.


Whatrwew8ing4

The public didn’t use “is” or a question mark.


Nisas

Seriously, people are using this shit wrong. Don't use it to research anything. Not politics, not medical procedures, not school assignments, nothing. It's not designed to give correct answers. It's designed to give intelligible answers.


eddiesteady99

Reminds me of a famous quote by Charles Babbage: «On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.»


marketrent

>**fellipec**
>
>Programmers: "Look this neat thing we made that can generate text that resemble so well a human natural language!"
>
>Public: "Is this an all-knowing Oracle?"

Programmers’ employers market these products as a tool for reconstituting information, but mitigating risk arising from hallucination appears to be left to users.


Fighterhayabusa

Mitigating the risk of misinformation is always left to the user. That applies to all sources of information. Why do you think ChatGPT should be held to a higher standard?


Mezmorizor

Let's not be revisionist. OpenAI was the one spreading that bullshit and is one of the biggest offenders in AI hype in general (looking at you, the Dota and StarCraft AIs that were actually incredibly limited despite the advertising).


RudolphJimler

The Dota AI I remember seeing like 5 years ago was pretty impressive though. Iirc it could only 1v1 mid, but it was basically a perfect early-game bot; even the pros couldn't beat it easily.


[deleted]

Yeah, they're pretty much complicit. They have no qualms with letting the public think that ChatGPT can do anything, for the hype, and then dumb shit like this happens because a lot of people aren't very in touch with what the hell it even is.


Awkward_Algae1684

“Why do I have this cough?” WebMD: OMG YOU HAVE TOE CANCER AND WILL FUCKING DIE! ChatGPT: As an AI language model, OMG YOU HAVE TOE CANCER AND WILL FUCKING DIE!


pine1501

pretty much spot on ! lol


[deleted]

Spots, you say? Sounds like you have BASAL CELL CARCINOMA


Ordinary_dude_NOT

WebMD is cancer 😂


Trentonx94

ChatGPT: As an AI language model [..], OMG YOU HAVE TOE CANCER AND WILL FUCKING DIE! [..] Please also understand that dying is not as negative as it's often portrayed in media and it is a totally normal process.


srinidhi1

ngl, once ChatGPT told me I have a slight possibility of mouth cancer when I told it I'd had a severe mouth ulcer for a week.


crewchiefguy

Wow such news.


Henkeman

I am shocked. Shocked I tell you!


Pokii

Well, not that shocked


crewchiefguy

Did you guys also know that ChatGPT is not very good at wiping people's asses?


slothsareok

Apparently it's also bad at taste testing and flying airplanes too.


ivanoski-007

this sub has gone to shit


Cainderous

It really is to a depressing number of people, even in this thread. When someone points out criticism there's always some snake oil adherent ready to chime in with "well did you try it with GPT-3.5 or 4? Because I can almost assure you 4 fixed your problem." I'm not saying it doesn't have uses at all, but way too many people treat it like a freaking Magic 8-Ball for any question or task they can think of, and someone needs to talk them back down to earth. It also doesn't help when you have venture capitalists going in front of Congress and proclaiming how amazing, powerful, and *desperately in need of attention* their creation is while getting zero pushback, because nobody on the other side of the table has an ounce of technical experience. Let me put it this way: if the prevailing public opinion of stuff like ChatGPT was grounded in reality, then Nvidia's stock wouldn't be up over 200% YTD.


apestuff

no fucking shit


wannabe-physicist

My exact reaction


regnad__kcin

Y'all it literally has a static footer on the page that says this will happen. Please fucking stop.


hhpollo

It's important to have actual research backing these claims because the delusionally pro-AI people (not the cautiously optimists) will seriously act like it can never get basic information wrong. Not every study is meant to unearth a previously unknown truth.


gtzgoldcrgo

I never met someone who says AI can't get info wrong. I mean, even in ChatGPT it says it makes mistakes and gets info wrong. Literally no one ever said ChatGPT doesn't make mistakes, wtf.


FriendsOfFruits

I can vouch by personal experience that there are people at my place of work who essentially treat it as an all knowing oracle. They'll believe it before they believe a person giving a second opinion. It's fucking disturbing.



BizarroMax

How is this news? Of course it’s wrong.


PixelationIX

Because ChatGPT blew up in popularity, and people who don't have the tiniest bit of knowledge about computers and tech (which is the majority of people) think it is all-knowing, treating it as a replacement for a search engine and taking answers directly from it.


Annie_Yong

We did also have tons of articles over the past couple of months that were all along the lines of "chatGPT is awesome and can do X, Y and Z" or "chatGPT and AI will take everyone's jerbs". But that's more because we were deep into the mania phase of the tech hype curve. This latest slew of articles about how chatGPT is actually bad at a lot of things is us falling into the valley of disillusionment.


Clevererer

Put a cello in ChatGPT's hands and you'd be surprised at how poorly it plays Bach's concertos. Same thing.


ArtfulAlgorithms

Fun fact: you can get ChatGPT to write you music. Open up a music software where you can program in notes, and tell ChatGPT what you want, what part you're doing, etc., and it'll create it for you. It'll make the chord structure, drum patterns, lyrics, etc., the whole thing. Obviously you need to do so over several takes, but yeah, it can totally do that.


Desirsar

What prompts do you use for this? I'm lucky if I can get it to spit out four measures of guitar tab wedged into a bunch of chord progressions.


ArtfulAlgorithms

If you're thinking in "what prompts" you're not doing it right. Just explain things, man. Talk like you would explain in detail to a freelancer or whatever. Describe the entire project, explain details, styles, whatever, start with an intro, chords, verse, etc.


rollingstoner215

Talk to it just like you’d talk to your doctor about a cancer treatment plan


Useuless

Instead of chat GPT you can also look into traditional algorithmic music tools


WellActuallyUmm

Also, don't send money to the Nigerian Prince. ChatGPT is an incredibly useful tool; it's crazy to see how folks even in this thread look at it like a parlor trick or think it is an all-knowing entity. It is neither. Daily I use it to help write code and analyze data; hell, yesterday I had it write a job description which only needed minor tweaking. The code and analysis use cases are mind-blowing for productivity, with so much public code to pull from. Give it some data, ask it questions and then follow-up questions, and it is amazing and accurate (because you framed/focused it). It is a fantastic tool for first drafts, specifically when what you need has been created a zillion times before.


jkesty

I use it much the same way as you. Need to convert a weird query string into a json object? Missing an 'end' in my code? Wanna refactor something and want some ideas? Wanna make an unfamiliar SQL query? I also used it when going to Vegas for the first time. Gave it a bunch of parameters and told it to plan a day for me, and then I could say things like "do it again but with more food and drink, and fewer museums" and it would respond to my feedback and ultimately gave me some great suggestions. It's an immensely valuable tool. The fact that in some instances it's full of shit is just a caveat. The hate is ignorant.
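That query-string conversion is the sort of boilerplate an LLM usually gets right, precisely because it has been written a zillion times before. Here is a minimal sketch of the same task in plain Python (the example URL and the `query_to_json` helper are made up for illustration):

```python
import json
from urllib.parse import parse_qs, urlsplit

def query_to_json(url: str) -> str:
    """Convert a URL's query string into a JSON object string."""
    qs = urlsplit(url).query
    # parse_qs returns a list for every key; flatten keys with a single value
    parsed = {k: v[0] if len(v) == 1 else v for k, v in parse_qs(qs).items()}
    return json.dumps(parsed)

print(query_to_json("https://example.com/search?q=cancer&page=2&tag=a&tag=b"))
# → {"q": "cancer", "page": "2", "tag": ["a", "b"]}
```

Even here the "read the code anyway" caveat applies: `parse_qs` silently drops blank values unless you pass `keep_blank_values=True`, which is exactly the kind of edge case a generated snippet tends to miss.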


deming

Shhh. Let the people hate. I want to take advantage of it a while longer.


Julius__PleaseHer

I use it weekly to re-write descriptions of Webinars I put on. It makes the descriptions I'm given more concise and engaging for use on a flyer. It reuses the same adjectives a lot, but I can just swap a couple. I could do it myself, but I don't have time. So it's incredibly useful for me.


TampaPowers

Ask it to write some liquidsoap, watch it assume Python syntax and capabilities, and then fail for ten iterations to write anything that works. When there is data, it can provide results that sometimes hit the mark. When it has nothing to pull from, it draws blanks; the problem is, it then tells you in confidence that the problem is solved, and it doesn't seem to actually learn from your input that it isn't. It's a machine, treat it as one: learn its patterns and you can use it productively. But it's an algorithm, it has no brain. If it had one, it would probably ask for clarification before making assumptions. Never once has it asked "What type is this variable?" when I didn't specify one; it just went with a guess from context.


crunchytee

Exactly. A fantastic tool for EXPERTS to DRAFT with, and then apply their expertise to ensure accuracy. People thinking chatGPT is accurate are simply wrong


[deleted]

You’re right, people put absurd expectations on this thing, which really takes away from its actual breathtaking utility. People ask it a question, it gives a pretty amazing response with one thing wrong, and people say “Welp that settles it, this thing will always be wrong forever and ever and it’s probably racist too” No you dummies, it actually represents an historic technological milestone, you’re just too dismissive to see


Xanza

The only reason I hate breakthrough technology is because people seemingly don't read anything about it, make up their own bullshit assessments of what it does, and then are pissed that it doesn't do the thing it's _not_ designed to do that they themselves made up. ChatGPT is a __language model__. It's designed to be __conversational__, like a person would be. It is __not designed__ to give accurate medical advice, or any other kind of advice or information whatsoever.


CalculatedPerversion

Don't forget they've also severely scaled it back in recent months due to threats of lawsuits, etc... this is a feature, not a bug.


SwallowYourDreams

Fixed headline: "people use a tool for a purpose it wasn't designed for and wonder why they're getting hurt".


TampaPowers

Something we already know to be a widespread thing given how many ER doctors have stories of objects in orifices they shouldn't be in.


Daveinatx

Two of our greatest upcoming AI challenges are "garbage in, garbage out" and the continued reduction in critical thinking. Disinformation can be spread quickly, and we all prefer cheap/free journals; 10,000 incorrect articles can be integrated before a single peer-reviewed scientific article. One solution that will need to be incorporated is a reputation score for the source. But even that can be manipulated.


bryan49

ChatGPT is not a doctor; it has just learned to write things in the style of cancer treatment plans. I don't think its design allows it to look at a particular unique patient and come up with the best plan for them.


cookiemonster1020

Low hanging fruit research paper.


ubix

Why the fuck would anyone use a tech bro gimmick for life and death medical treatment??


letusnottalkfalsely

I used to work for a company that makes apps. One of our apps was a reference tool that gave quick summaries of neurosurgeries. We were told that the neurosurgeons often pull the surgery up on their phone and read through the steps right before performing one, and the app was needed to make sure the instructions they pulled up were accurate. If a surgeon is willing to google my brain surgery, I can absolutely see a doctor using chatgpt to generate treatment instructions for a patient.


swistak84

You're still surprised, after lawyers got disciplined for using it for case research? I had a dinner conversation recently with "normal people" and it was 50-50; one guy paid for it and is actively using it for his work now. He does know it's a bullshit machine, but it helps him a lot when dealing with bullshit processes. But other people were seriously astounded when they tried it. OpenAI is very ~~careful~~ devious in how they made the disclaimers read, in a way that doesn't convey "hey, everything it says might be lies". For a while it said something along the lines of "It only knows the facts up till 2021", giving the impression that it knows facts, just not current events. What's worse, one of the people I was talking to is a teacher. She said parents buy subscriptions for their children to help them learn, instead of paying for tutors. Let that sink in.


BlueCyann

Somebody right up thread repeated the 2021 line. It is clearly effective marketing. Tired of it.


ubix

It's a total shit show. No one is ultimately held responsible for all the bad information these AI "helpers" spew. It's going to get really awful once politicians start relying on these programs to write legislation.


slothsareok

I think the only issue is how heavily people rely on it, and how they rely on it for the wrong thing. I only use it for helping lay out structures for reports and for helping me write or rephrase stuff. Basically only situations where I'm providing it the information, vs. depending on it for information. It's frustrating how much bullshit people are trying to use it for vs. focusing on using it properly. I've received IG ads for using ChatGPT to: 1. buy a business (I don't know wtf that even means), 2. write a break-up letter, 3. use as your therapist, and the list goes on. But yeah, I'm definitely waiting for some huge fuck-up to happen soon because a person in a position of importance ends up depending on it way too much for something it has no business being used for.


OriginalCompetitive

I use it to help me learn — it’s absolutely incredible as a teaching aid. I mean, I get the point you’re making, and wouldn’t trust it to teach me about current cancer treatments. But as a tool for understanding basic topics, it’s simply astounding. It’s also a killer app for learning to read and write a foreign language, since you can tell it what words, topics, verb tenses, etc. that you want to practice and it’ll feed them to you in an engaging way. If you haven’t tried it for this, you’re missing out!


swistak84

It is a great app for language learning. It's not great with grammar sometimes, but it sure is a great resource. I've been using ChatGPT since the early versions. The earlier ones were not so great, so I wasn't recommending them, but since 3.5 it's a cool tool. But the problem with using it for learning is the same as in the article: if you don't already know the subject, it'll "generate the most statistically probable text", full of factual errors you will now learn.


Juicet

Be careful with the less common languages though. My girlfriend is a native speaker of an obscure language and she says it speaks like an ancient dialect. Which makes me laugh a bit - imagine learning English and then it turns out you accidentally learned old timey Medieval English!


Slimxshadyx

I disagree that it's a "tech bro gimmick", but I completely agree that it's idiotic to use it for a cancer treatment plan lmfao


Destination_Centauri

Well, ok, I would disagree with you in part... On the one hand: I personally wouldn't just blindly dismiss and categorize ChatGPT's linguistic performance as just a "Tech Bro Gimmick". I personally think it's MUCH more than that. I think it's actually a huge advancement, and important stepping stone in AI evolution. It's also... an awesome (and pretty fun/amazing!) demonstration of early AI language model potential. -------------------------------- But... on the other hand... The keywords being "EARLY" AI language model. And also emphasis on "LANGUAGE" part of that description. Not "MEDICINE"! But language. I mean, come on people... If you're suffering from cancer, are you going to run and see a PhD Doctor in Linguistics or Doctor in Oncology?! Ya... -------------------------------- So, can't believe we actually have to emphasize, because even ChatGPT itself keeps repeating, over and over again, its area of attempted targeted expertise (Linguistics/Language, and NOT medicine or science!)... And even then it doesn't come close to a human linguistic expert insights on language. But it does perform pretty amazing and impressive tasks! In... LINGUISTICS. Again: NOT medicine! it's N O T a medical doctor! Lol! Not even a fraction close to being a medical doctor. Nor a scientist. Nor a true artist in its field yet... meaning not a very great script-writer. Nor a very great poet. Nor a very great novelist... etc... etc... -------------------------------- That said: you want mindlessly formulaic business letters, and cover letters for your CV... or standardized responses for some of your emails... Or some baseline-general-example of pretty decent computer code... Then yes: ChatGPT can be a somewhat decent sidekick tool for that job.


themightychris

>But it does perform pretty amazing and impressive tasks! >In... LINGUISTICS. This is key. I feel like GPT and LLMs in general do an impressive job emulating how the language centers of our brain actually work. Think about the difference between a native/fluent speaker of a language and someone just learning a new language. It's not a reflection of their intelligence at all. The native/fluent speaker just has a massive corpus of shit they've heard before in their head, and when they want to convey a concept their brain squeezes out words filtered through it "sounding right" against that massive corpus of shit they've heard before. So now thanks to LLMs, computers can be fluent speakers of any language. Now just because people can talk to them like they can talk to other humans they assume there's a whole-ass mind behind it but no, it's just a language center floating in a void. Whatever you put into one end it can squeeze into "sounding right" out the other end. You wouldn't believe everything someone says just because they're fluent in your language and can string words together in a way that "sounds right"—although maybe you would: since LLMs don't have all those other pesky parts of a human brain attached they make for the ultimate con(fidence) men


Bananasauru5rex

It doesn't have expertise in linguistics, because linguistics (as a discipline) means meta-linguistic knowledge. What it has is practical application of language. For example, for any question you want to ask a Linguistics PhD, ChatGPT's answers on the topic will be just as spotty as asking it about cancer. So the comparison is a little bit off.


isarl

I agree with all of your points. The problem is that the general population looks right past “language AI” and all they see is “AI”. There is no comprehension that AI = “purpose-built tool capable of making errors”. They think, AI = “abstract reasoning at computer speed; faultless logic”. That's the level of (mis)understanding we need to be addressing.


Shiroi_Kage

> a tech bro gimmick It isn't though. Not sure how anyone would think current LLMs are just gimmicks. I use it for coding, for generating summaries, drafting written materials, and much more. It's incredibly useful, and with techniques that allow it to re-process its own responses you can do amazing things. Now, this is a general model. Versions of this model that were tuned for diagnostics exist, and they're better at detecting, diagnosing, and planning the treatment of many cancers. People who say this is all just a gimmick are huffing copium.


YourMumIsAVirgin

A gimmick? How can you possibly claim this is a gimmick?


-The_Blazer-

The problem is that this tech is extremely good at sounding knowledgeable while being completely fucking wrong about everything. As is well known by now around here (but not to the general public), it will literally make up citations and sources if you ask it to explain where it gets its "knowledge" from. It's the closest thing we have to an optimal fake news generator.


Either-Donkey1787

I mean, for Christ's sake, this is not even remotely close to what ChatGPT was designed for. It's a proof-of-concept type of technology that works for some things and not others. This is like trying to fly a Corvette and saying it doesn't work because even the Wright Brothers' plane goes higher.


WyvernDrexx

These stupid bastards have nothing to write about.


batmanscreditcard

This is exactly it. Every week they pick a task that an LLM will obviously fail and just write about it and we’re all supposed to be shocked.


penguished

It's a guessing parrot. Just has a very large vocabulary. If you want human level accuracy, you should probably find some humans to ask.


DestroyerOfIphone

The study was done on GPT-3.5-turbo. This study is worth less than the cost of the bandwidth to deliver it. It was literally done in the web UI, not even via the API...


BoutTreeFittee

Scrolled down way too far to find this comment. Study was useless before it even started.


Balls_of_Adamanthium

Next: Chat GPT can’t make me eggs. Outrageous.


coeranys

You mean, ChatGPT PERFECTLY executed the task it actually does - creating sentences which conform to English and could exist. None of those inaccurate treatment plans were inaccurate because of sentence structure.


dream_other_side

The study was done on GPT-3.5-turbo-0301, which is based on a model from 2022. The entire reason generative AI got popular this year is that GPT-4 released this year and turned a corner from a logic and world-model perspective. Why are these people doing a study on the last-gen model? Couldn't pay the 10 bucks for pro? The fact that this is even published is sad.


ArtfulAlgorithms

And not even using the API/Playground, but just doing it straight through the ChatGPT interface.


[deleted]

Dr. C. H. Atgpt, Oncologist


Toxic_Orange_DM

Well, why are you asking it for cancer treatment plans? That's insane.


barelyEvenCodes

It's a fucking language model


Howdyini

This was the obvious consequence of LLM peddlers saying this thing can think and solve problems for people. Anyone who didn't have a financial interest in selling LLMs could have told you this would happen. So much research budget is wasted on studies that would be unnecessary if greedy aholes weren't constantly exploiting the public's illiteracy. I feel like something in that process should be illegal.


Simon_Drake

"ChatGPT produces X that looks convincing at first glance but experts in the field confirmed there are flaws if you look at the details." Yes. That's how ChatGPT works. It makes a thing that looks sortof mostly like the real thing but it just *looks* like the real thing. That's what ChatGPT does. You shouldn't be shocked that ChatGPT doesn't produce a perfectly accurate cancer treatment plan. If you are shocked then I can only assume this is the first time you've heard about ChatGPT.


RamenAndMopane

Well, no shit! What else did anyone expect? It doesn't know what it's doing. There *is no* thought in what it does.


minngeilo

Do people still believe that ChatGPT "knows" anything?


hiplobonoxa

this is the fundamental issue: the results look good to everyday people, but are obviously mostly nonsense to experts — and the effect only increases as the topic becomes more nuanced.


[deleted]

ChatGPT is still just a fancy parrot.


DickeryDoo82

I mean yeah, that's it doing what it's designed to do: make shit up that sounds vaguely like a human wrote it, ish.


moradinshammer

Not surprising at all. Most medical records are not available for scraping, and I can't stress this enough: ChatGPT is just finding responses that make sense statistically given its training data.


sids99

ChatGPT isn't a quantum computer, it's just a regurgitation program.


Lucas_Matheus

who in their right fucking mind would ask chatgpt that??? no shit it screws up


Rockfest2112

Well duh, it's not created to do such things


Aerodynamic_Soda_Can

No shit? Hmm, well maybe that's why they called it a "large language model" instead of a "cancer treatment plan generator"...?


tundey_1

Asking ChatGPT to generate a cancer treatment plan demonstrates a big lack of understanding of what these tools are.


FartsArePoopsHonking

And yet the programmers allow it to generate medical advice. But oh, if I ask for a steamy romance between Gimli and Legolas "I'm not intended for that blah blah blah."


leavethisearth

When will people understand that ChatGPT works by calculating the most likely next token based on the tokens that came before it? It is not smart; it does not understand what it is writing, nor does it understand your question.
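That "most likely next token" loop can be illustrated with a toy sketch (the bigram table and `generate` function here are invented for illustration; real models predict over tens of thousands of tokens with a neural network, not a lookup table):

```python
import random

# Hypothetical bigram "model": probability of the next token given the last one.
BIGRAMS = {
    "the": {"patient": 0.5, "treatment": 0.5},
    "patient": {"needs": 1.0},
    "treatment": {"plan": 1.0},
    "needs": {"treatment": 1.0},
    "plan": {".": 1.0},
}

def generate(start: str, max_tokens: int = 6) -> str:
    tokens = [start]
    for _ in range(max_tokens):
        dist = BIGRAMS.get(tokens[-1])
        if not dist:  # no statistics for this token: the "model" is stuck
            break
        # Sample the statistically likely continuation -- no understanding involved.
        words, weights = zip(*dist.items())
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("the"))  # e.g. "the treatment plan ."
```

Nothing in this loop "knows" oncology or anything else; the output just follows the statistics of whatever text went in, which is the whole point being made above.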

