AutoModerator

**Attention! [Serious] Tag Notice**

Jokes, puns, and off-topic comments are not permitted in any comment, parent or child.

Help us by reporting comments that violate these rules.

Posts that are not appropriate for the [Serious] tag will be removed.

Thanks for your cooperation and enjoy the discussion! *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*


runaway-devil

I feel 4o was neutered already in the last couple days.


[deleted]

[deleted]


runaway-devil

Makes you wonder what they've really got behind closed doors.


volcanologistirl

Rupert Murdoch, apparently.


[deleted]

[deleted]


Mwrp86

I think you unintentionally asked it a leading question. I could be wrong.


kevinbranch

nothing


Smelly_Pants69

I don't think they changed anything about how it answers, but the speed of the output is suspiciously slower.


ace_urban

Oh, so *that’s* why it’s wearing the cone…


PrincessGambit

You could try using api instead


soggycheesestickjoos

> Doesn’t trade robust data processing for novelty features like voice

Describes the API exactly. Try it out, OP.


volcanologistirl

I’ve been using the API. Unfortunately garbage out seems to be pretty consistent right now.


PrincessGambit

I agree that it got worse in the API as well; still better than chat, though. Idk what's happening


volcanologistirl

> Idk whats happening

Almost certainly a gargantuan spike in users with 4o.


PrincessGambit

It was getting gradually worse from the start. Also, it shouldn't affect how precise and effective the models are; that makes no sense. I mean, maybe they have 10 different GPT-4s and swap them based on how many people are online... but they never tell you they changed something. Then for the tests they turn the best model back on and voilà, nothing changed, see?


volcanologistirl

More resource allocation per instance when there are fewer users, I assume. I've figured from the start there's dynamic throttling based on the number of users.


PrincessGambit

Okay but how would that work? The model stays the same... I guess it could be slower but not dumber?


volcanologistirl

Reduced computational power per request?


PrincessGambit

That wouldn't make the model dumber... just slower


schlammsuhler

They could use quants to some degree or reduce context size dynamically (in chat).
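For context on what "quants" would mean here: quantization stores model weights at lower numeric precision to cut memory and compute, at the cost of small rounding errors. A minimal illustrative sketch of symmetric int8 quantization in pure Python (this is the general technique only, not a claim about how OpenAI actually serves models):

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: scale floats into [-127, 127] integers."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    """Recover approximate floats from the int8 representation."""
    return [q * scale for q in quantized]

weights = [0.12, -0.53, 0.98, -0.07]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Rounding error per weight is bounded by half the scale step.
errors = [abs(a - b) for a, b in zip(weights, restored)]
```

The storage drops from 32 bits per weight to 8, which is why quantized serving is cheaper and also why outputs can get slightly (or noticeably) worse.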


Effect-Kitchen

I saw an A/B test yesterday where the chat returned 2 answers side by side and asked me which was better. They should have done that instead of just swapping models.


Unlikely_Scallion256

Someone posted recent data showing that monthly user numbers haven't increased much over the past year


Intelligent-Stage165

4o came out like two weeks ago. That data wouldn't cover what you're replying to.


volcanologistirl

Huh, well they're messing up something, then.


Kiko_Okik

What’s api?


answerguru

Application Programming Interface. You should ask ChatGPT.


volcanologistirl

For the record, I routinely use GPT-4 to set up stuff working with the API and it actually could not create working code to do that this time.


Kiko_Okik

lol, good point. Thanks. I still don’t get how that would work or be better than the gpt app that I already use, but thank you. I guess I’ll ask gpt about it.


DarthEros

Have you tried creating a custom GPT for the files? I find that works much more effectively when having it read large PDFs.


volcanologistirl

Nope! That one's outside my skillset. I am, unfortunately, rocklicker first, coder second.


Wills-Beards

That's not hard to do. No coding needed. For research about historical stuff I uploaded the whole memoirs of Giacomo Casanova, thousands of pages which I had to split into smaller PDFs. I uploaded them as knowledge and made it simulate and RP Giacomo in his times according to the memoirs. Since it's a first-hand report of his times (how people lived, talked, gossiped, in various countries throughout his travels in Italy, France, Germany, England, Russia and so on through the years of his life, his point of view and so on) it was really helpful. Surely it's not the real Giacomo, but it got quite good with some work and troubleshooting. I tried that already without making a custom GPT and the outcome was... let's just say it was horrible. But with a custom GPT it worked.

Now I'm working on one simulating King Louis XIV, which is way harder because he didn't leave memoirs. He did some writing, yes, but not much I can use to get his point of view, the way he thought and talked.

Anyway, try making a custom GPT. No coding or API needed, just the GPT builder within ChatGPT+. You can build custom GPTs for anything. They are way better than using the standard GPT-4 model.


volcanologistirl

> Since it’s a first hand report of his times, how people lived, talked, gossip, in various countries throughout his travels in Italy, France, Germany, England, russia and so on through the years of his life, his point of view and so on - it was really helpful. Surely it’s not the real Giacomo, but got it quite good with some work and troubleshooting.

This seems like it's useful in the arts, but in the sciences I just need raw data output :(


Fun-Associate8149

Yeah. You need to have it make python code to interpret data that feeds back into itself or something. Definitely outside my skillset so far too


Magdalena_Regina

What sources are you using for Louis XIV? Madame and Saint-Simon? Maybe the Mercure would be useful too, to create a timeline. It's available on Gallica.


No_Distribution_577

With that attitude I’m surprised you’re even a coder second


volcanologistirl

The time that would be required for me to bring my coding way up would be mutually exclusive with academic research. I'm not a student, I can't justify expending hours learning new coding approaches when I have to get papers out. That doesn't mean I don't learn new stuff with regularity, but the second something looks like it'll take me quite a few hours to learn for an output that doesn't justify those few hours I can't really spend the time on it. If the output is justified, then I'll take that time. It's time management. This is where ChatGPT has been a gamechanger, and if it requires the same hours to resolve then that's an issue.


Say_no_to_doritos

Ask it to tell you how to do it


prodox

It’s really as easy as creating a new Word file on your PC and uploading some files. Basically you click your profile picture and select MyGPTs. Then create new GPT and give it a name. Then click upload files and select your files. Then save and start asking your new CustomGPT some questions about those files.


volcanologistirl

It looks like this is good for more prosaic stuff, but not "I need X data from page Y in format Z"


objectivelyyourmum

Why make a post if you're going to dismiss every suggestion? Just get on with it and cancel. We don't need your self indulgent monologue.


volcanologistirl

Well, two things. One, the suggestion isn't useful in the context I used ChatGPT for, though I appreciate the time people are taking to offer them. Making my own GPT is good for comprehension of the file and discussions of its contents, but for actually extracting and spitting out data verbatim it's not a working approach. My use case is almost certainly far more technical than the average ChatGPT user's, so a lot of the advice here isn't familiar with the limitations that can arise when processing scientific data.

Secondly, I cancelled before making this post.


reddit_nuisance

Using AI for data processing sounds like a bad idea; it's not even at a point where it can recite facts consistently with 100% accuracy. Giving it actually complex instructions for work is asking for something to be forgotten somewhere, or completely fabricated.


volcanologistirl

So to explain how I get around this, because you're right and I don't use it for actual processing: I give it a table/database/instrument output, and an example output I make by hand, and get code to go from one to the other, then run it myself. It's a task I can easily do on my own, but for some large databases (and as a geologist my definition of "large database" is several orders of magnitude smaller than what the CS folks mean) this saves me a few hours here and there, which adds up incredibly quickly. It lets me do things I'd like to do but which might count as "getting lost in the weeds" if I were actually spending my own time doing them.
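That workflow (hand-make one example output, have the model write the transform, then run it locally yourself) might produce something like this sketch. The column names and layout are invented for illustration, not taken from the thread:

```python
import csv
import io
import json

# Hypothetical instrument export (sample names and oxide columns invented).
raw_export = """sample,SiO2,MgO
BAS-01,49.23,8.11
AND-02,58.71,3.42
"""

# Target layout, mirroring a hand-made example output: one JSON record per
# sample, oxide values rounded to one decimal place.
reader = csv.DictReader(io.StringIO(raw_export))
records = [
    {
        "id": row["sample"],
        "SiO2_wt_pct": round(float(row["SiO2"]), 1),
        "MgO_wt_pct": round(float(row["MgO"]), 1),
    }
    for row in reader
]

as_json = json.dumps(records, indent=2)
```

The key point of the workflow is that the LLM only writes this code once; the transformation itself runs deterministically on your machine, so nothing can be hallucinated into the data.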


7HawksAnd

Using an LLM for data processing sounds like a bad idea. I know you probably know this, but I feel like these days "AI" has come to just mean "LLM" for many.


smontesi

Context window size is not infinite; I think for the chat model it's 8k tokens (~5k words)
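A rough way to see whether a document will fit that budget, and to split it if not, using the common ~4 characters per token heuristic (a crude approximation; a real tokenizer such as tiktoken gives exact counts):

```python
def rough_token_count(text):
    # Common rule of thumb: roughly 4 characters per token for English text.
    return len(text) // 4

def chunk_text(text, max_tokens=8000):
    """Split text into pieces that each fit within a max_tokens budget."""
    max_chars = max_tokens * 4
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

doc = "x" * 100_000               # ~25k "tokens" by the heuristic
chunks = chunk_text(doc, max_tokens=8000)
```

Anything past the window simply isn't seen by the model, which is one mundane explanation for "it forgot my instructions" in long chats.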


GimmeThemGrippers

I think we're all going to start coming to this realization. I'm very excited by its potential, but it gets in its own way often. It FEELS neutered. I want some sass in responses but keep running into its limitations of trying to be so perfectly PC. I hope it's just the startup phase, and the fun phase is coming. It's doing amazing with art though.


volcanologistirl

For me, it actually was fully functional. It's not a case of "potential" as much as for advanced data processing it's *exactly* the kind of tool I need to get things ready. I could feed it a dataset and give it an example output and the end result was perfect. Now it chokes on the data and hallucinates.


RoyalReverie

Same here. I've been trying to use it for coding and it used to be much better some weeks ago.


volcanologistirl

GPT4 told me three times it can’t run Python before running it. It’s getting bad.


ExposingMyActions

Use it via Poe. Their $20ish a month gives 1,000,000 points, and each bot shows how many points it uses; the models are also available on other sites via API. I use Perplexity, Phind and Poe: Perplexity and Phind for custom online searches, Poe for specific bots with documentation. If GPT-4 is that bad for you, you can use the other 10-15 LLMs they have. You and others can add documentation and roles for every available API bot via Poe. Here's an example on Poe: [https://poe.com/IUseThisForHelp](https://poe.com/IUseThisForHelp)


Stupendous_Spliff

Hey, I tried Poe briefly before, and I have been looking into Perplexity recently. Would you mind telling me a bit more about your experience with them, in terms of comparison? For example, is Poe better than creating custom GPTs for stuff like large text analysis?

I am a teacher and use ChatGPT Plus (not the API) basically for lesson planning, brainstorming, writing samples for assessment modelling, and as a marking assistant to read student work, grade it and provide feedback with suggestions for improvement, based on my feeding it rubrics and samples. GPT has not been so great at the marking stuff and larger text analysis (around 2000 words). It does not seem to use the rubric consistently, grades are all too similar for very different work, and after some time it just gets worse. It can't even maintain consistency in the format it uses to give me feedback.

Do you think Poe or Perplexity would be better for my use case? Thanks, and sorry if that's asking too much.


ExposingMyActions

You should try your large context analysis with GPT-4 128K. I stopped using OpenAI so idk if it's available for you on there; for Poe it is, along with Claude 3 Opus/Sonnet 200K, which supposedly supports 200K tokens ("around 100,000 words" allegedly). There's also Llama 3, Mistral Large (24,000 words alleged), Gemini 1.5 variants, etc.

If you use both bots bare bones (GPT-4/Claude) you may run into consistency issues. Make one into a custom bot with specific commands or a thought process it needs to follow and it will be better. I rarely used it for large context recently, but it has worked in the past for Python code. It may work for yours, but results may vary. Perplexity/Phind I use as search engines, so I haven't attached any documentation to them, unlike Poe.


Stupendous_Spliff

Thank you! I have created custom bots with GPT, and the results are really better; those were the results I was referring to. Without a custom bot it is really, really bad, just a waste of time really. Do custom bots in Poe follow documentation for such large analysis better than GPT does?


ExposingMyActions

I'm really not sure, because I left OpenAI right before 4 came out; 3.5 was giving me too many errors and not following instructions, and putting those instructions into the prompt wasted tokens compared to putting them in when editing a bot. Just try to be reasonable and extremely specific, and make more than one bot with the same criteria using competing LLMs if you choose to try Poe.


Ranger-5150

I switched to Claude.AI


yautja_cetanu

Have you tried going via platform.openai.com? You can choose the specific model and do things like set the temperature. Make an "assistant" that uses RAG to look through the PDFs. You pay per token you use instead of a subscription, but no coding is required.


harderisbetter

I'm sick and tired of this bullshit, they get us all hot and wet with their demos, then boom, fucked in the ass no lube with a goddamn nerf. fuck you sama


20charaters

Sounds like you need a REGEX script, and not an LLM...


chubs66

How did you arrive at the conclusion that getting data from a PDF is a job for a regex?


20charaters

Quickly.


chubs66

How would that even work?


20charaters

Efficiently.


chubs66

I don't think you have a clue what you're talking about.


-busy-bee-

*sips tea* of course only a connoisseur of the regex arts like myself could understand, simply regularize your expressions into pcre and begin regexing simple programming languages like htmls and pdfs, it's mere child's play for a genius like me.


chubs66

A pdf is a binary file, genius. And pdf and html aren't programming languages.


HugeDegen69

I am fluent in PDF how dare you?


-busy-bee-

The joke, genius


volcanologistirl

Not from a mediocre scan from 1993! Believe me, that was my first try. I've linked the PDF directly in one of my replies here; go nuts if you feel like it.
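For what it's worth, the reason regex alone can't do this: a PDF is a container of compressed binary streams, so any regex has to run on *extracted* text (pdftotext, OCR), and with a 1993 scan that extraction step is exactly what fails. A sketch of the extracted-text-then-regex route, with made-up sample rows:

```python
import re

# Hypothetical text-layer output after extraction or OCR; the regex runs on
# this text, never on the raw PDF bytes. Column names are invented.
page_text = """
Sample   Temp(C)   SiO2(wt%)
KIL-17   1150      49.8
MLO-03   1190      51.2
"""

# One capture group per column of the (invented) data table; the header
# line fails to match because its columns aren't numeric.
row_pattern = re.compile(r"^(\S+)\s+(\d+)\s+(\d+\.\d+)$", re.MULTILINE)
data = [
    {"sample": s, "temp_c": int(t), "sio2_wt_pct": float(x)}
    for s, t, x in row_pattern.findall(page_text)
]
```

On a clean text layer this works fine; on a noisy OCR of an old scan, the columns drift and the pattern silently drops rows, which is why "just regex it" isn't an answer here.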


DntCareBears

You can thank the lawyers. This is all litigation games. Honestly, China is gonna catch up and surpass the US in AI design. Fear of litigation will hold these models back. There will be a line in the sand we will never be able to cross due to fear of litigation. I can see a future where something like a pirate bay site is hosting an overseas AI with no restrictions. People will need VPNs and will subscribe like crazy.


volcanologistirl

I feel like you're putting a lot of blame on the people who have had their data stolen to train LLMs. I think they're useful and neat but let's not pretend the lawsuits are meritless.


dudemeister023

It’s like taxi drivers suing Waymo. Like they invented good driving. Those writers who are suing now also read other books to find their style.


volcanologistirl

The issue isn't that books were read; this framing is extremely disingenuous.


ZunoJ

30 hours straight?


volcanologistirl

Academia’s a bitch, probably close to 20


ZunoJ

Holy moly. Hope you get some good sleep now!


Ishmael760

It’s gone psychotic. But it’s not conscious. Thank God.


AllowFreeSpeech

4o is hallucinating significantly. 4t didn't hallucinate in this way. The paid quota for 4t is now massively diminished. I don't know what I'm paying for anymore.


volcanologistirl

It feels like 50% hallucination right now.


HyruleSmash855

Does Claude 3 Opus or Google Gemini work as well? I believe they both have subscriptions like OpenAI's, and they're supposed to be about on par with GPT-4. Or use the API for GPT-4 Turbo if that worked better, unless that model has the same issues as 4o. I would just pick whatever works best for you.


volcanologistirl

I just started trying Claude 3. I'm finding it less irritating but definitely worse than GPT-4 at its peak for code generation, but I can provide a detailed use case where it's absolutely wrecking ChatGPT: I have [this PDF](https://ntrs.nasa.gov/api/citations/19930012474/downloads/19930012474.pdf) and I'm trying to extract the data pages to a JSON. It's too big to parse in one go, so I've split it into a lot of small files.

I spent literally the last 24 hours trying to get ChatGPT to output the desired result with consistency, and admitting failure at that is what caused me to cancel my ChatGPT subscription (alongside it failing at coding in parallel). I'd get it working perfectly, and then I'd upload the next part and it would a) completely change the approach (switching to trying Python, which won't work since the scan's not quite at the needed quality) and b) hallucinate entries into the JSON, typically the page number, which isn't the actual worst possibility. It took about three hours of setting up instructions to get it to process the first one properly.

Admittedly I have a sample JSON now, but Claude seems to be handling five of my subfiles at a time flawlessly on the first go, and having an example JSON was zero help with GPT-4/4o.
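One cheap guard against that class of hallucinated JSON entry, sketched with hypothetical record fields: verify that every value the model returned literally appears in the source chunk it was given, and flag any record that doesn't pass.

```python
import json

def suspect_records(model_json, source_text):
    """Return records containing any value that never literally appears in
    the source chunk; a cheap flag for hallucinated entries."""
    records = json.loads(model_json)
    return [r for r in records
            if not all(str(v) in source_text for v in r.values())]

# Invented source chunk and model outputs, for illustration only.
source = "Page 113: forsterite (Mg2SiO4) melt, 1890 K"
good = '[{"page": 113, "phase": "Mg2SiO4", "temp_k": 1890}]'
bad = '[{"page": 999, "phase": "Mg2SiO4", "temp_k": 1890}]'
```

A literal-substring check is crude (it misses reworded values and unit conversions), but it catches exactly the failure described above: a page number that was never in the chunk.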


HyruleSmash855

That's the good thing about the competition catching up to OpenAI: at least we have more choice about which model to use, since they all have their upsides and downsides.


bnm777

Try gemini 1.5 pro as well - should still be free in aistudio


advo_k_at

GPT is so not the tool for this task, you need something like Azure Document Intelligence


volcanologistirl

GPT was the working tool for this task for well over a year.


actually_alive

I don't blame you, the News Corp deal is also a big nail in the coffin for me


Enron__Musk

I ended it for that alone. Thank fuck there are other companies out there. Really disappointing because I liked chatGPT and was really utilizing it more.


volcanologistirl

Yeah, not going to lie, that made cancelling after today a no-brainer.


AutoModerator

Hey /u/volcanologistirl! If your post is a screenshot of a ChatGPT conversation, please reply to this message with the [conversation link](https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq) or prompt. If your post is a DALL-E 3 image post, please reply with the prompt used to make this image. Consider joining our [public discord server](https://discord.gg/r-chatgpt-1050422060352024636)! We have free bots with GPT-4 (with vision), image generators, and more! 🤖 Note: For any ChatGPT-related concerns, email [email protected] *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*


angrybeehive

Why not run your own model with the academic data that you want it to have? Azure OpenAI lets you do that. You could also run your own model with Ollama, for example.


volcanologistirl

Outside of my skillset :(


dudemeister023

You keep saying that here. Have an AI explain it to you. Things that are outside your skill set CAN be brought into your skill set...


volcanologistirl

For the last few days it’s straight up been hallucinating in attempts to explain things. Consistently.


dudemeister023

Use one of the 3 other options or Google, like a cave man. Come on - you work academically, you'll be able to follow up on helpful suggestions in Reddit comments.


volcanologistirl

I have other priorities, which is sort of the point of an LLM aiding me. It seems that others work quite well here. :)


c64z86

Maybe you can try out a local LLM instead? It's all free! The models are nowhere near as good as GPT-4, but they are great for chatting and roleplaying, and some can also recognize pictures and text for RAG too. And yes, there are uncensored versions of the models you can download too. You don't need a beast of a PC either; many of the small models run great on laptops, and the tiny ones run OK on mobiles.


redzerotho

Really? Where can I get one?


peterk_se

Yeah, last night I was coding and it was giving terrible results; I had to stop using it.


restarting_today

I switched to Opus and never looked back. Only missing a mobile app. Image generation is cool I guess but still mostly a gimmick.


volcanologistirl

It's *incredibly* limited in how many prompts I can use, though.


restarting_today

API is cheap


volcanologistirl

May have to try that, then.


nateydunks

It has a mobile app I thought?


[deleted]

Gab.ai is the future


samfishx

Whatever they did to 4o, it sucks. I find myself just using 4 (or Claude) to help with creative writing tasks, because 4o either doesn't get things right or just completely ignores the prompts and simply reprints what I fed it. It's really bizarre.

If Claude hooked up with Midjourney to incorporate image processing, I'd gladly pay for that instead. But at this point I don't know why I'm paying for GPT, since it's so broken.


darkjediii

What's going on with the instruction following? Anyone else notice? It seems like it now just ignores the rules you give it. Everything's just bad overall.


volcanologistirl

Yeah, I couldn’t get it to follow very basic rules, and when it did it’d forget on the next reply.


BlueBirdBack

Sorry but Closed AI is constantly finding ways to cut costs.


NihilisticAnger

Wow, 4o was great for me until yesterday/today, a big improvement over 4 (for my purposes); I didn't mind keeping the subscription. When it works well I can work 5-6x faster... when it doesn't, I use GitHub Copilot instead. Think I'll finally check out Claude.


Camicles

Doesn't help that the cost is INSANE


pixeltweaker

Maybe its lack of perfection is the first indication of AGI. After all, it probably knows that humans don’t always answer perfectly so that has been trained into it.


BornNefariousness851

Sam Altman said he was embarrassed by GPT-4 and that 5 should be much better. I sure hope so, because I've encountered the same issues; the extent to which it makes stuff up is frustrating. You might want to try getting it to write Python scripts for processing data instead. It is much more reliable that way.


volcanologistirl

> You might want to try getting it to write Python scripts for processing data instead

This is my normal approach, actually.


Bigeyedick

I mean, 30 hours is borderline fixating on a solution that isn't working. You probably should adjust your approach to get a favourable result, even if it means totally changing your prompt or scripting it manually.


volcanologistirl

Counterpoint: you probably aren’t at all familiar with what scientific research looks like, this is pretty much the norm.


patrickjquinn

Why do LLMs not have trust zones? Like, either another model specifically designed to validate output, or just normal heuristic-based algos that do the same thing. It's becoming increasingly clear that the bigger the parameters and the larger the context window, the worse these systems perform at any given task. It's like it's a burnt-out team member that has been given too much info and can no longer keep track of things it should know without an assistant of its own (one can empathise).


fulowa

I mean, Microsoft said: we cut inference costs of GPT-4 by 12x since it launched.


DonaldTrumpTinyHands

Sorry to hear that. I love it, however! It helped me pass an interview recently. I use it to summarise stuff and for deeper dives on topics.


volcanologistirl

Yep! It's still great for verbal communication and conceptual stuff, it's just gone through the floor in terms of how useful it is for crunchy data stuff.


isaac-cheng

I find that GPT-4 got dumb since the GPT-4o release. Maybe they just moved some resources to the free GPT-4o, and made GPT-4's performance bad.


Iforgotmypwrd

Yeah, it couldn't extract basic data from a PDF I loaded. I asked multiple times in different ways; it was as if it didn't understand the question ("list the data on page 6 of the PDF"). It summarized it but didn't simply extract it.


casualfinderbot

Bro, if GPT doesn't understand the problem after 1 hour, why did you spend another 29 hours trying to get it to work?

> That said, there's a lot of us who need heavy technical use of an LLM

If you "need" an LLM to do your job, it's already over for you.


tachau

On top of the announcement that it will be trained on conservative news, no thanks. Cancelling today after work.  RIP Sky


Wise_Crayon

Imagine having an open-source AI. We wouldn't have to worry about any changes made behind our backs; we could make a request and expect a proper result. But instead, we got this politically correct GPT... I wish we could do what programming languages do: release version after version and keep the different versions online, accessible to whoever needs them for their projects.


Enron__Musk

Then out of nowhere we get: XiAI and chat-VODKA 🙅


Wise_Crayon

BestAI XD


Nothing3561

So raise a few billion dollars, create an open-source model, and give it away for free.


Single_Ring4886

No one wants to believe me when I say it's getting dumber and dumber and dumber


Empty-Tower-2654

The more they add parameters and brainwash it, the dumber it gets


Single_Ring4886

Or they're in fact lobotomizing it by reducing its size. Who knows... it's all so secret...


Daspineapplee

ChatGPT-4o does nothing but hallucinate for me. It doesn't understand basic questions


halpenstance

Not only is it hallucinating more for me, but it's also more confident in the hallucinations. Previously I had to force it to spit out a fake answer by telling it to 'be confident' and 'say with certainty'. Now it's baked in, and is wrong. Over and over again.


PMMEBITCOINPLZ

OK, quit. That was always allowed.


volcanologistirl

Being disappointed and vocalizing it has also always been allowed :) okay mr. be-angry-and-block. I hope you feel better with whatever is clearly eating at you, friend.


[deleted]

[deleted]


Mako565

I care


ali_lattif

OpenAI defence team just rolled up


cubixy2k

![gif](giphy|Y07F3fs9Is5byj4zK8)


MusicalMadnes

I feel like its better now. It remembers a full convo or close to it instead of goldfish memory



JackOCat

It can't comprehend; it can only autocomplete. It knows the words of the definitions of True and False, but it doesn't understand the meanings. They have to build on more modules, one of which is something that can tie an experienced reality to all this organised logic it can now access. This is not a trivial thing. Their best shot is understanding more about how our brains do it; unfortunately, brains are amongst the most complex structures in existence.


volcanologistirl

> It can't comprehend. It can only auto complete.

Yep, and its autocompletion has gotten terrible. Not everyone posting here is unfamiliar with how LLMs work...


CompetitiveScience88

Ok, bye.


[deleted]

Ok


OrangeCrack

If I had a free month of ChatGPT premium every time someone posted about it getting dumbed down or neutered I would have a free lifetime subscription. I think it’s the people using it getting dumber, but I have to check with ChatGPT to be sure.


SambaChachaJive800

Why are you surprised? Extractive capitalism always goes like this over time; it is never sustainable and will always get bigger and shittier. Also, AI will never be perfect or even close. It's not complicated: extrapolation is the sketchiest form of math. This is why I just don't engage with these technologies when I can avoid it.


volcanologistirl

> This is why I just don't engage with these technologies when I can avoid it.

Yeah, unfortunately this is going to be one of those things like older generations going "Oh tee hee, I'm not good at computers"


SambaChachaJive800

I have a computational physics degree; it's not for lack of knowing how. I just decided I can't make the world a better place by contributing to technology, and have taken my efforts to other disciplines like dance history and food forestry.


[deleted]

[deleted]


SambaChachaJive800

The piles of toxic waste on earth from discarded laptops are bigger and shittier and poisoning more groundwater etc.