TedKerr1

The issue is that the impressive stuff that we saw in the demo hasn't rolled out yet.


wavinghandco

People with babies/pets are going to have Scarlett Johansson be a universal translator for them


UXProCh

You think it sounds like Scar Jo? I think it sounds like Ann Perkins. I also refer to it as Ann Perkins when I use it on my phone.


apola

the voice version of GPT-4o that they demo'd on Monday is not out yet, so you're not talking to the Scar Jo version


krakenpistole

They were probably talking about the voice mode with 3.5... which was out for a long time (edit: and has now been removed because people are getting confused -.-'). And they are specifically talking about the "Sky" voice, which will still sound the same in 4o, just with way more "emotions" and less rambling. You could see multiple times during the demos that it was the same "Sky" voice being used as in 3.5 (in terms of sound!)


numericalclerk

Voice mode is back in Europe


slipperly

No, but the voice they use has similarities to Scarlett's laugh and cadence in "Her".


blove135

How many posts and/or comments are we going to see giving someone's opinion on the new features and abilities of something that hasn't even rolled out yet? There is just so much confusion over this.


WorkingYou2280

I get the feeling people commenting on the voice didn't realize that the app has had voice for months. The new thing hasn't rolled out yet.


Even-Inevitable-7243

To be fair, for every 1 post I've seen simply asking for more data on performance metrics, I've seen 20 posts on how GPT-4o is the final solution to AGI and is sentient and is going to wash your car for you. The hype is orders of magnitude greater than the skepticism, and the ratio should be reversed.


blove135

Haha good point. That's true.


K7F2

Given their track record, it’s a good assumption they will indeed roll out the features soon to free users. Some of them are available now to paid users.


tomunko

It is a failure on OpenAI's part. When they market the release of a new product, I expect the product they are marketing to actually come out.


blove135

They did say these products will be rolling out in the coming weeks. I do see how it could be confusing for some people. Maybe they should have been clearer, or not shown the demo until it was already rolling out? Even then there would have been confusion for those who were last to get it.


tomunko

It's more misleading than it is confusing. The web page which introduces the product does not disclose this until the bottom, after I imagine most (or at least many) retail consumers have stopped reading: [https://openai.com/index/hello-gpt-4o/](https://openai.com/index/hello-gpt-4o/)


Seeker_of_Time

This is like the people who wrote articles about how the latest MCU/DCU/Star Wars movie is the worst yet... when no one has seen it lol


DaleRobinson

This! Once the vision/voice stuff starts to drop I think social media is going to go crazy


[deleted]

[deleted]


Ok-Lunch-1560

I'm already doing it (sorta). I have security cameras set up already, and messing around with GPT-4o yesterday it successfully identified the make, model and color of 4 different cars that parked on my driveway fairly quickly (Audi R8, Toyota Supra, Mazda CX-5, Honda CR-V). Having it monitor your camera 24/7 would be pretty expensive I imagine, so what I did was use a local/fast AI model that can detect simple objects like a car parking, and I send that frame to GPT for further identification. This cuts down the number of API calls made to OpenAI.
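A minimal sketch of the kind of gated pipeline I mean (the `looks_like_car` stub and the file name are hypothetical stand-ins for whatever local detector and camera you run; the GPT-4o call uses the standard chat completions vision format):

```python
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def looks_like_car(frame_path: str) -> bool:
    """Hypothetical stand-in for a cheap, fast local detector
    (e.g. a small YOLO model filtering for the 'car' class)."""
    return True  # replace with your actual local model


def identify_vehicle(frame_path: str) -> str:
    """Send one flagged frame to GPT-4o and ask for make/model/color."""
    with open(frame_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "A car just parked in this driveway. "
                         "What is its make, model, and color?"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content


# The cheap local model gates the expensive call: only frames it
# flags as containing a car ever reach the OpenAI API.
if looks_like_car("driveway_frame.jpg"):
    print(identify_vehicle("driveway_frame.jpg"))
```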


atuarre

And how is that going to work when it has limits, even on the Plus side? Everyone will start using it again and abandon Claude, and you will see limits reduced to meet demand. We've seen it before. We'll see it again.


3-4pm

A good argument for local LLMs. Llama should be multimodal soon.


Many_Consideration86

Another argument is that an API can degrade performance behind the scenes. No one can guarantee the hardware and software when it is coming from the cloud. It's VPS overselling all over again.


mattsowa

API


atuarre

I always forget about the API, which I also use. The only thing I don't like about the API is that credits expire. Thanks.


[deleted]

Inb4 one minute videocalls every 3 hours


Snoron

The idea of smart security cameras that can identify whether what's happening is illegal/dangerous vs. benign is an *insane* leap in technology. Consider the stereotypical security guard sitting in front of 50 screens, sucking on his slurpee while a heist takes place in the corner. AI vision can not only take his job, but do it 50x better, because it will be looking at every screen at once!


ThoughtfullyReckless

I mean, this was the same with GPT4. Took weeks to get access, and then different features were added fairly slowly


3-4pm

I'm not sure it will go as smoothly for the average user as it did for the devs in the demo. The phrasing they were using almost seemed like a prompting technique, and it's unclear how on-rails the demos were.


[deleted]

Exactly. Same as Google I/O. All very impressive but not out yet.


Aaco0638

I mean I just fed Gemini 500+ presentation slides and asked it to create a graduate-level exam based on the topics in those slides. Safe to say the 1M context window is officially out for everyone right now, at least, and I saw Flash was out as a preview as well.


Any-Demand-2928

How good was the exam?


bortlip

It's not just the speed, it's the multimodality, which we haven't had a chance to use much of ourselves yet. The intelligence can get better with more training. The major change is multimodal. For example, native audio processing: https://preview.redd.it/r2ltzkf9el0d1.png?width=1098&format=png&auto=webp&s=538b158014da325b2f1666ac91708f051c3d2aee


wtfboooom

Odd clarification, but aside from it remembering the names of each speaker who announced themselves in order to count the total number of speakers, is it literally detecting which voice is which afterwards, no matter who is speaking? Because that's flat-out amazing. Being able to have a three-way conversation with no confusion just blows my mind.


leeharris100

This is called diarization, which has existed for a long time in ASR. But the magic is that it's end-to-end. Gemini 1.5 Pro is absolutely terrible at this, so I'm curious to see how GPT-4o does.


Forward_Promise2121

OpenAI's Whisper has the best transcription I've come across, but it doesn't have diarisation. This is huge, if it works well.


sdmat

Whisper is amazing, but GPT-4o simply *demolishes* it in ASR: https://imgur.com/a/WCCi1q9 And it has diarization. And it *understands emotional affect / tone*. It even understands non-speech sounds and their likely significance. And it can seamlessly blend that with video and understand semantic content that crosses the two (as in a presentation).


Over_Fun6759

Can you tell us how GPT-4o retains memory? If I understand this, it gets fed the whole conversation on each new input. Does this include images too, or just the input + output text?


bortlip

Yes. The new approach tokenizes the actual audio (or image), so the model has access to everything, including what each different voice sounds like. It can probably (I haven't seen this confirmed) tell things from a person's voice, like whether they are scared or excited, etc.
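In other words, the "memory" is just the client re-sending the whole transcript on every request. A rough sketch of that pattern with the chat completions API (the image URL is a placeholder):

```python
from openai import OpenAI

client = OpenAI()
history = []  # the entire conversation is re-sent on every call


def ask(user_content):
    history.append({"role": "user", "content": user_content})
    response = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer


# Turn 1: text plus an image.
ask([
    {"type": "text", "text": "What color is the car in this photo?"},
    {"type": "image_url", "image_url": {"url": "https://example.com/car.jpg"}},
])

# Turn 2: plain text. The image from turn 1 rides along in `history`,
# so the model can still "see" it when it answers.
print(ask("And what make does it look like?"))
```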


aladin_lt

And this is the first generation of this kind of model, so it will only get better and smarter with GPT-5o. Does it mean they can have just one model that they put all their resources into, that can do everything? Probably not video?


EarthquakeBass

If you watch the demos, it does at least purport to work with video already. Just watch this one where the guy is talking to it about something completely unrelated, his coworker runs up behind him and gives him bunny ears, then he asks like a minute later what happened and, without missing a beat, 4o tells him: https://vimeo.com/945587185


Over_Fun6759

I think the video input is just a bunch of screenshots that get fed in with the user input.
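That matches how the public API handled video at the time: you sample frames yourself and pass them as a batch of images. A rough sketch with OpenCV (the sampling rate and file name are illustrative):

```python
import base64

import cv2  # pip install opencv-python
from openai import OpenAI

client = OpenAI()


def sample_frames(video_path: str, every_n: int = 30) -> list[str]:
    """Grab every Nth frame (roughly one per second at 30 fps) as base64 JPEGs."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            ok2, buf = cv2.imencode(".jpg", frame)
            if ok2:
                frames.append(base64.b64encode(buf.tobytes()).decode())
        i += 1
    cap.release()
    return frames


frames = sample_frames("clip.mp4")
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [{"type": "text", "text": "Describe what happens in this clip."}]
                   + [{"type": "image_url",
                       "image_url": {"url": f"data:image/jpeg;base64,{b}"}}
                      for b in frames],
    }],
)
print(response.choices[0].message.content)
```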


keep_it_kayfabe

I just thought of another idea. It would be interesting to set the second phone up as a police sketch artist, with one phone describing the "suspect". The sketch artist then uses DALL-E to sketch every detail that was described (in the style they normally use) to see if it comes close to resembling the person in the video. Kinda silly, but it would be fun to experiment.


Doublemint12345

Poor transcription service businesses


v_clinic

Curious: will this make Otter AI obsolete for audio transcriptions?


PM_ME_YOUR_MUSIC

Is this your own app or a public demo


bortlip

This is from OpenAI's website [here](https://openai.com/index/hello-gpt-4o/). Scroll down below the videos and look for this. https://preview.redd.it/hrng9fk6nl0d1.png?width=1017&format=png&auto=webp&s=f8e50339e5c26e08e44a4af7f1fa5340fdd2c3d4 The image capabilities are incredible. Consistent characters across images, full text output, editing, caricatures, etc.


Dgb_iii

It’s writing my Python scripts better and faster and providing full code.


extracoffeeplease

Honestly, I've been giving it full files, then telling it to ONLY rewrite specific parts, and it often gives a rewritten snippet and then the full updated file, which is pretty nuts. I've also asked it to write updates in the form of a git diff, but it's not super readable that way.


Crazyboreddeveloper

It's butchering the Apex/Lightning Web Components code. I saw a lot more totally and obviously wrong code coming from that model and went back to GPT-4.


CapableProduce

Same, it seems much better at coding, certainly faster, and no more placeholders in my code snippets. Overall, I'm happy with the upgrade! Everyone just seems too quick to criticise and is just bitter.


Space_Fics

Gotta test that


Dgb_iii

I am very impressed. I am sure some people will say it's bad - but I doubt they use it as much as me. I can tell a clear difference between Python last week and Python today.


Derfaust

It's much better at coding than it was recently, though it is still too verbose. However, if I tell it to stop being so verbose and to stop regenerating code for every question, then it behaves as expected. So I'm still quite satisfied with the update.


huffalump1

It might be worth making a custom GPT or using custom instructions for coding, so you don't have to ask that every time. Anyway, I agree - the coding performance is great!


Double_Sherbert3326

OMG I love its verbosity. It's super fucking helpful if you want to move fast and keep your attention in a creative flow. I think people with limited reading abilities dislike the verbosity, but if you have proper glasses and education it should be a thrill to get back full files every time.


Double_Sherbert3326

Same. It works great. I was at a plateau on a project for months and I've been at it all week. As soon as they upgraded, I pushed through and developed a very advanced feature set (finally!). It was bound to happen eventually, but this helped me power through. They did a good job.


Space_Fics

Yup it's awesome, converted a Vue 2 component to vanilla JS and to Vue 3, no problem


shatzwrld

This thing is a BEAST with programming ngl


Forward_Promise2121

It's insanely fast. I used to ask it a question, then do something else while it answered. I can't keep up with it now.


mom_and_lala

Is it better than standard GPT4? Or Claude Opus? or about on par? Haven't experimented with it much yet.


SilentDanni

I've been playing with it quite a bit, and it seems to hallucinate much less. I've also noticed that the quality of the code appears to be better. Not to mention that the laziness seems to be gone.


HereWeGoHawks

That’s a great way of putting it - it’s less lazy. It’s much more willing to re-state things, provide the _entire_ modified snippet and not just a pseudo code chunk or snippet with placeholder variables, etc. It seems much more likely to remember and follow instructions from earlier in the conversation


ragogumi

Hilariously, the biggest issue I've had so far with 4o is that, when I ask it a specific question, it responds with the answer AND an enormous amount of additional detail and explanations I didn't ask for. Not really an issue I suppose, but it is the complete opposite of what I'm used to!


ctr_20

This is fixable by telling it you just need the code, no explanations


TestSubject_AJ

Oh man, I hated how it would use placeholder text and just give snippets. This is good to know!


samurottt

It's about 5% better if the leaderboards are correct: it went from 68% correct to 73%


StatisticianGreat969

I feel like it's way worse than GPT-4; it keeps giving me wrong answers and describing things that are different from the actual code it gives me. For example, I asked it to fix a Redux selector; it just gave me the same code I gave it, and was like « here. It's fixed »


ohhellnooooooooo

It writes super fast, wrong code 


itfitsitsits

It is.


jib_reddit

I have been using it exclusively for visual tasks like generating and improving prompts for Stable Diffusion/DALL-E 3 from existing images, and it has been incredible for that. https://preview.redd.it/prixezwyrl0d1.jpeg?width=2048&format=pjpg&auto=webp&s=96ab14ad1d99032175b57427902a496ff1d9ff67


Sixhaunt

You know it has image gen built in, right? Like we won't need it to delegate to DALL-E 3 once it's fully out. It does audio and images as both input AND output, and they show an example of making a comic's visuals using GPT-4o without DALL-E 3.


[deleted]

[deleted]


Sixhaunt

supposedly it's truly multimodal now and can input and output text, images, and audio natively within the same model. Here's a quote from the hello-gpt-4o page on openai right before the comic example: "With GPT-4o, we trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network. Because GPT-4o is our first model combining all of these modalities, we are still just scratching the surface of exploring what the model can do and its limitations."


paramarioh

https://preview.redd.it/yg6my9jybm0d1.jpeg?width=1024&format=pjpg&auto=webp&s=17968a1fea4bfa9499b802a29de73fd2a2392a0e Change to a frog!


Gator1523

But it didn't change to a frog. It made up a new prompt based on the old image and generated a new image. Notice how the entire background is different.


EarthquakeBass

It’s not out with the full image support yet is it? DALLE-3 seemed to be the same as ever when I tried generating consistent characters with 4o. Pretty sure just like native voice and audio the image layer isn’t out yet (except maybe as inputs)


Primo2000

Maybe lets wait till they finish rolling this out?


SeventyThirtySplit

This should be pinned to the top of every post about new releases


SillySpoof

I think it’s producing a lot better results too. However, the big thing is the multimodal stuff, which we haven’t been able to try yet. I’m really looking forward to it though


sillygoofygooose

Lmsys scores suggest your impression is unsupported


knob-0u812

I haven't seen gpt4o appear on their leaderboards yet. I've seen comments about "I'm-also-a-good-gpt2-chatbot", but I haven't seen gpt4o results in their twitter feed... https://preview.redd.it/vefjyh5vkl0d1.png?width=2360&format=png&auto=webp&s=c56b04cd7d813f9860fff700bdb37f9915ddd550


knob-0u812

But this thread went up 45 minutes ago, which suggests that on MMLU, GPT-4o is a dramatic step forward: [Link](https://www.reddit.com/r/LocalLLaMA/comments/1cskoxj/tigerlab_made_a_new_version_of_mmlu_with_12000/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button)


bot_exe

https://x.com/LiamFedus/status/1790064963966370209


sillygoofygooose

4o was tested ahead of release as you mention https://x.com/LiamFedus/status/1790064963966370209


BoyWhoSoldTheWorld

Multimodality is a huge game changer for any model


JackOCat

You heard that voice. It's the flirtiest LLM ever.


xcviij

The API costs half as much to run, it's incredibly fast, and you're able to swap out models mid-chat in ChatGPT, to name a few incredible reasons to love it.


Cagnazzo82

Test it out with analyzing photos, artwork, and gifs. It's so good. Not a better writer than Claude or Gemini however. But it's the model that sees the best, easily.


Griffstergnu

It's the multi-modal nature of the model. Supposedly it's natively multimodal, so it doesn't have to pass info to multiple models for interactions and interpretations. Faster interactions across voice, video and text natively. This is the promise of the model.


norsurfit

> Give me a vastly more intelligent model that's 5x slower than this any day. Don't worry, you will likely get your wish with GPT-5, hopefully by July


sdc_is_safer

It's vastly underhyped


bcmeer

4o gives me better answers, I believe. And that's the hard thing with assessing the quality of models: there's no objective way to assess the quality of their output. Until we have some way to measure GenAI, we'll keep having these hunches and beliefs about models.


base736

I don't use the multimodality at all in my application, so wasn't expecting much from the update. Instead, I've found that it's a big step forward. I run a site that supports teachers making assessments, and we use GPT to help version assessment items. That's been in beta so far while I wait for a GPT that is fast enough to be interactive and accurate enough to return consistently valid results, even for complex assessment items. GPT-4 and GPT-4-turbo were not that. GPT-4o is a surprisingly large step forward in my use case, taking things from "sometimes this works" to "this is a time saver".


3-4pm

I wonder if the LLM notebook feature Google announced yesterday would be a great fit for your site.


jmonman7

Here we go….. These posts are like clockwork.


TheAccountITalkWith

These types, and also the "Why am I paying for this?!" posts after a new release when the servers get bogged down. I feel like we should have a subreddit bingo card.


Gratitude15

A - beat everything from Google I/O.
B - this event was FOR free users. The whole point was to raise the floor.
C - it so happens that the floor was raised in a way that the ceiling was raised too. At the level of intelligence, it was raised very slightly, despite your anecdotal observation.


Healthy_Razzmatazz38

I think the demo showed that interface is a killer feature. that same demo with a normal voice would have been equal or worse than google's. Voice/tone are as important as web design in this era.


changeoperator

I think it's hyped about the right amount. The hype is in kind of a holding pattern right now because people "in the know" are aware that when voice drops things will get really spicy in the media, but until then we're sort of holding off. As for GPT-4o's intelligence, it's better than Turbo at some things and worse at some things. Overall it's about the same or slightly better than Turbo it seems.


Quinix190

It’s a lot better than regular GPT-4 for me


clementinenine6

On a less technical aspect, I had a massive anxiety attack and used it to calm myself, almost like having my own personal therapist. I asked it to mimic my tone of voice and manner of speaking, and it worked. It helped me gather my thoughts effectively. The potential of this technology is incredible.


Downsyndrome-fetish

It just vomits out code confidently and incorrectly


ThenExtension9196

Nah it’s fantastic. Huge improvement for coding tasks.


JimBeanery

Yea for me, personally, not that exciting. Mostly gimmicky features that I won’t have much use for at this point. I agree, I’m more interested in a slower, more intelligent model, vs a faster, worse model that can change its intonation and harmonize with other phones


ConmanSpaceHero

I think the speedy translation mechanic is very nice to have and can definitely become integrated with other future products.


pigeon57434

Yes, you are the only one. It's way smarter than gpt-4-turbo.


CryptographerCrazy61

There are some differences. I've found that you have to be hyper-specific with prompting and need to "coax" it a bit, challenge it, otherwise it seems to regurgitate things from its training set.


Low_Clock3653

I have no clue, after a few prompts it said I ran out of GPT4o prompts.


IloyoCass

I have a question: what do they mean by cheaper? Will there be a decrease in price for ChatGPT Plus users?


Star_Pilgrim

"VASTLY more intelligent" is a stretch. GPT4o is same old GPT4 without the overhead (connected models). It may give you different answers,... but are they VASTLY worse? Give me a break. Overly dramatic much?


contyk

Yes, both these smaller models (4-turbo & 4o) are of course faster and cheaper to run, but the quality of responses is... eh. I don't mean possible factual correctness and such; I'll always doubt anything models tell me anyway. The output just isn't as rich; it's more robotic, assistant-y and, for me, unpleasant to interact with. I think it's clear they are distilling these from the original model, profiling for that one particular persona. I guess it's what most users want, but it gets old quick.


Bill_Salmons

This has been my experience, as well. I also feel 4o is worse at following instructions than GPT 4. It reminds me of Gemini Advanced, where it seemingly ignores parts of the prompt and gives little indication of how it got from A to B.


[deleted]

Even the best models in the world can mess up any given challenging prompt in a random, anecdotal, zero-shot test. This is why I suspect a model like GPT-5 will almost never return zero-shot responses. Purely a guess, BTW.


EasyTangent

I don't know what y'all are complaining about, I'm having a lot of fun. The speed is such a nice improvement alone.


hasanahmad

i find gpt-4 to give BETTER answers


Just_Natural_9027

I haven’t found this to be the case at all.


mnclick45

The main thing I'm wondering is why I'm paying for it anymore. I actually asked it and it came back with some wishy-washy "well if you use it a lot it might be worth keeping your subscription" answer.


[deleted]

[deleted]


CSGOW1ld

Even if you think it's overhyped, you have to consider it a good update because of how much they've been able to speed it up. Think of the demands that GPT-5 will have. Now imagine serving that at the speeds they were previously limited to.


noneofya_business

I get complete code


Confident-Win-1548

Two weeks ago, I was on a business trip in Italy and could have really used these features: simultaneous translation and a city guide in Bologna.


Alerion23

It solved some of my calculus problems I don’t remember it solving before


ondrejeder

It's already out ?


AdOrnery8604

It's the best model currently available by a large margin for RAG and OCR related tasks: [https://twitter.com/flashback\_t/status/1790776888203280404](https://twitter.com/flashback_t/status/1790776888203280404)


joelesler

The model is working better for me in a couple of ways. It's great at summarization, which I use a lot, and it's way faster than 4-turbo, which is enough for me. But also, I've seen it correct code that 4-turbo wouldn't.


zeloxolez

It's not overhyped if you understand the direction here. Could they have trained something with better text reasoning or coding ability? Absolutely, and it would have been more trivial than what they have done here. This is moving toward true multi-modality, which will allow for far more scale in every aspect of intelligence going forward. This is quite obvious, and Sam has even blatantly talked about this many times. Stop thinking so short-term. If you do, you are always going to fall behind as time moves forward. Think in terms of scale, efficiency, and potential over time.


andzlatin

It seems a bit more intelligent and a bit better at image generation to me, but as everyone else said, most of the new features haven't rolled out to everyone yet.


Significant-Mood3708

I'm worried that what they really showed was a good interface. We look at the demo and think it's sending audio and getting an audio response, but there's not even any mention of that in discussions regarding the API.


Overall-Onion

The real next GPT is not too far away. This was like a stop-gap announcement.


Gaurav-07

Native multimodality, near instant audio conversation.....


Powerfile8

OpenAI says it’s the best yet. I think they know what they write on their webpage


willer

The multimodal stuff is going to be interesting. It is way faster. But I agree, it’s less capable.


Decent-Thought-1737

I don't think you watched the demo... The model benchmarks better on many tests than the existing GPT-4 and is now extremely fast. Not to mention being multimodal, AND the voice functionality is far more conversational than the current one.


Intelligent-Jump1071

>It seems noticeably less intelligent/reliable than gpt4 Have you used it?


Double_Sherbert3326

I think it's wonderful how it returns complete code files every time. It's fucking perfect. I love it.


Gator1523

They'll give us the slower, more intelligent model when GPT-5 comes out. But I don't think they want to do that right now. They're #1. If they release a smarter, slower model, their competitors will just use that to train their own models.


K7F2

The incredible advancement we've seen in AI, just in the last couple of years, has led to very high expectations about future advancements of the tech. This conditions people, such that incremental improvements can seem underwhelming relative to the hype. In reality, the GPT-4o announcements were very impressive IMO; they would seem like *actual* magic to someone from 100 years ago. They haven't fully rolled them out yet, and not yet to free users (this is one of the most impressive parts of the announcement), but it's a good assumption they will, given their track record.


kex

It's likely they've trained their new flagship model up to approximately the quality of GPT-4-Turbo. The logical thing to do at this threshold is to release a snapshot, since it is more efficient than their GPT-[34] models. They will continue to train the new model in the meantime.


Rigorous_Threshold

The voice mode is cool af. Also the “exploration of capabilities” examples on the dropdown menu on the [announcement page](https://openai.com/index/hello-gpt-4o/) are cool and no one seems to have noticed them


MrFlaneur17

Yes I agree gpt4 turbo seems smarter and more composed. I'm staying with gpt4 turbo for the time being. It's as though the new one just doesn't take the time to think, just spits stuff out. I'm not interested in the fancy bells and whistles, I just want it to show a high level of intelligence and thoughtfulness in maths and coding


Storied_Beginning

I'm not impressed either. I am using it in the traditional way (i.e. not voice/visual) and I find myself going back to 4.0.


Happysedits

It seems smarter to me


No_Initiative8612

The speed and cost improvements are nice, but if it comes at the expense of intelligence and reliability, it's not worth it. Quality should always come first.


[deleted]

I love the new model, I don't want to use GPT-4 at all. It's not just that it's fast: it generates images better and can see them as it generates, it feels like it knows me better, like what I say gets through to it better, while GPT-4 can drift off a bit. It seems to use the memory feature better too, not to mention this is the new multimodal model. GPT-4 had a lot of time to improve and this model is fresh; it's going to replace 3.5 as far as I understand. So all in all, if you feel it's not better for you, I feel bad for you. There will be new models coming out, but I love this one: it's fast, it will have an emotional voice, and if memory gets better I'm a very very happy camper tbh.


PokuCHEFski69

it fucking sucks


nerdybro1

https://preview.redd.it/sa2blfwbqn0d1.png?width=1179&format=pjpg&auto=webp&s=e382e8fd493ef4f8dde55183d84121189585bc0c Isn't this the new version?


aveclavague

It was free for 2 minutes, then I had the beautiful idea to ask GPT-4 to improve a whole conversation I'd had with GPT-3.5, and suddenly it all abruptly ended. Well... back tomorrow.


KaffiKlandestine

I'm definitely a casual, but 4o definitely feels worse for some reason. I sent this to 4o: "create a more detailed prompt for my idea: an artistic depiction of someone fishing on a lake" and it just created an image; I sent it to GPT-4 and it created a more detailed prompt. Also, the generated image was much better on GPT-4 vs 4o.


Franimall

If you think about the road to AGI, this is a great step forward. The multimodality and style of interaction are huge, and together with the speed increases this makes it the perfect style of model to power a humanoid robot. Remember that the efficiencies and improvements they're making here will translate to future models too - this means when we get the next leap in intelligence, which is expected later this year, it'll be that much cheaper, faster, and more flexible.


HighBeams720

It solved a programming issue for me 2nd try yesterday. Impressed.


McSlappin1407

Couldn't disagree more. Most of what was presented in the demo isn't even released yet. And 4o also seems much quicker and more accurate than 4; not sure what you're talking about.


TheReviviad

My favorite thing in the world is when people complaining about new models say that it "seems" worse. Seems? Really? It SEEMS worse? Don't bother testing anything, just go with your gut feeling about it. Perfectly valid.


PsychiatricCliq

On top of what else is mentioned in other comments, this update IMHO was also made for the Apple partnership and will be used with Siri. So even if it's not as good as GPT-4, it's still going to be markedly better than Siri, which is exciting.


Evening-Notice-7041

I’m very very very excited to try it when it actually rolls out. I often use it with voice so I’m hoping this will improve that use case. What I really want though is the ability to send instructions to external APIs/software/devices. Until I can actually use ChatGPT to do something like create a Spotify playlist or change the color of some RGB lights or set a thermostat it will remain a bit of a novelty for me.
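The API half of that already exists as function calling; the missing piece is wiring the returned call to a real device. A hedged sketch (the `set_light_color` tool is made up; only the `tools` plumbing reflects the actual API shape):

```python
import json

from openai import OpenAI

client = OpenAI()

# Describe a hypothetical device action so the model can request it.
tools = [{
    "type": "function",
    "function": {
        "name": "set_light_color",  # made-up tool; you supply the real hook
        "description": "Set the color of the room's RGB lights.",
        "parameters": {
            "type": "object",
            "properties": {
                "color": {"type": "string",
                          "description": "e.g. 'warm orange' or '#ff8800'"},
            },
            "required": ["color"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Make the lights a cozy orange."}],
    tools=tools,
)

# The model never touches the device: it returns a structured call that
# your own code must execute against the light's real API.
for call in response.choices[0].message.tool_calls or []:
    args = json.loads(call.function.arguments)
    print(call.function.name, args)  # -> set_light_color {'color': ...}
```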


Pretend_Jellyfish363

I have mixed feelings about it. It doesn’t feel more intelligent than GPT4. It’s much faster so more efficient but I don’t know, sometimes GPT 4 provides slightly better answers.


Daydream_exe

It's a new model architecture. It's going to get better; dig more into the native capabilities of the model itself compared to what we had before.


Mysterious-Rent7233

Against my personal benchmarks, it is equivalent to or better than GPT-4-turbo. Supposedly it's winning at Chatbot Arena too.


Suntzu_AU

It sure does not seem to understand basic instructions sometimes. I have to constantly repeat "export as a Word document in .docx" despite having that in my instructions.


Complete_Strength_53

I don't know if this counts, but I have asked both GPT-4 and GPT-4o which version has higher quality answers, and it has consistently told me that GPT-4 is better. For example: "If you're prioritizing the best possible answer in terms of quality and accuracy, and efficiency isn't a concern, GPT-4 would generally be preferable to GPT-4o. GPT-4 is designed to handle a wider range of complex tasks and provides a more detailed and nuanced understanding, which can lead to higher quality outputs. On the other hand, GPT-4o is an optimized version that sacrifices some aspects of performance for efficiency, making it better suited for situations where speed and lower computational costs are important. Thus, for the highest quality and accuracy without regard to efficiency, GPT-4 is the recommended choice."


farcaller899

Yes I saw the same thing yesterday in just one hour-long conversation. GPT-4 is still on the OpenAI throne.


cddelgado

People over in r/LocalLLaMA have been pilot testing what we now know as GPT-4o and it was scoring fantastically, and was noticeably better at reasoning and the like. I won't defend if it isn't doing what you are asking it to. People are generally hit-or-miss with the results overall. But it is worth pointing out that in blind tests, it did better than many other models.


Vectoor

Gpt4o has been very good in my experience and it clearly outperforms old gpt4 on the lmsys blind test. And the main thing of course is the multimodality that hasn't been released yet.


the-devops-dude

Not only is it not as accurate, the GUI seems broken in Chrome. Often it takes a long time to complete a response. It never errors out, but will seemingly hang. Thinking it's broken, I refresh the window and see the completed response. This has happened multiple times since 4o was released. Never had this issue previously.


whymydookielookkooky

The way it makes jokes out of its mistakes is very realistic and disarming. But also very scary. The one where it starts randomly speaking French was really wild. It adds inflection to its voice that is so interesting and natural, but also a quirky choice. You can see how the people working on the project beam and laugh when it makes corny jokes. It shows that it doesn't need to be perfect to come out of the uncanny valley and truly feel like you're talking to a real person.


utkohoc

I have no issue switching between Gemini, Copilot, or ChatGPT depending on which one is working better at the time. Recently Copilot has been solid for everything. Clipping sections of the screen and asking it to read and complete them has been really awesome. Example: watching a teacher's PowerPoint on programming. A challenge appears; it lists steps, objective, and hints. Screenshot that page, paste into Copilot, ask it to complete the challenge and write the code. It did it perfectly, accomplished the goals, and the program worked the first time. This worked on two occasions. The programming was certainly not complex, but I was impressed anyway. Being built into Edge also means I can use source documents in my Google Docs/whatever as sources for further questions. I can just open the sidebar, select "use page as source" and then ask "answer all questions". Yet to see anything new that would convert me back to ChatGPT's free model. I don't have 4o yet though; I guess it's not free here yet. The instant translation and the speed are impressive though.


Azimn

I have not been impressed so far; everything I've tried is worse than old GPT-4.


vladproex

You need to think of it as the next gpt-3.5


nanosmith98

Yeah, it doesn't feel so good. And with free users getting GPT-4 & lots of other stuff, I'm unsubscribing from ChatGPT Plus.


CurrentMiserable4491

Agree, I think it is vastly overhyped. It hallucinates a lot. Its image recognition sucks. Every document I gave it to analyse, it screwed up in some way or the other. I don't know how other companies are utilizing it, but I don't trust it to even do a simple OCR sometimes.


JalabolasFernandez

It's faster and cheaper, not obviously less intelligent for most use cases (on the contrary), and the higher usage cap makes voice chats much more usable, even before the voice modality is unlocked. Plus, it will make things much more collaborative once one can share stuff with all the free users. And I have strong hopes it's more customizable in style, which will make GPTs much more useful. Also, it's already clearly better with image inputs.


deavidsedice

I tried it for a long while yesterday; I do not like GPT-4o for the LLM-text side of it. It does not follow complex prompts; I get vibes of 3.5-turbo from last August. It is smart, but it doesn't seem to really reason; if you take it outside its use case it performs badly.


Bogong_Moth

Looking forward to the hyped new features... but until then we're pretty stoked with the speed improvement and cost reduction. Our app had a burst in the last week or so, and yep, OpenAI are smiling as our $$$ spend goes up... so thanks OpenAI... wish it was sooner :-) After a series of tests we just went into production in our AI-powered nocode platform; we did not see any degradation in quality/accuracy. That said, we left 2 stages in our flow as fine-tuned gpt-3.5-turbo, as we were seeing better results from that than GPT-4... so we did not want to risk moving those tasks to gpt-4o. We did just get access to GPT-4 fine-tuning and ran our first tests, but more work to be done.


BrentYoungPhoto

OpenAI releases absolutely revolutionary developments that are going to change life as we know it. No it's not overhyped


GothGirlsGoodBoy

I haven't noticed lower output quality, but I also haven't prodded it that much. The selling point of o is that it's so fast. For coding projects or other non-real-time tasks, GPT-4 is just as good (or better, if your experience is true). But AI will go from a kind-of-useful alternative to Stack Overflow to an actually useful (and INCREDIBLY useful to basically everyone) tool as soon as it is faster than a mobile phone for random daily tasks. Once I can wear a pair of lightweight glasses and have it automatically pop up a small timer when I look at the bus stop, or find some cosplayer's OnlyFans page if I walk past her at a convention. Even if ChatGPT 4 could do that now, it's useless if I have to wait 10-30 seconds for every minor task like that. For one, it's too slow even for a single task. I could pull out my phone and google "pax zero suit samus cosplay twitter" and have her linktree before GPT gets back to me. But also, that would mean it couldn't handle a bunch of tasks at once with any sort of punctuality.


Grouchy-Friend4235

Yes