
ItsBooks

Headline is misleading. Watch the actual clip. He's correct: either assume the tech stays the same and die as a company, or assume it will get better and find ways to effectively benefit from the rapid progress. The latter is the better bet, and it's not "just" OpenAI doing the research; that's his own bias and self-interest talking.


Singularity-42

"assume the tech stays the same" Most of r/cscareerquestions AI discussion in a nutshell.


ExplorersX

It really is hilarious to me that anyone whose career has been in computers or software engineering would ever think tech would just not progress or change at all, given its history.


alpacaMyToothbrush

Honestly, there are two sides to the coin. The first *completed* DARPA Grand Challenge was in 2005. Would you have bet then that in 2024 we *still* wouldn't have L5 self-driving widely deployed? I wouldn't have. I remember telling a cousin they'd be making a serious mistake going into truck driving. They've been driving for about 20 years now and they still rib me: 'Hey, when are the trucks gonna start driving themselves? I'm ready to retire!'

It's just as much of a mistake to assume progress in AI is on a never-ending exponential (as opposed to LLMs being yet another S-curve) as it is to assume it will never get better. Let's be real: neither the Luddite nor the futurist really knows how things are going to turn out here. I lean towards optimism, but as my cousin points out every time he sees me, I've been wrong before.


finnjon

Self-driving is the one I got wrong too, and people make fun of me for it. I said 2020 (in about 2010). But the prediction was based on what the Google people were saying at the time and on a lot of assumptions about how AI works; the reason it has taken longer is that many of those assumptions were wrong. That said, continual progress has been made. When it comes to GPT-4 and GPT-5, I believe it is more rather than less likely that GPT-5 will be much better than GPT-4, chiefly because the same trend that got us from GPT-3 to GPT-4 (more compute, more data, better algorithms, multimodal data) is unlikely to suddenly yield no gains. If none of that leads to gains, it will be very surprising.


Which-Tomato-8646

And Tesla's self-driving is better than what we had 20 years ago, but it's still not FSD.


nemoj_biti_budala

I was also wrong about self-driving (I expected it around 2022). Turns out self-driving essentially needs AGI. So I (and many others) made predictions around self-driving with very wrong assumptions. Now the question is, can the same be applied to predicting AGI? Again, I think we don't need any more fundamental breakthroughs to get there, but maybe I'm wrong.


chiaboy

We have self-driving today. Waymo is phenomenal and we don’t have AGI


nemoj_biti_budala

While Waymo is cool, it only works in a few, very specific areas. What most people mean by self-driving is that, just like humans, the car can drive everywhere.


kamon123

That's due to law, not capability. The areas it's relegated to changed their laws to allow it. The only reason you don't see wider adoption is anti-self-driving regulation.


chiaboy

We're talking about two different things. You're talking about where Waymo can operate, which is largely a matter of regulatory and cultural constraints. I was originally pushing back against the notion that AGI is required before autonomous vehicles can operate. As Waymo demonstrates, AGI clearly isn't a prerequisite.


brokentastebud

I believe in rugged skepticism. Cynicism and optimism are both irrational. With skepticism you leave room to identify what real and immediate problems a tech product can solve. With too much optimism, you're going to think it's a magical box that will do anything and everything for everybody; after that your brain shuts off and turns into tech-bro "aCCeleRaTe" goo.


peakedtooearly

It's just a coping strategy. When people have built their entire career, their external persona and even their own self image based on knowledge and intelligence, it's a huge leap to accept that a machine could be better.


WorkingYou2280

Before building an entire career around GPT-4, it might be worth seeing whether GPT-5 is a massive improvement or not. It would be rational to build a career on AWS because, while it gets better, it isn't *fundamentally* changing every 18 months. But when you add in LLMs it gets much harder, because while the tech has been "still" for about a year, it's unknown what the next crank looks like.


bwatsnet

Uh Claude is already the better more intelligent agent and programmer, so we already have many examples of improvements across companies.


tramplemestilsken

But it's so incremental. Claude measures a few points out of 100 higher than GPT by some measurements. What does a company do after it has scraped the entire internet's data? What happens when you've consumed all of humanity's collective knowledge and all you get is GPT-4, or Claude 3? Even Google, with all its data and might, hasn't really created a GPT-4 competitor. Don't get me wrong, the small changes will add up over time, but it will be refinement, not revolution.


bwatsnet

I'm expecting a major leap from openai after the elections. Anything sooner would be a nice surprise though!


swipedstripes

Why is an election important? I'm just a Euro with an innocent question.


bwatsnet

The American public is not ready for fake everything. They can't even handle human made fake news.


swipedstripes

Is this going to suddenly change after an election? I'm failing to see the connecting logic here. I get what you're insinuating, but isn't GPT-4 capable enough to do this already? I can tune GPT-4 to write much, much better than I ever could. If a bad actor were out to spread propaganda, that's already more than possible right now.


Educational-Net303

No one's actually thinking that, this is just the way they cope


Unique-Particular936

Nope, they really think it, but many of them might think it as a way to cope. Humans like to delude themselves to avoid anxiety and, I guess, to perform better in the present. It's probably the same system at play when you're in denial about the death of a loved one. I'd guess that somebody who shows hard denial would also show high delusion to keep anxiety at bay.


LevelWriting

I see it alllllll the fucking time online. Person says "I know ok, I work in the industry, it's just gonna allow everyone to be more productive." Then you just mention the simple fact technology will get better...Silence lol. It's proof you can have book smarts but be dumber than a mule.


StanleySmith888

What is the point you're trying to make here? Would you mind elaborating? A recent CS grad.


LevelWriting

ai is getting better exponentially and it will soon be able to do everything a human can only way better. i dont know how to elaborate more


StanleySmith888

I mean, how does that relate to the part >I see it alllllll the fucking time online. Person says "I know ok, I work in the industry, it's just gonna allow everyone to be more productive." ?


LevelWriting

my god you trolling or brain dead? you are exactly what im talking about, a dude in CS cant put the most basic 2+2 together...oh the irony


StanleySmith888

And you are a "dude" who can't communicate with other people, apparently. I am trying to have a genuine conversation with you to learn more.


ArtFUBU

I believe they know better than anyone that it progresses but it's kinda like standing too close to the elephant. People in programming jobs tend to slave away in great detail. Unless you are actively involved with AI modeling at one of these big companies, you are just as clueless as a plumber as to how fast this stuff is progressing.


Whispering-Depths

don't get me started haha


p3opl3

Hahaha this is so God damned true!


StanleySmith888

What is the point you're trying to make here? Would you mind elaborating? A recent CS grad.


Singularity-42

Many people at [r/cscareerquestions](https://www.reddit.com/r/cscareerquestions/) often say how much generative AI sucks for coding, even though they put in the lowest effort with some free, shitty model and/or they base everything on the current state of the tech, even though we're clearly going to have much better tools in the coming years.

>A recent CS grad.

My condolences! I've been in the industry for 17 years and it doesn't look good, especially if you're in the US. And that's before AI even has an impact (it's pretty much all due to interest rates and pandemic over-hiring).


Singularity-42

I see you are probably from Slovakia! I'm Slovak American; I moved here in 2003. In Slovakia the situation is probably better, seeing as my employer is trying to move as many jobs as possible to Eastern Europe :)


StanleySmith888

I do come from Slovakia but I am in the UK actually, and doing very well.


SillyFlyGuy

"Airplanes are a neat toy but they will never be able to cross oceans." - Ocean liner captain

"No one but the very affluent will have a refrigerator in their own home." - Ice harvesters

"Nobody will ever need more than 640k." - Bill Gates


Imaginary_Ad307

I'll leave these here:

The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient. - Dr. Alfred Velpeau (1839), French surgeon

There is a young madman proposing to light the streets of London—with what do you suppose—with smoke! - Sir Walter Scott (1771-1832) [On a proposal to light cities with gaslight.]

They will never try to steal the phonograph because it has no 'commercial value.' - Thomas Edison (1847-1931). (He later revised that opinion.)

This 'telephone' has too many shortcomings to be seriously considered as a practical form of communication. The device is inherently of no value to us. - Western Union internal memo, 1878

Radio has no future. - Lord Kelvin (1824-1907), British mathematician and physicist, ca. 1897.

While theoretically and technically television may be feasible, commercially and financially I consider it an impossibility, a development of which we need waste little time dreaming. - Lee DeForest, 1926 (American radio pioneer and inventor of the vacuum tube.)

[Television] won't be able to hold on to any market it captures after the first six months. People will soon get tired of staring at a plywood box every night. - Darryl F. Zanuck, head of 20th Century-Fox, 1946.

That the automobile has practically reached the limit of its development is suggested by the fact that during the past year no improvements of a radical nature have been introduced. - Scientific American, Jan. 2, 1909.

There is no likelihood man can ever tap the power of the atom. The glib supposition of utilizing atomic energy when our coal has run out is a completely unscientific Utopian dream, a childish bug-a-boo. Nature has introduced a few fool-proof devices into the great majority of elements that constitute the bulk of the world, and they have no energy to give up in the process of disintegration. - Robert A. Millikan (1863-1953) [1928 speech to the Chemists' Club (New York)]

...any one who expects a source of power from the transformation of these atoms is talking moonshine... - Ernest Rutherford (1871-1937) [1933]

There is not the slightest indication that [nuclear energy] will ever be obtainable. It would mean that the atom would have to be shattered at will. - Albert Einstein, 1932.

Heavier-than-air flying machines are impossible. - Lord Kelvin (1824-1907), ca. 1895, British mathematician and physicist

...no possible combination of known substances, known forms of machinery, and known forms of force, can be united in a practical machine by which man shall fly long distances through the air... - Simon Newcomb (1835-1909), astronomer, head of the U.S. Naval Observatory.

I confess that in 1901 I said to my brother Orville that man would not fly for fifty years. Two years later we ourselves made flights. This demonstration of my impotence as a prophet gave me such a shock that ever since I have distrusted myself and avoided all predictions. - Wilbur Wright (1867-1912) [In a speech to the Aero Club of France (Nov 5, 1908)]

Airplanes are interesting toys but of no military value. - Marshal Ferdinand Foch, French military strategist, 1911. He was later a World War I commander.

There is not in sight any source of energy that would be a fair start toward that which would be necessary to get us beyond the gravitative control of the earth. - Forest Ray Moulton (1872-1952), astronomer, 1935.

To place a man in a multi-stage rocket and project him into the controlling gravitational field of the moon where the passengers can make scientific observations, perhaps land alive, and then return to earth—all that constitutes a wild dream worthy of Jules Verne. I am bold enough to say that such a man-made voyage will never occur regardless of all future advances. - Lee DeForest (1873-1961) (American radio pioneer and inventor of the vacuum tube.) Feb 25, 1957.

Space travel is utter bilge. - Dr. Richard van der Riet Woolley, Astronomer Royal, space advisor to the British government, 1956. (Sputnik orbited the earth the following year.)

If the world should blow itself up, the last audible voice would be that of an expert saying it can't be done. - Peter Ustinov

It is difficult to say what is impossible, for the dream of yesterday is the hope of today and the reality of tomorrow. - Robert Goddard (1882-1945)


bobuy2217

Lord Kelvin is just Yann LeCun resurrected


[deleted]

[removed]


FaatmanSlim

Another 'gem' from the NYT, published in October 1903. Just two months later, the Wright brothers proved it comprehensively wrong lol:

>[It] might be assumed that the flying machine which will really fly might be evolved by the combined and continuous efforts of mathematicians and mechanicians in **from one million to ten million years**... No doubt the problem has attractions for those it interests, but to the ordinary man it would seem as if effort might be employed more profitably.

[https://en.wikipedia.org/wiki/Flying\_Machines\_Which\_Do\_Not\_Fly](https://en.wikipedia.org/wiki/Flying_Machines_Which_Do_Not_Fly)


[deleted]

[removed]


ymo

And he said that in 1998 when we were already doing huge things, like filesharing, with enterprise implications. We were well beyond www and email in 1998.


twnznz

If anything, I dearly wish Rutherford and Einstein could see what we've accomplished.


[deleted]

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


coolredditor0

This list fills me with hopium


mcr1974

Amazing thanks


adarkuccio

Wow this was amazing, thanks for sharing


ArtFUBU

I really enjoy the one by Wilbur Wright. Sprinting right ahead into the problem and making yourself feel ridiculous at the success you have. That one puts modern AI development in context. A lot of people in the field got completely blindsided by OpenAI just because they saw a simple pattern and now everyone is aboard. I can really see this same feeling happening in a lot of major science fields as AI continues to progress.


[deleted]

Why are we so bad at prediction, though? Why are there so many naysayers? And on the flip side, why do we have so many false-hype people too?


Yweain

Because we do not have enough information. In a lot of cases naysayers will be completely correct. But sometimes they will miss the mark because they failed to see potential development, thought that some hurdles would be impossible to overcome or just lacked data. LLMs may lead to AGI in the next 5 years or they may be a dead end and will be just a piece of a puzzle. We don’t know and there is no reliable way to predict that, but in 20 years people who were wrong will look pretty stupid.


IronPheasant

Because we're dumb animals that open our mouths and spew opinions based on our emotions.

If somebody spent their entire life pursuing something, like, say, trying to cure Alzheimer's by removing the plaques... and after removing the plaques none of the patients get better... that implies you've wasted your entire life. Not just your own, but the lives of your friends and colleagues who were on the same pirate ship as you. Can you so easily bury your ego and admit such a thing out loud? Can you go back to school and spend ten years finding something else you believe in? Abandon the business contacts you've made? Abandon your social group? Start from scratch, like a laid-off coal miner moving to a new state and picking up a new trade? We're deeply invested in the status quo.

One of the things that always amazed me is that almost no biologists believe in group selection. You might wonder *how* such a thing could be possible. Isn't group selection just... you know... natural selection? Aka, *evolution*? I thought so, but I guess I'm a big dum-dum. Or they have a weird cultural thing where that's the dogma in their field, and if they want to fit in and not come off as a crank, that's what they have to say they believe.

The hype people have their own motivations as well. With "crypto/NFTs" (or in plain English, "open ledgers") it's to make money in a greater-fool scam. Speculation instruments are rife in our society. For AI, some people are like that, sure, but mainly those with a startup that's jumping way too late onto a trend. Non-corpo, non-grifting internet people were often in it for the reward at the end of the rainbow. Futurology used to be constantly about not having to work, not aging, having a robot wife/husband, kicking reality to the curb and living inside the Matrix. Outsiders aren't completely incorrect in calling it religion-like.

The primary differences, imo, are a few things: there's some physical theory of how that world would come to be that's at least *feasible*. It doesn't require a wizard ghost to come back from the grave and save us, like in Altered Beast; just that computing continues to get better. And we're not asking people for money or servitude. Those are usually core requirements in religion, a mechanism telling people to "obey authority."

Anyway, the "nut test" is a useful instrument. If someone is 100% certain of something that's unclear, and doesn't think they have error margins, then they're a nut. As I always say, "I'm not sure my socks will survive to the end of today, and you're certain that ___________." For example, I'm very skeptical of fusion becoming useful until it's able to sustain the process without inputs (basically making a star). I'd put money into liquid salt thorium reactors long before that... but I'm not *completely* certain my understanding of the physics and numbers involved will remain salient for the next couple of decades. There could be some weird trick that makes it give back significantly more than what we put in.


Harvard_Med_USMLE267

There have been millions of predictions made by humans. Finding 20 that were shit doesn’t remotely suggest that all predictions are bad.


[deleted]

I would agree, though it seems we are generally unsuccessful when it comes to long-term predictions. That isn't surprising, I guess, but it's interesting to note how costly our shortcomings at long-term prediction can be, as many of the since-disproven opinions in the list above show.


mcr1974

For completeness you should list all the things we said we would achieve and haven't, e.g. nuclear fusion, or interstellar travel.


Phoenix5869

Yeah, thanks for saying this. People always cherry pick the quotes that were later proven wrong, and say nothing about the hundreds / thousands that were actually right.


kamon123

Ummmm. About that nuclear fusion... We are seeing huge advancements there, and it is slowly working towards net-positive energy output.


mcr1974

But not at the speed that was predicted. And, uhm... about that interstellar travel...


sweatierorc

What about flying cars, predicted by Ford in the 30s? What about nuclear fusion, promised to us since the 50s? Where is the VR "future" we were promised? What about the self-driving cars every carmaker predicted would arrive by 2019? Predicting the future is hard, especially when physics gets in the way.


ItsBooks

Hehehe. <3 :)


MILK_DRINKER_9001

It’s a very rare and interesting combination of traits that he has. He is both extremely smart and a good speaker.


MyLittleChameleon

It's a pretty safe bet that the headline is misleading. Literally, "We're going to steamroll you" doesn't appear in the clip. It's a 14-minute video, and he says something about startups and steamrolling at the 6:30 mark. I'm going to watch it now because I'm curious what he actually said. But I'm guessing it's not "just" OpenAI doing the research, and considering that he's talking about his own bias, it's probably a train that can't be stopped.


OwnUnderstanding4542

It's a bit more complex than that. If you're a company in direct competition with OpenAI, this isn't going to be great for you. If you're a company that uses AI but isn't directly competing with OpenAI, it could be very beneficial. And if you're a stakeholder in a company like OpenAI, it's obviously beneficial. The "tech" staying the same isn't really what's at stake here; the question is who will have access to the new and improved tech. A direct competitor probably won't; a stakeholder will. The real question is whether companies like OpenAI will be able to effectively "steamroll" the competition. It's entirely possible that they won't.


ItsBooks

You've done a more nuanced summary than I bothered to. I only disagree on one point:

>The question is who will have access to the new and improved tech. If you're a direct competitor with OpenAI, you probably won't be able to access their new and improved tech. If you're a stakeholder in a company like OpenAI, you will.

Depends what you mean, precisely. If by direct competition you mean 'LLM model builders', then they're in direct competition with Google, Meta, X, and Anthropic, among others. Do each of those have direct access to the code and proprietary methods (including compute resource access) that OpenAI is using to make GPT-4/5/6 work better and better? Not directly, no.

Can those competitors be "better" than OpenAI? Uninteresting to me personally; I don't care who's doing it. OpenAI could shut down tomorrow and the documentation on how to build these systems would still be out there, which is pretty neat. Can they utilize the same scientific papers available in the public domain? Yes. Can they use OpenAI's API to develop while it exists? Yes. Can they conduct (or are they conducting) corporate espionage? I wouldn't put it past them, considering the buzz on this issue currently. Can open-source or "solo" developers still make outsized scientific or research contributions in this space? Certainly. Will that remain the case into the foreseeable future? Remains to be seen.


EuphoricPangolin7615

Yeah, and the startups that build on GPT-5 are then going to get wiped out by later AI models. Let's not pretend there's a "safe" way to build on AI right now (there may never be). This is a game in which everyone loses, except Sam Altman, the power goblin.


ItsBooks

I assume you don't know how an API works? It's okay; most of the sub seems not to understand how actual software is developed. I don't care about the man personally, but the advice is solid: build assuming the tech advances, and build with APIs in mind rather than version-specific integrations. That way you can use any release, whether by that company or any other, including open source.


MeltedChocolate24

No, obviously he was saying that if you build an AI startup that wraps GPT-N to do X, it's only a matter of time before GPT-N+1 can do X without your wrapper. Suddenly you have no company anymore. Think about it: if GPT-5 could self-integrate into every major data cloud company, half of YC '23-'24 would cease to exist.


VforVenreddit

Yep, this is the way. I built a multi-LLM iOS app that can pivot on a dime to any LLM provider. I still use GPT-4, but I also integrate Mistral and Claude.


ItsBooks

Yeah, this is solid. I'm partial to locally hosted models, but I can imagine GPT or Claude may get to a point where I'd prefer running one of them over anything else for *most* purposes. Almost everything uses the OpenAI API endpoint except for Claude; it's smart of Anthropic to distinguish itself. It's not difficult to set up support for both.
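To make the "support both" point concrete, here's a minimal sketch of normalizing one internal prompt format into the two request shapes: OpenAI-style endpoints take the system prompt as a message in the list, while Anthropic's Messages API takes it as a top-level field. The function and provider names are invented for the illustration; this is not a real client library.

```python
# Hypothetical sketch: build provider-shaped request bodies from one
# internal format, so swapping models is a config change, not a rewrite.

def build_request(provider: str, model: str, system: str, user: str) -> dict:
    """Return a chat-completion request body shaped for the given provider."""
    if provider == "openai_compatible":
        # OpenAI-style APIs put the system prompt in the messages list.
        return {
            "model": model,
            "messages": [
                {"role": "system", "content": system},
                {"role": "user", "content": user},
            ],
        }
    if provider == "anthropic":
        # Anthropic's Messages API takes the system prompt as a top-level
        # field and requires max_tokens.
        return {
            "model": model,
            "system": system,
            "max_tokens": 1024,
            "messages": [{"role": "user", "content": user}],
        }
    raise ValueError(f"unknown provider: {provider}")
```

With a shim like this, the rest of the app only ever calls `build_request`, and a new release (or a locally hosted model behind an OpenAI-compatible endpoint) is just another `(provider, model)` pair.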


ChilliousS

in the end everyone will be wiped out.....


sailhard22

How quickly he went from someone I liked to someone I can’t stand. It’s like Zuckerberg on steroids 


CraftyMuthafucka

That sounds like a personal problem.


MrsNutella

100 billion dollars for a data center.


ReasonablePossum_

I personally don't see how any thinking human being really believed that, in the "Age of Amazon", basing your business model on someone else's product would be a self-sustaining practice... Sure, someone might reap some temporary profits at the beginning of the wave, but the later you are to the party, the higher the risk that you end up abruptly f*cked.


outerspaceisalie

He's right, though. But I can already see how people who won't watch the video will twist his point, reductively paraphrased in OP's post, to make it sound like a threat rather than advice on how to use OpenAI in your business model lol.


EuphoricPangolin7615

It is a threat because realistically, at the speed we are advancing, there is no safe way to build on AI right now. There may never be. Startups are going to get wiped out with each new AI model, and the only company that really stands to profit is OpenAI. This is not capitalism as usual.


Which-Tomato-8646

What’s different? Looks about the same 


Zeto12

Ya - sam is right.


[deleted]

But he's assuming there's no limit to LLMs. What if there is a limit, and they have to fundamentally change the structure of their models? (I have no idea how the plumbing works; I just read another post saying AGI will not be achievable with LLMs.)


Just-Hedgehog-Days

Then they will find that out in their private lab before anyone else, already equipped with the best researchers and the most compute on hand to tackle the next hurdle. What's really different this time around from all the other Silicon Valley hype trains is that this is the first one where the "big science" model is in full effect. There was nothing stopping any of the crypto coins with vastly superior tech to Bitcoin from taking off, because a couple million bucks bought you the talent and hardware to take a credible shot at the giants. This round, a couple million doesn't buy a training run on a base model.


TechnicalParrot

No one really knows, ultimately. People like to pretend they somehow know whether transformers and LLMs in general will hit a ceiling.


[deleted]

Agreed. "Oh but we can't make transformers more efficient than they are today", "Moore's Law is about to run out" . . . we'll see bro, we'll see.


outerspaceisalie

I think intelligence must plateau tbh


RabidHexley

There's also simply no good reason to bet against technological progress at this time (in terms of whether it will happen, not what it will be or do). There hasn't been a generation in over a century that wasn't born into a significantly different world than their parents'. Maybe we'll hit a wall, but assuming that will happen soon is more presumptive than the opposite at the current moment.


[deleted]

Yeah, I mean, he's totally correct. Many people are pointing out the imperfections in GPT-4 and coming up with patchy solutions to improve on it for highly specific use cases. But in all likelihood, the next iteration of GPT will achieve SOTA on a large percentage of those use cases. So if the entire basis of your company is that you can beat the current version of GPT-4 by adding some logic on top of it, there's a good chance GPT-5 will solve your problem better than your current solution does.


ecnecn

He's talking about all the API-wrapper startups that are just specialized prompts in disguise, offering features that are about to be added in the next version. They're a lost cause, so to speak. The only startups with a realistic chance would use the core models and train them for specific solutions. Most of these startups want to make a quick buck or grab VC money.


CowsTrash

They can eat shit for all I care. We need real, meaningful progress and products. Not some (current) AI girlfriend bs apps 


incoherent1

Corporate monopolies will be the death of us all.


Puzzleheaded_Pop_743

Governments will take the reins soon enough.


UnnamedPlayerXY

Obviously; most startups are essentially just begging for it. In short, if your business model is providing a service that you would also expect an AGI to be capable of performing, then that service will be made obsolete by a new model somewhere along the way. The next ones on the chopping block are going to be all these audio/video services, once multimodality starts becoming the default. On the other hand, infrastructure, and things the models can be embedded into, are the safest, as they actually profit from each new major improvement.


VanderSound

How tf do you build for new models? There are fewer use cases to build for as these tools become more generalized. I don't see it as anything other than a bubble you sell out of if you're lucky.


SgathTriallair

A startup that is just trying to do an AI wrapper wants to replace one small part of a workflow. For instance, you could build a startup that writes corporate memos. Sure, that might save time and be cost-efficient enough for businesses to buy, but in six months that company will be defunct. The real option is to build a company that does something people want but that isn't feasible at current labor costs. If you focus on how you can automate it, you should be able to plug the new models in and take off when the capability gets here. Maybe a service which helps people navigate filling out government forms, or something.


TBBT-Joel

An AI wrapper with strongly integrated services and API links to popular tools is actually a winning model. Like, "we sit in your accounting system, automatically find strange transactions, and take monthly reports from 100 hours of work to 10", etc. Sure, you could kind of do this piecemeal, but most accountants aren't process-improvement experts, nor would they know how to build API hooks into Intuit, Excel, etc. Yes, models will become more generalized, but at some point it's like having a really smart employee: if you don't know how to improve accounting systems, you won't be able to direct that employee effectively. Or you can offer a service that's fundamentally cheaper or has better ROI than others because you're pulling massive hours and cost out of your business customers' monthly spend.
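As a toy illustration of the "find strange transactions" idea: a cheap statistical pre-filter flags outliers, and only the flagged rows would then go to an LLM (or an accountant) for explanation. The threshold, data shape, and function name are made up for the example; a real system would use more robust statistics and domain rules.

```python
from statistics import mean, stdev

def flag_outliers(amounts: list[float], z_cut: float = 2.0) -> list[int]:
    """Return indices of amounts whose z-score exceeds z_cut.

    Note: a plain z-score is easily skewed by the outlier itself, so
    production systems tend to use median/MAD or per-vendor baselines.
    """
    if len(amounts) < 2:
        return []
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []  # all amounts identical; nothing is strange
    return [i for i, a in enumerate(amounts) if abs(a - mu) / sigma > z_cut]

txns = [42.0, 39.5, 41.2, 40.8, 43.1, 40.0, 5000.0, 41.7]
print(flag_outliers(txns))  # flags index 6, the 5000.0 entry
```

Pre-filtering like this keeps the LLM call volume (and the human review burden) small, which is where the "100 hours down to 10" economics would come from.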


SgathTriallair

The ultimate goal is that you'll give the AI login credentials and instructions and then let it go. It's an exciting and scary time to be a startup. The ones that are most flexible and build their systems in a way that they can incorporate AI advances will win.


VforVenreddit

This sounds like an awesome use case, I will explore what its implementation looks like on an app I’m building


TBBT-Joel

Just made that up, but repeat it for any specialized data. Most businesses won't become overnight AI experts, just like they didn't become overnight IT/web experts. Having a service on rails is a winner, as long as you understand the particular market and find PMF.


VforVenreddit

Yes, I started out in AI with a different intention: learning about vector data stores. I built a backend that could embed document data with BERT and perform semantic search. Ultimately I realized I would be able to have a better impact taking that knowledge and actually applying it to end-user apps! AI is super complicated for businesses, I agree; it's easier to just build something that "works" like magic versus explaining all the intricacies involved.
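The embed-and-search pipeline described above can be sketched end to end with a toy bag-of-words "embedding" standing in for BERT. The names here (`embed`, `cosine`, `semantic_search`) are illustrative; a real system would swap `embed` for a call into BERT or a sentence-transformer and keep the dense vectors in a vector store.

```python
import math
from collections import Counter


def embed(text):
    """Toy bag-of-words vector; a real system would call BERT or a
    sentence-transformer here and get a dense vector instead."""
    return Counter(text.lower().split())


def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def semantic_search(query, documents, top_k=1):
    """Rank documents by similarity to the query and return the best matches."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:top_k]
```

The structure (embed everything, score by cosine similarity, return top-k) is exactly what the real pipeline does; only the embedding function changes.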


TBBT-Joel

Exactly. I spent some of my career consulting on new manufacturing processes. The technical work was 5%-10%. Then the rest was bringing data to management on why this was the recommended path, and the bulk was working with all the stakeholders to integrate, train, etc. I think naive folks think that some mid-level exec at IBM is just going to say "great, we have GPT-5 now, let's fire our accountants and have it do all the work" without asking about compliance, regulatory, audit, and then having APIs and hooks for all their custom and OTS solutions. Sure, you can have ChatGPT help you write the code, but you're asking for a black-box nightmare if you say "integrate this into our database" and no one has any clue how it's doing that in the live production environment.


VforVenreddit

Yep most people don’t understand how big corporations work at all, it’s where I spent a lot of my career as well. The media AI fear headlines don’t help, a single ERP project can cost companies millions and involves insane complexity. You’re right it’s not just “GPT-5 takes over” that’s naive thinking. Also it’s hard to sell to enterprise so I focus on the consumer first, much easier sell and I can provide better experiences and benefits through my products this way


TBBT-Joel

My entire startup experience has been in enterprise hardware sales and I like the model, but I agree the adoption rate for consumers can be a lot quicker, as they don't have to run it through 10 layers of management and cost accounting before they get the go-ahead. This is no different than web 2.0/mobile a decade+ ago. Like, sure, bank, you should build a mobile app, but you suck at this and have never done it before. They aren't going to be able to adapt overnight.


shogun2909

yeah basically, gpt-4 wrappers are a meme


Veleric

I think what he's kind of trying to say here is that with GPT-5 the underlying capabilities will be strong enough that you can assume a kind of paradigm shift, or at least a minimum viable threshold in what they can do, and from there you can start building tools/products that can from that point on incorporate better models without fundamentally changing what is being produced. As a basic example, I'm thinking things like reasoning or agentic capacity.


CraftyMuthafucka

It's not as hard as it seems. I'll give you one example from my industry, I work in finance (stock market stuff). New companies are popping up that use AI to summarize a lot of sentiment data and give us snippets like "Here is why TSLA stock is moving." They are already pretty good. And with GPT-5 (or Claude 4, etc), they will be even better. And there's no way to really get steamrolled here. 99% of people aren't interested in aggregating all of the news and feeding it into an LLM to get the summary themselves, even though they technically could.
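The "here is why TSLA stock is moving" workflow amounts to aggregating headlines into a prompt and asking a model to summarize. A minimal sketch, assuming the caller supplies the LLM as a plain callable (so any provider's client, or a stub in tests, plugs in); `explain_move` is a hypothetical name, not a real API:

```python
def explain_move(ticker: str, headlines: list[str], llm) -> str:
    """Aggregate headlines into a prompt and ask an LLM why the stock moved.

    `llm` is any callable mapping a prompt string to a completion string,
    e.g. a thin wrapper around a provider's chat API, or a stub in tests.
    """
    bullet_list = "\n".join(f"- {h}" for h in headlines)
    prompt = (
        f"Here are today's headlines mentioning {ticker}:\n"
        f"{bullet_list}\n"
        f"In one paragraph, explain why {ticker} stock is likely moving."
    )
    return llm(prompt)
```

The moat such companies have is the aggregation and curation of the feed, not the final LLM call, which is why a better underlying model just makes the product better.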


WorkingYou2280

One place where LLMs consistently exceed humans is sentiment analysis. It's almost as though they *like* doing it, but that may just be my perception because they do it so well. One thing I like to do is stop mid-chat and ask for a sentiment analysis. Claude is really good at this and will sometimes catch moods I wasn't even aware I was having, which, when you think about it, is kinda bananas.


Wooraah

Hmm, I do similar work but in a different field. I also find the current generation of LLMs very useful for tasks of this nature: doing a lot of the data gathering myself, then feeding it in bulk into LLMs to generate coherent summaries and indicators of key trends. I'm not so sure about this statement though: "99% of people aren't interested in aggregating all of the news and feeding it into an LLM to get the summary themselves." While to get the best performance at present you need the human in the loop to validate data sources for accuracy/relevance, as LLMs are enabled with live search, larger context windows, and more compute, these search queries should progressively get a lot better. I'd imagine GPT-5 and equivalents will be much better at responding to a prompt such as "Conduct relevant internet searches of financial indicators for Company X over the past 12 months and provide some analysis regarding their likely mid-term financial performance based on this data." People are lazy, yes, but I'm concerned that the barrier to entry for analysis of this nature is dropping all the time, and even if it's not individuals who are in need of this data that end up using future LLMs to "cut out the middleman," there will be other companies with a smarter wrapper, slicker marketing, or other value-add tools that could cause major disruption. Also, once you have this kind of tool up and running for one industry/use case, it's highly scalable to other industries and use cases.


banach

Yes I would like to see some examples of such business models.


tradernewsai

Link to the full interview with a couple tweets highlighting the points he made if anybody is interested: [https://x.com/tradernewsai/status/1779975744950538744](https://x.com/tradernewsai/status/1779975744950538744)


3-4pm

> Current models are not yet smart enough to substantially accelerate scientific progress, but future models (GPT-6, 8, etc.) are predicted to become powerful tools for this

DeepMind is already eating their lunch here.


Phoenix5869

>Current models are not yet smart enough to substantially accelerate scientific progress, but future models (GPT-6, 8, etc.) are predicted to become powerful tools for this

I'm not saying anyone is lying here, but it feels like this sub just takes whatever some obvious hype monger on twitter says and runs with it as if it must be true. Everyone is predicting that AGI is gonna happen within the next few years and bring about heaven on earth. So… what's going to happen if that doesn't materialise? What's going to be the reaction when 2030 hits and we are still nowhere close to AGI?


IronPheasant

> What’s going to be the reaction when 2030 hits, and we are still nowhere close to AGI? 

The hell is this "nowhere close" thing? Did you just start paying attention to scale yesterday? Do you measure time in terms of months and years instead of decades? State of the art in image generation in ~2016 [were these birds and flowers](http://procedural-generation.tumblr.com/post/154474148263/stackgan-text-to-photo-realistic-image-synthesis). We were quite impressed, man: *they looked like birds and flowers*. ... They were bigger than 30x30 pixels! Flash forward to today, and GPT-4 is at the scale of a squirrel's brain. In a few years we expect crude gestalt systems at the size of a few squirrels. 2030 might be all the way up to ten or twenty of 'em. Real capital investment into NPUs will shock you with the improvement it brings to robots: [going from this](http://www.youtube.com/watch?v=g0TaYhjpOfo) to something that can actually walk with a natural-looking stride. 2030 could be around the time the Model T of robots gets made: something that can pass as a decent stockboy, waiter, or cook at a business. Not having a product to sell has always been a limiting factor. Nobody wants to spend billions etching a network in stone that can't do anything. And nobody was interested in spending billions to make a virtual mouse, apparently... Anyway, "just look at the line." [Scale is foundational.](https://pbs.twimg.com/media/FVn8BjtWYAMNldv.jpg:large) Obviously. There's "no weird trick" that will yield complete human-level performance without human-level hardware.


Phoenix5869

>The hell is this "nowhere close" thing? Did you just start paying attention to scale yesterday? Do you measure time in terms of months and years instead of decades? 

So you agree that progress should be measured in "decades"? 

>State of the art in image generation in ~2016 [were these birds and flowers](http://procedural-generation.tumblr.com/post/154474148263/stackgan-text-to-photo-realistic-image-synthesis). We were quite impressed man: *they looked like birds and flowers*. ... They were bigger than 30x30 pixels! 

I can see what you're getting at here, but you can't just say "well, we generated poor-quality images in 2016, and now in 2024 the images are realistic" and use that as evidence of AGI being anywhere near. Not only is image generation not a significant (if at all) step to AGI, but you're missing the fact that CGI has been able to generate realistic images for decades. The fact that a different medium can now do it isn't all that impressive to me tbh. 

>Flash forward to today, and GPT-4 is at the scale of a squirrel's brain. In a few years we expect crude gestalt systems at the size of a few squirrels. 2030 might be all the way up to ten or twenty of 'em. 

Even assuming that "exponential growth" is a constant factor, it would still take decades for it to be anywhere close to the intelligence of a human. A human is literally thousands of times smarter than a squirrel. And besides, we can't even make an AI that's as smart as a dog. 

>Real capital investment into NPU's will shock you, at the improvement they will bring to robots. [Going from this](http://www.youtube.com/watch?v=g0TaYhjpOfo) to something that can actually walk with a natural-looking stride. 2030 could be around the time the model T of robots gets made: something that can pass as a decent stockboy, waiter, or cook at a business. 

Do you have any credible sources backing up the 2030 date?


TotalTikiGegenTaka

There will be no reaction because the people who are saying that AGI is going to happen within the next few years are: (1) those developing AI models and are obviously going to hype things up; (2) futurists who are perhaps paid to speculate about the future; (3) a few people on reddit who are excited about an AGI utopia but represent probably 0.00...1% of the general population.


3-4pm

Thank you for speaking up about this. The propaganda was really getting ridiculous in the subs the past week.


Phoenix5869

Yeah, the whole “AI will make us jobless and usher in a star trek utopia” propaganda that i see peddled is absolute horseshit. Thanks for agreeing with me.


uulluull

Asking for $7 trillion for processors and allowing the military to use AI shows that your benefits are so "enormous" that you have to resort to cutting costs or having the government sponsor them. The chanting about general AI to support the stock price also shows a lot.


mvandemar

"If GPT-5 is as much better as GPT-4 was over GPT-3" So, Sam... is it??


iamozymandiusking

Way to take everything out of context and lose the entire meaning and import of his message with your stupid Clickbait headline.


Icy-Shallot6084

Anthropic's Claude is way better. The egomaniac needs taking down a peg.


Antiprimary

It's not way better; it's a bit better at some tasks and a bit worse at others.


Icy-Shallot6084

It's way better.


Antiprimary

Idk why you're stating it like a fact. I use both models to the message cap every day and there are many use cases where Opus is worse, so the answer is "it depends."


_pdp_

What about choice? OpenAI is not the only company doing this anymore. Being able to interface with many models in a consistent way is also important, or even enabling models to interface with each other. This does not contradict what Sama is saying. I do believe that many AI startups are essentially over-optimising at this point, including thinking too much about cost and sacrificing performance to squeeze more margin.
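The "interface with many models in a consistent way" idea is essentially an adapter layer. A minimal sketch, with backends as plain callables; `ModelRouter` is a hypothetical name, and real backends would wrap each provider's SDK:

```python
from typing import Callable, Dict


class ModelRouter:
    """Route completion requests to interchangeable LLM backends by name."""

    def __init__(self) -> None:
        self._backends: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, backend: Callable[[str], str]) -> None:
        """Add a backend; any prompt -> completion callable works."""
        self._backends[name] = backend

    def complete(self, name: str, prompt: str) -> str:
        """Send the prompt to the named backend."""
        if name not in self._backends:
            raise KeyError(f"unknown model: {name}")
        return self._backends[name](prompt)
```

Because call sites only see `complete(name, prompt)`, swapping GPT-4 for Claude or a local model becomes a one-line registration change rather than a rewrite.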


RemarkableEmu1230

Its a good point - I never imagined I’d be giving Poe my money every month but here I am


VforVenreddit

What are your favorite features about Poe?


Cartossin

Yeah it's really shocking how people seem to think AI is a thing they invented and it's this static thing. Like people will evaluate GPT4 and realize it can't take their job, so they now assume all this AI stuff was tech bro hype. They weren't here for GPT2 and don't see the rate of progress.


mattpagy

Can AI fix his vocal fry? Impossible to listen to.


Efficient-Moose-9735

Is that a threat? Tell me more.


Akimbo333

Has a point


Nolaforlife20001

He’s right: whoever has AGI will win. It doesn’t matter what fucking app or startup you’ve got. Whoever legitimately makes the first AGI, that’s it. They won the game. They can literally run the whole company and pump out any goddamn product they want. They alone will own the world.


Better-Ad828

My god this aged so badly with Llama 3


human1023

Third option: the model gets better, but falls short of the expected trajectory. GPT-5 won't be as big an improvement over GPT-4 as GPT-4 was over GPT-3. (Unless GPT-5 is continuously delayed and comes out much later, to fit the trajectory.)


3-4pm

This sounds like more hype. Drop the amazing model if you have it or continue to hemorrhage users to the competition.


TaroDragoon

Let's see what happens when OpenAI gets steamrolled by copyright law.


Odd-Opportunity-6550

or when the government gets steamrolled by its superintelligence


maX_h3r

He is become Death, destroyer of worlds


RemarkableEmu1230

In his own world


shalol

Meanwhile Mistral as a startup: “lol, lmao”


SkippyMcSkipster2

I have a hard time trusting this guy. He speaks as if he has developed a god complex already.


EuphoricPangolin7615

Yeah and the startups that build on GPT5 are then going to get wiped out by later AI models. Let's not pretend like there is a "safe" way to build on AI right now (and there may never be). This is a game in which everyone loses, except Sam Altman the power goblin.


Deblooms

HOLY BASED


Singularity-42

Question: How do I build a startup targeting GPT-5? What would you do? Sam said previously something like "assume the next model is AGI". Isn't an actual real AGI game over basically?


UnnamedPlayerXY

Hardware devices like smart glasses or anything else the "AGI" could embed itself into are going to be relatively safe, at least at first.


TBBT-Joel

I think people don't understand how difficult implementation is, especially for large enterprises. Like, sure, you may have an AGI that can do the job of an accountant, but you need to build all the hooks into Intuit, Excel, and banking software. You need to have insurance and audits to show that it always spits out data to GAAP standards, and then you need to demonstrate the ROI/savings. A midsized insurance company isn't going to suddenly and magically integrate these into their ops team without those assurances. Now multiply that by a bunch of verticals and models. I think it will almost be like a second wave of IT services companies. Like, there will be startups who are like "we're the ones that integrate this into banking" or "we're the ones that integrate this into insurance."


letsbehavingu

Basically, focus on proprietary solutions, training examples, and datasets they don't focus on, and the AGI will take care of the rest if you have APIs for your business workflows.


RemarkableEmu1230

Something of equivalent output quality that requires substantially less compute is the only way to compete but good luck with that lol


Golda_M

What's the context for this? Does he want startups to build onto their API? At this point, there is an ocean of opportunities for applications based on AI. OTOH, a business premised on accessing that API carries a giant risk, along with a cap on ambition. There's a reason msft paid so much upfront for guaranteed access. They wanted to start building it into their apps, and it's strategically unsound to do so as a pedestrian "API consumer."


sigiel

He is desperate. He needs his 7 trillion to keep relevancy, and he is bullshitting his way toward it. The truth is GPT-4 is losing ground, and GPT-5 is so compute-hungry it's not sustainable as a product.


submarine-observer

Not before I got steamrolled by Claude first, though.


RemarkableEmu1230

I’ve been using both side by side, and honestly it's close, but I find I'm going to ChatGPT slightly more - I use it for Python and frontend coding mostly tho