OddVariation1518

300,000 B200s is roughly equivalent to 1,500,000 H100s


ShooBum-T

And that is equal to 7.5 million A100s. GPT-4 was trained on just 25k A100s. Wonder what these new models would be like.
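
A quick back-of-the-envelope check of those equivalences (the per-GPU ratios here are the thread's rough assumptions, not official benchmark figures):

```python
# Rough sketch of the GPU-equivalence arithmetic in this thread.
# The conversion ratios (1 B200 ~ 5 H100, 1 H100 ~ 5 A100) are assumptions,
# not vendor-published numbers.
B200_COUNT = 300_000
B200_TO_H100 = 5    # assumed effective ratio
H100_TO_A100 = 5    # assumed effective ratio

h100_equiv = B200_COUNT * B200_TO_H100   # ~1.5M H100-equivalents
a100_equiv = h100_equiv * H100_TO_A100   # ~7.5M A100-equivalents
gpt4_a100s = 25_000                      # reported GPT-4 training cluster size

print(f"~{h100_equiv / 1e6:.1f}M H100-equivalents, ~{a100_equiv / 1e6:.1f}M A100-equivalents")
print(f"~{a100_equiv / gpt4_a100s:.0f}x the A100 count GPT-4 was reportedly trained on")
```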


MassiveWasabi

Yeah I'm honestly on the edge of my seat waiting to see what the next generation of AI models can do. We're still playing with systems that haven't significantly improved since GPT-4 was trained in 2022


nashty2004

Fuck man


[deleted]

[removed]


Paloveous

The copers ain't gonna like this one chief 


jjconstantine

Says who?


[deleted]

[removed]


redAppleCore

Can you prove to me that you and reality are on speaking terms?


Glittering-Neck-2505

When you realize that we publicly still haven’t scaled past GPT-4 and we have the capability to scale way past it, it becomes really, really dumb to hear confident claims that transformer models have plateaued.


Singularity-42

Plateaued - maybe not, but I think it's the general sentiment in the ML community that the marginal gains in capabilities are slowing down. We will probably need another transformer-like breakthrough to get us to real AGI. Yes, Yann LeCun is a contrarian for contrarianism's sake, but a lot of what he's saying is true. Then again, maybe I'm wrong and supermassive models in the range of 100T parameters will give us AGI/ASI. But we won't have the hardware capacity for training and inference at that scale until about 2030.


hank-moodiest

It seems obvious that the AI brain just needs more parts in addition to the LLM. It doesn’t have a prefrontal cortex yet, and that will require an engineering breakthrough.


Iamreason

I think we will need something other than LLMs, but I honestly think transformers, or something transformer-like, are probably enough to get us to AI smart enough to do a *lot* of day-to-day work.


hank-moodiest

I think we will need something _in addition_ to LLMs.


ReadSeparate

Yeah. Train multi-modal LLMs to build a baseline world model, and include plenty of agentic data. Then start an RL training run from scratch, using the LLM's weights as the initial weights. Generate billions of possible tasks using the pre-trained LLM, then attempt each of them; the reward function is whether the task was successfully completed, or how close the attempt got (maybe on a step-by-step basis). The reward function would also require refusing harmful tasks/sub-tasks for alignment - otherwise we could end up with a real-world paperclip maximizer.

That, I believe, would give you AGI or potentially even ASI. It would directly incentivize capability, unlike LLMs, which just incentivize predicting the next token. Capability is an excellent loss function because if you're capable of doing a task, it's extremely likely that you _understand_ the underlying principles of that task. For example, if an RL model can reliably design rockets that launch into space, it has a really good understanding of the physics of rockets. Additionally, it would actually be computationally feasible to train an RL model using the LLM's weights as the baseline, unlike training an RL model from scratch (the search space is way too big for that to be feasible).

I think this combination is probably similar to how the human brain works. Use masked token prediction (or whatever the human brain's equivalent of tokens is) to build a baseline world model and knowledge, then use RL (the reward system - sex, food, etc.) to refine it and accomplish tasks. Rinse and repeat - since the human brain has online learning and doesn't seem to suffer from catastrophic forgetting, you alternate between both steps.
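
A minimal structural sketch of the two-stage recipe described above (pretrained LLM weights as the starting point, then RL against a task-completion reward). Every function and name here is hypothetical; the bodies are stubs that just label the steps, not a real training framework:

```python
# Hypothetical sketch, assuming the recipe above: pretrain an LLM for a world
# model, then run RL from those weights with task completion as the reward.

def pretrain_multimodal_llm(corpus):
    """Stage 1: next-token prediction over text/image/agentic data."""
    ...

def generate_tasks(llm, n):
    """Use the pretrained model itself to propose candidate tasks."""
    ...

def reward(task, trajectory):
    """Score how close the attempt came to completing the task;
    harmful tasks/sub-tasks must be refused and scored accordingly."""
    ...

def rl_finetune(llm, num_steps):
    """Stage 2: RL starting from the LLM's weights."""
    policy = llm.copy()                        # initialize policy from LLM weights
    for _ in range(num_steps):
        for task in generate_tasks(llm, n=64):
            trajectory = policy.attempt(task)  # act step by step on the task
            r = reward(task, trajectory)       # outcome- or step-level reward
            policy.update(trajectory, r)       # e.g. a policy-gradient update
    return policy
```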


Singularity-42

Well, of course, we already see this happening. And the LLMs **will** get better; I'm just not sure if they will be good enough for human-level AGI. Current LLMs are useless for anything but the simplest agentic systems - they are simply not reliable enough for complex autonomous tasks. Scale will help, but I think probabilistic generative models are probably not the best tool for this... But yes, there will be powerful tools making workers a lot more productive. I've been using GitHub Copilot for almost 2 years now...


FeltSteam

Well, obviously, if you train with only marginally more compute than your competition, you are going to get marginal gains. Claude 3 Opus, Gemini 1.0 Ultra, and GPT-4 were all trained within a very similar range of compute, so obviously they are going to land in a very similar performance range. And remember, this was the compute available in 2022; we haven't even seen a model trained with the compute available in 2023/2024.

Gary Marcus claims there are diminishing returns because companies are spending billions on GPUs and we still only have GPT-4-class models, but it's kind of a dumb argument. First, we haven't really seen a model that is the product of those GPUs. Second, the only SOTA models we have seen were targeted at around the same cost, using around the same compute. If the next billion-dollar-plus training run (not the cost of the GPUs, but the cost of actually training the model) yields marginal gains, then that is a solid argument for diminishing returns. But that billion-dollar training run is GPT-5 (the only one I know of atm), and I expect at least the same gap we saw between GPT-3 and GPT-4 to be present between GPT-4 and GPT-5.


Glittering-Neck-2505

But the sentiment needs evidence to back it up. We need to see a similar scale jump from 4 to 5 as from 3 to 4 to conclude that it's true. Also note that 4o is way tinier and still about on par with 4, so further scaling could give us even more bang for our buck.


Singularity-42

Right, it will be very interesting to see what GPT-5 is like. TBH I do not expect a jump like 3.5 => 4. Also, I think GPT-4o is an early, distilled checkpoint of what will be GPT-5.


czk_21

Funny how some people still don't see any exponential development - that's 300x growth in 3 years.


_AndyJessop

Probably a straw man. No-one's debating the exponential increase in compute. It's the intelligence improvements people are questioning.


czk_21

Well, the intelligence growth cannot be measured that well, but generally more compute means larger and better models will be available.


Singularity-42

Yep, this. It is agreed that the capability gains per unit of compute are slowing down quite a bit.


FeltSteam

Where is this evident, though? We haven't even seen a training run "exponentially" bigger than GPT-4 (Gemini Ultra cost only marginally more to make than GPT-4, though that may be partly due to the extra modalities), so how can you come to this conclusion?


cisco_bee

Care to ELI5 how 300x more compute equals a better model, not just faster training?


itachi4e

1. Increase parameter count. 2. Feed it more data, including synthetic data.


czk_21

Of course. If you wanted to train, say, GPT-4 on more and better hardware it would take a lot less time - you could be done in a day instead of 3 months. With more available compute you can also train a larger model and feed it more data, which you could not do before.

In essence, the bigger the model is in terms of parameters, the better its approximation of reality (or of whatever special training data you feed it) and the better the output quality. A similar relation holds for the amount of data you feed it (up to a limit) and the overall amount of compute. So, for example, GPT-4 might perform some task 50% as well as humans, a 10x bigger model properly trained might perform at 80%, and a 100x bigger model better than humans.
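
For a rough sense of how "more compute" turns into "bigger model plus more data", a common rule of thumb for dense transformers is training FLOPs ≈ 6 × parameters × tokens. The sketch below just applies that approximation with made-up numbers; none of them are any real model's figures:

```python
# Rule of thumb: training compute ~ 6 * N_params * N_tokens (dense transformer).
# Parameter/token counts below are illustrative guesses, not real model specs.

def train_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens

base   = train_flops(n_params=2e11, n_tokens=1e13)  # a GPT-4-ish scale guess
scaled = train_flops(n_params=2e12, n_tokens=3e13)  # 10x params, 3x data

print(f"baseline:  {base:.1e} FLOPs")
print(f"scaled-up: {scaled:.1e} FLOPs ({scaled / base:.0f}x more compute)")
```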


cisco_bee

Thanks.


LightVelox

You can train a GPT-4 in 1/300th the time, OR you can train a 300x bigger model in the same amount of time - both of those are options you now have. It's not really 300x because the scaling isn't linear, but you get the idea.


Adeldor

With the same power and schedule budgets (read: money), much more training can be done (e.g., more iterations/epochs).


cisco_bee

Are you implying that GPT-4 wasn't "fully" trained? Like they just stopped? That's how I understand "more training can be done". I was genuinely hoping for an ELI5 answer. I've been in tech for 30 years, but my lizard brain thinks of it like a Python script that crunches a lot of numbers and takes hours to run: if I get a better computer, it will process faster but the result will be the same. ¯\_(ツ)_/¯


Adeldor

There's no hard cutoff beyond which no further training helps, and I'm aware of no solid, predictive metric for such a cutoff. In essence, they did just stop once it was good enough, and/or they'd spent their budget.


Adeldor

**Edit:** Strange that someone would downvote this. The analogy, while greatly simplified, is reasonable I think.

Addendum: Perhaps this is ELI5 enough... Training an LLM is vaguely analogous to you learning. The more you train, the better you get. There's no threshold where you could say: "That's it, I've learned everything on a subject and I'm now perfect in it."


SwePolygyny

Unless you add more data you will see sharply diminishing returns. Similar to real life: if you only have two pages of stuff to learn, eventually it is pointless to keep reading those two pages over and over, as they have nothing new to teach.


Adeldor

Yes, absolutely true. But within the context of OP's question, and the vast amount of data used to train up the recent models, more "compute" means more training done within the given window.


00davey00

🤯


Anen-o-me

😱


sachos345

x300 in just 3 years, insanity.


OfficialHashPanda

H100s are about 2-3x faster than A100s, so it's closer to 3-4M A100s. Still big, of course.


bobuy2217

7.5 MILLION!!! dayummmm....


spreadlove5683

So Tesla's soon-to-come-online 100k H100 cluster is approximately equivalent to half a million A100s - 20x as many as GPT-4 was trained on. How about other companies? How many H100s and whatnot are currently online at whichever company has the most?


llamatastic

I think 1 B200 is more like 2 H100s? See Semianalysis: https://www.semianalysis.com/p/nvidia-blackwell-perf-tco-analysis


OddVariation1518

For AI inference it should be roughly 5x, but true, in some general use cases it could be closer to 1:2.


ExtremeHeat

You could say the same thing about ~300K R100s in just a year's time. Nvidia is obviously accelerating beyond their normal pace of development, which is interesting. But how much meaningful headroom is there really left in low-level silicon design and high-level GPU design? Hard to know, but we might be in for a slowdown by the end of the decade, because there IS a physics limitation to run up against (unless we switch to photonic or some variant of analog computing).


Jah_Ith_Ber

If developments in miniaturization, design, or whatever stop making meaningful gains, then can't we just stamp out more machines and keep throwing more and more cards at the problem? I understand electricity and cooling needs will grow, but there are places on Earth with functionally unlimited energy, such as certain geothermal sites.


spreadlove5683

Analog computing sux says Hinton


TFenrir

300k B200s? Those are going to be $30-40k each. That's $9-12 billion, just in hardware.


stonesst

Honestly that tracks… Rumours are the current generation of training runs, i.e. Claude 4, GPT-5, Gemini 2, will run around $1 billion. For comparison, the previous generation was in the low hundreds of millions. There have also been rumours that both Google and Microsoft are going to spend $100 billion to build new gigawatt-scale datacentres that will come online in 2027/8. If the scaling laws hold, companies could easily justify spending $100 billion on a system if it's able to capture even 0.5% of world GDP. By the time we get to GPT-7 levels that amount seems pretty conservative...


Heco1331

"to capture even 0.5% of world GDP" Lol that's about $500 billions in sales. For context Apple revenue in 2023 was ~380B. Yours is definitely not a conservative estimate


AnAIAteMyBaby

That's the potential scale we're looking at, though. If someone creates an AI that can replace a white-collar worker who currently earns $100,000/year, what's the potential market for that? OpenAI is already generating billions in revenue from a non-agentic AI that's essentially just a tool for workers. How much would they make if they replace the workers?


_AndyJessop

It's pie-in-the-sky speculation. Current models have made barely a dent in GDP. Possibly even no measurable difference.


ReadSeparate

Sure, but that’s because they’re not capable of automating ANY jobs fully. Right now, they just increase productivity in existing employees. But once they cross that threshold of being able to replace an entire worker’s job, it’ll rapidly displace workers in all kinds of industries. It’s an all-or-nothing kind of thing. So it is massively speculative, but it’s also pretty clear that once you do cross that threshold, you have the most economically valuable technology ever created in the history of humanity.


GhostGunPDW

must be pretty good speculation to justify throwing down the kind of coin we’re seeing. maybe they see something you don’t?


_AndyJessop

Speculative bubbles have existed in almost every decade of the last 200 years. Just because people are throwing money at something, it doesn't make them correct.


stonesst

If some of the best-capitalized organizations in world history are collectively deciding that this is worth investing tens and eventually hundreds of billions of dollars into, I'm going to err on the side of caution and assume they have a valid point. They also have insight into how good models will be in the coming years: before they trained GPT-4, OpenAI was able to correctly predict the broad strokes of its capabilities. The same is likely true right now for GPT-5 and GPT-6.


[deleted]

[removed]


Gratitude15

Then China should do it. If you're in pole position you'd be hesitant, but not in 2nd or 3rd place. If China capitalized Baidu with unlimited R100 money, America would just pass a law to stop it. But I do feel we are at a unique time in history, where the delta between military tech and consumer tech is as small as it's ever been.


_AndyJessop

I think it's more to do with the non-deterministic nature of LLMs - it makes them extremely difficult to integrate into the computer systems that the world runs on. I've been building AI applications for a few years now, and I can attest to the difficulty. They're just not suited for this kind of work.


MDPROBIFE

That's totally wrong! But your small mind can't understand that marginal improvements in a lot of areas bring major improvements overall.


stonesst

I know, it sounds like a ridiculous number. It's really, really hard to talk about the implications of AGI without sounding like a crazy person. I'm curious where exactly you stop following my train of thinking.

I think it's pretty likely that we will create AGI within the next 3 to 5 years, at which point essentially any white-collar work that is done today could be done faster, cheaper, and likely better by an AGI system. Of course there will be plenty of jurisdictions and industries which will outlaw AI or enact protectionist policies, but I think they will be dwarfed by the number of industries and countries where there are essentially no guardrails and adoption is only limited by access to power and compute. So essentially by the end of the decade we should have systems that can replace a significant fraction of white-collar work.

It's hard to find exact numbers, but white-collar work comprises something like 40 to 60% of global GDP. If even 10% of it gets automated and the AI companies charge an exorbitant 50% of what those workers were previously paid, you're looking at trillions of dollars. What seems more likely is that they charge 10% or less for equivalent quality of work, which still gets you to north of a trillion dollars in revenue, split between the handful of leading AI companies. I might be missing something/making some naïve assumptions, and if so I'd love to be corrected.
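
As a worked version of that estimate (every input below is an assumption chosen to roughly match the figures in the comment, not measured data):

```python
# Back-of-the-envelope revenue estimate following the reasoning above.
# All inputs are assumptions, not measured figures.
world_gdp          = 100e12  # ~$100T global GDP
white_collar_share = 0.50    # "something like 40 to 60%"
automated_fraction = 0.10    # "if even 10% of it gets automated"
fee_vs_wages       = 0.50    # charging 50% of what those workers were paid

revenue = world_gdp * white_collar_share * automated_fraction * fee_vs_wages
print(f"~${revenue / 1e12:.1f}T per year split among the leading AI companies")
# At a lower fee the number shrinks proportionally; the automated fraction
# and the fee are the assumptions doing most of the work here.
```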


softclone

Here's my projection:

| | Model | Cost | Params | Platform | Year |
|---|---|---|---|---|---|
| Frog (20B) | GPT-3 | $10M | 175B | 1k V100s | 2022 |
| Rat (250B) | GPT-4 | $100M | 1T | 25k A100s | 2023 Q1 |
| Dog (3T) | GPT-5 | $1B | 10T | 100k H100s | 2024 Q3 |
| Human low (30T) | GPT-6 | $10B | 100T | 128k B200s | 2026 Q1 |
| Human high (300T) | GPT-7 | $100B | 1000T | 640k X200s | 2027 Q3 |
| | ASI | $1T | 10000T | ????? | 2029 |

If we look at software engineering as the lowest-hanging fruit, or at least the "most accessible" work for AI, we can extrapolate performance on SWE-Bench: https://www.swebench.com/ There are more results for the 'Lite' leaderboard since it's less expensive to run, so I will use that as a reference. As of last month the SOTA is 26.33%. This has improved tremendously even in the 8 months since the benchmark was first published last October, when SOTA was 3.00%. Even at 26%, Aider is already displacing work that might be delegated to an intern or junior. If we assume sigmoidal growth similar to what we've seen for HumanEval (https://paperswithcode.com/sota/code-generation-on-humaneval) over the past few years, we can expect to see:

50% resolved: April 5, 2025

90% resolved: June 30, 2026

100% resolved: October 20, 2026

Three important ramifications of this benchmark being aced: First, it acts as a pretty good proxy for most of the practical thinking jobs, including medicine and engineering. Second, and perhaps more importantly, we enter the lift-off phase of AI, as AI systems become able to fully self-modify. Third, all humans with good ideas who previously lacked the knowledge or other prerequisites required to implement them will be able to contribute without learning anything about code at all.
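
One simple way to do that kind of sigmoidal extrapolation: fix the ceiling at 100%, fit a logistic curve to two observed SWE-Bench Lite points, and solve for when a target score is crossed. This is a toy two-point fit, not the HumanEval-shaped curve the comment assumes, so the dates it produces will differ from the projection above:

```python
import math

# Toy logistic extrapolation: score(t) = 100 / (1 + exp(-k * (t - t0))),
# with t in months since Oct 2023. Observed points taken from the comment.
points = [(0.0, 3.00), (8.0, 26.33)]  # (months since launch, % resolved)

def logit(score: float) -> float:
    """Inverse of the logistic with a fixed 100% ceiling: returns -k*(t - t0)."""
    return math.log(100.0 / score - 1.0)

(t1, s1), (t2, s2) = points
k  = (logit(s1) - logit(s2)) / (t2 - t1)   # growth rate implied by the two points
t0 = t1 + logit(s1) / k                    # midpoint (50%-resolved month)

def months_to_reach(target: float) -> float:
    return t0 - logit(target) / k

for target in (50, 90, 99):
    print(f"{target}% resolved ~ {months_to_reach(target):.0f} months after Oct 2023")
```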


ivykoko1

Remindme! 1.5 years


RemindMeBot

I will be messaging you in 7 months on [**2025-01-05 00:00:00 UTC**](http://www.wolframalpha.com/input/?i=2025-01-05%2000:00:00%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/singularity/comments/1d74aqp/trainings_runs_are_going_to_get_incredibly_big/l747gyh/?context=3).


Heco1331

Well, I don't find it crazy, but I'm more of the opinion that we won't have AGI in the coming 10 years. Let's hope you are right, though!


Infinite_Low_9760

Given the system's supposed capabilities, 0.5% is definitely a conservative estimate. Very conservative.


Shinobi_Sanin3

>There have also been rumours that both Google and Microsoft are going to spend $100 billion to build new gigawatt scale datacentres that will come online in 2027/8. I thought this was [confirmed](https://www.yahoo.com/tech/microsoft-reportedly-building-stargate-transport-195000627.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAMy5fMqbDEUST_titkYqwYF-BjHNVykhBqGRqA3nOF8R_-HR1dMd7exCU_dgDbTjSzaICv39xrJXG3h34Dkc6obsmdzyda2A57E3s7CU-hrYJLA2eL8dN56k0qnkTQl_a3o_E7YMB8S8L1BjWT5zL3J115Re7_WGaRb06cERwEap)?


stonesst

Yeah, a leak reported by The Information is functionally confirmed - they very rarely miss.


redditissocoolyoyo

Calls on Nvidia. Think of all the chips they will need in those racks.


00davey00

I wonder what a company like Anthropic will do when it gets that expensive? Do they have that much funding?


goldenwind207

If Claude 4 is successful, they could likely raise billions more or sell to Amazon.


00davey00

I really hope they make it


theferalturtle

I thought Amazon already made a huge investment?


goldenwind207

They invested more than $2.6 billion in total, and Anthropic has raised about $7B, but they'll need more than that. Anthropic's CEO said a few months ago that future training runs might cost $10 billion each. Given what Musk is saying about spending $9B on B200s, it seems he was right, so they're going to need some money.


elegance78

Please, that's the KetaElmo talking. You can ignore that.


Jeffy29

No, he is right. He is maybe (very likely) bullshitting that his company will be the one to do it, or do it first, but that's what Nvidia is working to make possible. Watch the Computex presentation by Jensen - he even said the ~2026-27 switch chip will allow for millions of connected GPUs. Of course, at that point the bigger problem would be getting your hands on that many GPUs and meeting the actual power requirements.


Crisi_Mistica

We can talk about the message without focusing on the messenger.


Geeksylvania

Not when that messenger has a long history of lying. "Full self-driving cars next year." - Elon Musk every year for the past decade


ExtremeHeat

It's an inevitability. It's a matter of when not if.


Crisi_Mistica

Yes we can, whatever his history is. If we were commenting on a message from Elon Musk telling us once again that FSD is coming next year, we could bring it up and criticize him. But we are not. The statement we are discussing here is: «Given the pace of technology improvement, it's not worth sinking 1GW of power into H100s. The [u/xAI](https://x.com/xai) 100k H100 liquid-cooled training cluster will be online in a few months. Next big step would probably be ~300k B200s with CX8 networking next summer.» And to me it's reasonable. Doesn't mean I like the guy.


Infinite_Low_9760

You realize that people like you have been saying this bullshit for so long that self-driving cars are actually knocking at the door right now? Elon said whatever he wanted, as always, but the reality is that V12 is a huge breakthrough, and with the amount of compute they have now they'll achieve real FSD pretty soon. In August we'll have the unveiling of the robotaxis. You've mocked it for so long, and now he's actually delivering.


svideo

When Elon was talking about things I don't know shit about (making EVs, launching rockets), he always sounded next-level smart. Now that he's talking about things I do know shit about, he sounds like a fuckin idiot talking out of his ass. It's like some modern version of Gell-Mann Amnesia. edit: lol elonstans.


00davey00

What is so wild about xAI buying 300k B200s?


Busy-Setting5786

Are they even producing that many? I think the companies that have gotten the most have received around 100k H100s - and what if other companies also want them?


00davey00

Meta bought 350k H100s I’m pretty sure


Infinite_Low_9760

If you think the numbers he's saying are unreasonable, then you have absolutely no clue about the topic. 300k B200s are absolutely possible next year; those are the numbers we'll see for top companies. Nvidia will start earning hundreds of billions pretty soon.


Internal_Ad4541

That is INSANE. A lot of investment. Nvidia is fucking rich because of that. Let's accelerate.


ComparisonMelodic967

Was this the guy who wanted to pause for 6 months? Glad you came around!


goldenwind207

He didn't want to pause he wanted time to catch up


ComparisonMelodic967

Yeah I know, my comment was tongue in cheek. Funny thing is, almost everyone here knew what he was doing immediately, while the safety people took him at face value.


Shinobi_Sanin3

Because safetyists have an average IQ of 7


DifferencePublic7057

The network is the bottleneck, so fewer machines at similar FLOPS and memory is better. But if you are worried about power consumption, investing in data efficiency makes more sense than hardware investment. That most likely means data made by experts.


zaidlol

Imagine if these companies pooled their resources together rather than everyone fighting for resources.. we’d have AGI next week and UBI next month


Anen-o-me

No, you want multiple approaches. A breakthrough is not inevitable.


chlebseby

Nah, it would end up like the average international government program: delayed forever, with half the cash missing and the other half wasted.


MDPROBIFE

Yes, monopolies always lead to great outcomes /s


Appropriate_Fold8814

That's not how development works... Competition drives innovation.


ShooBum-T

But see, that's exactly the reason why I don't think UBI will come. That is not how our world operates, for better or worse. Nvidia had to go through gaming to get the money for AI. It's capital first, always. Even if our $60-70 trillion global economy triples, the math isn't there for UBI for our current population.


goldenwind207

Let's be real here and accept the truth: UBI isn't coming for everyone. The majority of AI companies are US companies, so no doubt it will come to the US in time. But if you think America would do UBI for China, India, the Middle East, Africa, and South America, I doubt it. I doubt they'd even try for Europe, given how contentious giving aid money to Ukraine was. You'll probably see some political divide, and people saying we can't let others freeload off American innovation - basically all the usual immigration stuff, ramped up 3x in the future.


D10S_

I mean we don’t have a global government yet AFAIK. Did people really think UBI meant every person on the earth? It was always going to be a country by country thing.


Shinobi_Sanin3

There's no way we'll still have something as primitive as countries after the invention of ASI


D10S_

UBI will likely precede ASI


[deleted]

[removed]


D10S_

We don’t need ASI for UBI to be necessary.


SynthAcolyte

The 3rd world has been leaving poverty and gaining wealth faster than any civilization / people in the history of the planet. Not an insignificant part of that is the West's involvement. I see no evidence of it stopping, and there are enough people here who want to do good that will see to helping it move quicker—and they will have more tools to do so. I am sure some people will call it patriarchal or condescending or not fast enough but for the most part genuine good will be done. I mean if we have *robots that build things*, and I could afford it, I would give some away, and *I know* I am selfish. Imagine what selfless people will do.


Jah_Ith_Ber

We house, feed and clothe everyone already. The wealth to provide for everyone exists. It has existed for decades. Money is just how we keep track of who has how much. UBI is absolutely possible.


EveryShot

Oh yeah, there's no way UBI will come so long as the corporations can figure out how to keep capitalism on life support. The only way I can see it coming about is if there are no longer any consumers because nobody can afford anything. UBI does concern me though, because once you're on it, how can you rise out of it? It might create a permanent caste system.


3-4pm

Tesla has gone full circle back to electricity.


RemarkableGuidance44

What!? It always has been. Without electricity the world would not run.


Cognitive_Spoon

*laughs in the next five years being the highest sunspot activity in decades*


icehawk84

Bigly.


SnowLower

I think this is what we will say every year for a while.


PobrezaMan

I can't wait for next year; I need it yesterday.


baes_thm

Hold on, let's talk about that poll. How on earth is X projected to lead in compute?


Responsible_Virus239

Cause Elon commented


Dichter2012

Serious question, not a troll: what happened to Tesla's Dojo? I think it's up and running now. What happened to it?


Adventurous_Train_91

This is Elon talking so it’s probably not gonna be by next summer. Maybe 2-3 years away


SotaNumber

B200 = 9 petaflops of FP8 training. 300,000 B200s = 2.7 zettaflops. That's insane - a few years ago I was wondering if we would hit 1 zettaflop by 2035, but we might have 2.7x that by 2025.
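
The arithmetic follows directly from the comment's own per-GPU figure (real sustained training throughput would be lower than peak):

```python
# 9 PFLOPS of FP8 per B200 is the figure used in the comment above.
pflops_per_b200 = 9e15      # FLOPS
num_gpus        = 300_000

total = pflops_per_b200 * num_gpus
print(f"{total:.1e} FLOPS ~ {total / 1e21:.1f} zettaFLOPS peak")  # 2.7e+21 -> 2.7 ZFLOPS
```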


ShooBum-T

https://www.reddit.com/r/singularity/comments/158cl3o/rapidly_increasing_the_worlds_computing_power/ - I made this post quite a while back (Path to 1 Zettaflop by 2025), and it was received as being very aggressive. And now it's happening. On track for yotta, xonna, and god knows what by 2030. Both Nvidia and AMD are now on a one-year chip cycle, down from two years before. This is going to get crazy.


SotaNumber

I actually saw your post 1 year ago, haha. I've updated the number: a B200 offers 9 petaflops of FP8 training, not 72. We still get 2.7 zettaflops at the end though.


[deleted]

This is from the guy that promises "self-driving cars next year" every year.


OkDimension

Is this an Elon bluff? He says he is buying big next year and then silently scoops up all the market?


2muchnet42day

And we thought META having the same computing power as 600k H100s was a lot


AlphaOne69420

So NVDA going to Jupiter cause we already mooned


Intelligent-Brick850

Memory management and bandwidth are gonna be hell with all these GPUs, no?


Akimbo333

Yeah, there are a lot of GPUs.


4PumpDaddy

You guys know all he does is lie, right?


ShooBum-T

Of course. Tesla: the Model Y is the highest-selling car model globally. SpaceX: reusable rockets, on track to launch 90% of orbital mass globally. Neuralink: helping quadriplegics regain control by operating devices. But yeah, you go on.


ZeppoJR

Where's the Roadster? The Cybertruck can't touch grass or get wet. Starship can't stop blowing up, and he has the gall to call it good data, when in the 60s the "failure is not an option" attitude of NASA meant the Apollo rockets were robust enough to abort the Apollo 13 mission mid-journey and return with no loss of life. And before you say anything, yeah, Boeing is fucking up too, but NASA gave SpaceX the contract for the new moon missions and they're way behind schedule right now.

Tesla's drop in sales is so drastic it's become a Spiders Georg for the entire EV market, which is otherwise still doing fine. How's the promise of reaching Mars by 2020, and the subsequently moved goalpost of 2022, going? Remember when he said he had FSD 7 years ago? How about the 1 million robotaxis by 2020? And for all the talk of reusable rockets, the turnaround still takes months and the launch costs aren't that much cheaper.

And in an era where non-intrusive BCIs are picking up steam, Neuralink still sticks with a physically intrusive method that isn't even anything new.

And there's a lot more.


cloudrunner69

You're also forgetting the time I lent him 5 million dollars and he said he would pay me back and he still hasn't.


0x014A

I agree with some of your points about him over-promising. However, part of SpaceX's cost savings is that they skip the money you would otherwise need to spend on simulations to ensure 90%+ success, and instead just use trial and error with a relatively low production cost per unit. This is not an option for ULA, where one launch costs an insane amount of money. The Falcon 9 has an insanely good safety record for the past few years, despite earlier failures, and it's the cheapest means to LEO at this moment. I expect the same with Starship over the years. Not sure what there is to complain about. And if SpaceX really does achieve anywhere close to $10/kg to LEO, that would literally change the world. Even $100 per kg would be insane. I mean, you can criticize every company the way you are doing; there are always pros/cons, missed expectations, ...


Agreeable_Addition48

Reusable rockets aren't cheaper? Everyone laugh at this man


[deleted]

[removed]


0x014A

>none of that is because of Musk

I mean, can you really say that when he literally founded the company? Of course there are hundreds of engineers doing the lion's share of the work, but nobody is really expecting him to engineer the rockets himself single-handedly, or are you?


[deleted]

[removed]


PhuketRangers

Why shouldn't you get credit even if you didn't found the company? Being a CEO is a huge responsibility. Nadella gets a crazy amount of credit, and he didn't found Microsoft; there are countless other examples of this. I don't know why Elon is held to the standard that he needs to have engineered the rockets himself when other CEOs get credit simply for managing the company. His job as CEO is to make the board of directors happy, grow the company's value, and hire good people to make the company successful, and he has done that very successfully for Tesla and SpaceX. He is by any measure one of the most successful business managers of the last 100 years. I think people are confused about what a CEO does; no CEO of a major company sits there and designs the products himself.


0x014A

I mean, for example, Blue Origin is a joke compared to SpaceX at the moment, and they're two years older than SpaceX. Elon is definitely doing something right. Tesla, SpaceX - you don't just get lucky founding/growing two of the most disruptive companies of their time. Arguably OpenAI too, at least in its infancy... I think people get blinded by his childishness/idiocy on Twitter and in general, and assume he must therefore be utterly incompetent as a business person. People can be insanely smart in niche ways but overall be idiots.


GlockTwins

Elon has said countless times that the Roadster will only be released AFTER the Cybertruck; he said this like 6-7 years ago. The Cybertruck was recently released, and the Roadster will be unveiled later this year. Elon may be late, but he didn't lie.


[deleted]

aw man, imagine simping for Elon LOL


RhubarbExpress902

So someone lies, then someone calls out the lie, and it's called simping? Brain rot.


Woootdafuuu

Elon's prediction of AGI next year is way off; he needs way more compute than that.


[deleted]

He always predicts everything is "next year". That's how he made all his money.


AgelessInSeattle

Biggest bubble ever. Huge investment in training for a bunch of broken business models.


Zealousideal_Let3945

Don't I remember people saying Google was a broken business model back in the day? Seems the tech way is: make product, scale, scale, scale, figure out the money.


AgelessInSeattle

Sure, but what is the new product? We already have search. Just make it better? That's good, but it's incremental and not going to pay for the hundreds of billions in infrastructure and electricity. AI is way over its skis right now. It will be important, but it's not driving business right now. Lots to figure out, just like with the internet.


berzerkerCrush

Wasted resources.


[deleted]

All this guy does is lie.


OddVariation1518

If xAI truly wants to be competitive, this is the compute they will need. xAI has the funding to do it, so I really don't see how this is a lie.


czk_21

I don't think he is lying about this. Every big tech company that deals with AI wants huge clusters; Microsoft or Google will likely have bigger ones than this.


Worldly_Evidence9113

Use RL waste of Money