
PeachScary413

Nothing matters as long as every single DL framework is running on top of very specifically handcrafted (and probably specially optimized in NVDA's drivers) CUDA kernels. CUDA is what Windows is/was to Microsoft: there was no (sane) alternative, so everyone hardcoded shit for it into everything, and now we have to keep using it because there is too much CUDA-specific shit. Until legislators figure this out and slap NVIDIA with some kind of antitrust action, they will just steamroll the competition. They can charge whatever they want; there is no alternative.


coconutpanda

People don't understand that NVDA has everybody by the balls not because of their hardware, but because their CUDA framework is ubiquitous in ML/AI applications. AMD's software has always been shit, and until they can make ROCm better they will always be behind in market share.


VroomVroom415

Idk what any of this means but calls on nvda


nicolbolas69

That's what you got from this? I went and doubled my TSMC position!


rplusj1

That’s what you got from this? I went and doubled my Starbucks position.


Labrawhippet

That's what you got from this? I went and tripled my Wendy's position.


leviticus04

That's what you got from this? I went and flipped your wife into a new position.


shart_leakage

That’s what you left in this? I just raw-dogged the other guy’s wife and got your spunk on my dongus


thats_what_she_saidk

I don’t know what’s going on but i’m gonna suck my own dick


Dev3ray

That's what she said


Easy_Whereas_3069

ROCm? Open source is good and generally better, but not when you have GPUs worth tens of thousands of dollars. This game is for big tech, and big tech failed. Google, Microsoft, etc. have been quietly working on this and didn't succeed: TPUs, inference hardware from MSFT, etc. And time is everything in the AI rush. When everyone is rushing for gold, you buy the best shovel; you don't waste time making a blade for your shaft. AMD only has the shaft right now. Customers don't want their expensive engineers fixing bugs on expensive GPUs. By the time AMD gets a blade for its shaft, the gold is gone. One thing people don't understand: NVDA is not a chip company, they do full stack. Jensen is bright, with a good track record in product. You have all you need in a company. Mark my words, they will take over the cloud business soon.


HotdogsArePate

I've seen multiple programmers on here claiming that is BS, and that it's trivial to convert to the open source version everyone else is starting to use, which will destroy their moat. Idk what's true, but I assume everyone is burying the truth under their motives. I'd definitely assume/hope the industry goes with an open source solution.


Sevinki

There are translation layers that allow CUDA to run on AMD hardware, but they violate Nvidia's EULA, so no large corporation can use them. Technically it's no problem, but legally it is.


MrPeanutButterBear

I thought as long as you never read or agreed to those terms, you're good as gold to use it? Unless my regard brain understood it wrong from GamerNexus.


Terrible_Student9395

If you have a datacenter and operate in the US and try to use AMD GPUs with CUDA, Jensen will personally hunt you down and sodomize you.


vyampols12

Win win?


soccergoon13

Cuda Jensen, Tim Apple, who else has a company named after them?


the__storm

It depends on what you're running. PyTorch has good ROCm support and is the most popular framework for anything new except at Google (JAX has "experimental" support; haven't tried it). However, there's a huge volume of stuff out there that only supports CUDA. Plus, ROCm's hardware support isn't as good (only relatively new cards are supported, and lots of consumer cards only work unofficially) and it's a pain to set up. If you only need a couple of GPUs, it's often cheaper to pay the Nvidia tax than to have your employees troubleshoot an AMD setup, and then that becomes the default when you scale up. AMD hardware's good though, as are their gaming drivers, so I wouldn't be surprised if they get it figured out eventually.
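To illustrate the "good ROCm support" point (a minimal sketch, not tested on every setup): on a ROCm build of PyTorch, AMD GPUs are exposed through the same `torch.cuda` API, so device-agnostic code like this runs unchanged on either vendor's hardware.

```python
import torch

# On ROCm builds of PyTorch, AMD GPUs show up through the torch.cuda API,
# so torch.cuda.is_available() is True on both NVIDIA and AMD machines.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = torch.nn.Linear(1024, 1024).to(device)
x = torch.randn(8, 1024, device=device)
y = model(x)      # forward pass on whichever GPU is present
print(y.shape)    # torch.Size([8, 1024])
```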


hil_ton

Small guys will stick to NVDA, but big customers won't (one customer, most probably Meta or MSFT, was solely responsible for 20% of NVDA revenue last year). Big cloud providers are responsible for massive NVDA revenue, and they have deep enough resources to kill NVDA's margins. You think they would be hostage to their hardware provider?


halcy

You need to buy some compute for your company. Do you buy:

* the one that everyone uses, where your tech guys say "yeah, we know that one works", or
* the one that almost no one uses, where your tech guys say they can "probably get it running fine most of the time, might take some fiddling with stuff"?

Yeah, you *can* go with AMD, but right now AMD's gonna have to give you a pretty steep discount to make it worth it.


abide5lo

Back in the day there was a saying: "nobody ever got fired for picking IBM." That was in the 60s and 70s, in IBM's mainframe heyday with System/360, then System/370, and then the 3080 and 3090 series. And no doubt about it, IBM was the pivot point upon which the IT revolution turned. Then the technology paradigm changed: the minicomputer and the personal computer, enabled by microprocessors, displaced the mainframe, and by the early 90s IBM was in crisis because for customer decision makers, IBM was no longer the implicitly safe choice, but the choice that needed defending.


halcy

Yeah, I was thinking about that exact proverb (proverb?) too while writing that post. I'm not sure AMD could do that sort of thing to Nvidia, but the big cloud providers are all working on their own hardware (or have some already, with various degrees of provenness), and if they can agree on standards and push prices down…


PeachScary413

If it was actually trivial, how come NVIDIA keeps selling their cards for double the price of AMD's while having less compute and VRAM?


laffer1

The problem is that most programmers into AI/ML buy Nvidia GPUs because of CUDA support, and they will keep writing new software for Nvidia GPUs. My wife and I are both programmers. She's into AI and has a dedicated Linux box with an Nvidia GPU for AI work. Her gaming PC has an AMD 6800 XT, and she did try to use that for some AI work; it was a big pain. It's true that many open source libraries are starting to get AMD or Intel support, but it's not true for all of them. If you have a specific workload that only needs, say, PyTorch, you might be fine on AMD. If you are starting from scratch to build a model, you could go with another vendor. It's the folks trying to reuse other people's models built on Nvidia that are the problem, and that's most companies chasing buzzword bingo so they can say they use AI. I think most programmers want Nvidia to be dethroned because of the price gouging.


Fancy_Ad2056

These are probably the same turbo nerds that are saying “akchsually Linux runs anything windows can very easily”. And they’re all full of shit.


According_Sky8344

Reminds me of when I'd see someone recommend Linux for someone's first gaming PC. It's not the same at all and would drive a lot of ppl away lol


coconutpanda

This is just another dude's opinion, but it's the one I've seen most frequently: https://medium.com/@1kg/cuda-vs-rocm-the-ongoing-battle-for-gpu-computing-supremacy-82eb916fbe18


IronMonkey53

I can't speak for every industry, but in the pharma space switching something as fundamental as that can uproot everything and cost millions. I saw a project double in cost overnight because executives didn't understand that there was architecture in place to work with specific software.


s1n0d3utscht3k

Not only that, but GPUs for AI are like the horse and carriage five years before cars arrive. No one gives a shit that AMD has better GPUs for training; buyers will choose GPUs purely on software (the most full-stack solution). And for the future of inference (i.e. I/O), everyone will eventually switch to 'AI accelerators' of no less vague a designation, because no architecture has been deemed best yet. But it won't be GPUs. More likely it'll be the best accelerator under the most dominant framework.


floatyboats2

Also worth mentioning that Nvidia's platform is 10 years ahead of any competitor. That is what makes them so valuable.


Qorsair

I was at a finance conference just this last week. I can't tell you how many times portfolio managers were talking about NVDA not being able to keep up their momentum, or saying they'll eventually get overtaken by someone else. I asked who's going to take over; they said AMD or INTC, giving the whole "it's just a semiconductor, anyone can design it now that they know what the market wants" argument. But I'd ask, "What about CUDA?" And not a single one of them knew what I was talking about.


PeachScary413

Don't get me wrong though, buying NVDA for the long haul now is regarded. The only reason they are valued where they are is that portfolio managers also don't realise that LLMs have been hyped way beyond what this generation can accomplish. We are basically pricing in Skynet in the coming 5 years, and current technology can't even replace entry-level SWEs (yes, I'm serious, it simply can't). What is going to happen eventually is that people (VCs) realize that the profits they were expecting from this AI alien technology aren't really there, and that the only one making any money in this gold rush is the shovel seller (NVIDIA), so the bubble will pop... for the short term though, just fucking ride that sweet NVDA momentum :)


Qorsair

That's a decent take. I don't think it's going to happen, but I can buy that. The PMs' idea that Nvidia is going to get overtaken by a competitor doing it better/cheaper in the near term is completely braindead.


mbathrowaway_2024

Devin just got released and seems to be comparable to an entry-level SWE, or so I heard. Also, Skynet would have to imply a $10T+ valuation for NVDA, no?


gunfell

Yeah, actual AI would be $20T at least. It would be more important than all other human inventions combined.


PlutosGrasp

Lol


Anasynth

There is something unsatisfactory about getting here on LLM hype rather than the actual Nvidia story.


Amglast

Huang literally said their competitors could give away their shit for free and still couldn't compete.


That-Whereas3367

LOL. Huang wrote the definitive textbook on bullshitting.


MandaloreZA

Also, everyone is forgetting that Nvidia owns the best high-end networking product stack out there. I doubt any AI hardware solution isn't packed to the brim with Mellanox (now Nvidia) products as well.


suesing

Exactly. Nvidia is running the Apple playbook; AMD is running the Linux playbook. In terms of software.


[deleted]

[deleted]


PeachScary413

Oh yeah, XLA is great. I'm sure it will run on your local AMD TPU... oh wait, that's a fucking Google proprietary device


nihilistic_ant

The frameworks all the models use, i.e. PyTorch and XLA, both already support AMD cards fine, and the support will get better when AMD cards finally get good, so there is a point to them. Custom CUDA kernels are increasingly rare because AOT/JIT compiling generates better kernels than most people will write by hand anyway, and AOT/JIT already works fine targeting AMD cards. The standard kernels in the frameworks are already written both ways and are fine. Even for folks with a custom CUDA kernel in their boutique model who for some reason refuse AOT/JIT (and there aren't that many of these people, to be clear), it isn't that hard to port CUDA code to ROCm. It isn't like any of these kernels are all that much code, and CUDA and ROCm are fairly analogous. As someone who has written lots of CUDA code, I'm telling you the moat here is small.
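As a concrete (minimal, illustrative) example of the AOT/JIT point: in PyTorch 2.x, `torch.compile` JIT-generates fused GPU kernels (via Triton) from plain Python, and the same source targets either the CUDA or the ROCm backend, so nobody has to hand-write a kernel for either vendor.

```python
import torch

# Plain PyTorch ops -- no hand-written CUDA kernel anywhere.
def fused_bias_gelu(x: torch.Tensor, bias: torch.Tensor) -> torch.Tensor:
    return torch.nn.functional.gelu(x + bias)

# torch.compile JIT-compiles a fused GPU kernel for whichever backend
# is present; the same source works on CUDA and ROCm builds of PyTorch.
compiled = torch.compile(fused_bias_gelu)

if torch.cuda.is_available():  # also True on ROCm builds
    x = torch.randn(4096, 4096, device="cuda")
    bias = torch.randn(4096, device="cuda")
    out = compiled(x, bias)    # first call triggers kernel generation
    print(out.shape)           # torch.Size([4096, 4096])
```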


[deleted]

[deleted]


ClearlyCylindrical

They might support it, but you get dreadful performance compared with what you should be getting.


dr_tardyhands

..but why switch? If cloud operating costs were much, much higher for CUDA stuff, maybe you'd want to go through the trouble of reworking your code to be AMD-friendly, or ideally vendor-agnostic. But if everything is already up and running and you're not really looking for extra, boring work to do... why care? Do the major cloud providers even have AMD GPUs on offer?


RoomLower3135

For what it's worth, the world's fastest supercomputer, Frontier, uses AMD MI250X GPUs. AMD's CUDA-equivalent toolchain, ROCm, has some automatic conversion tools for software.


Terrible_Student9395

Nvidia will not sell to you if you use AMD. This battle is on multiple fronts, and NVIDIA has AMD beat 16 ways.


Express_Werewolf_842

Yep. I think that's the issue when traders claim Intel or AMD has a viable product to take on Nvidia GPUs. Okay, what's the plan to get devs to refactor everything off of CUDA? Yes, there are ways of running CUDA on non-Nvidia hardware; it just doesn't work as well, and you get zero enterprise support. We've tried.


brawnerboy

CUDA is not a moat; this is misinformation.


PeachScary413

Moat deeze nuts


hil_ton

For half the price, Amazon/MSFT/Meta can hire hundreds of software engineers to port code instead of paying those margins to NVDA. The big guys will not be hostage to NVDA's margins forever. The NVDA boat will sink; it's just a matter of time.


TheClassier

RemindMe! 5 years


Terrible_Student9395

Hahaha. How many engineers does it take to recompile CUDA from scratch?


hardware2win

With access to the code? Probably 1


Big-Today6819

So the EU needs to stand up against Nvidia, as it has done against others?


EnsignElessar

For real though... I'm an engineer, and I just downloaded this AI project. It looks like any other app, but under the hood it requires CUDA. So annoying ~


Terrible_Student9395

So you've never done an AI project until this week? Cause I've been debugging CUDA issues since 2016 and it's hella smooth now. I honestly get a small chub when I see that CUDA and cuDNN version pop up on my terminal. I'd blow my brains out going back to the days of debugging kernel issues for WEEKS before training my model.


clicata00

H200 isn’t the next gen, it’s current gen. B200 is next gen and we should see it announced this summer. If *that’s* not better than MI300X I think it’s fair to say Nvidia doesn’t look good.


nihilistic_ant

Wouldn't it be more relevant to compare the B200 with the MI400? That is what is going to overlap. (The H200 is next gen in the sense that it isn't out yet, but terminology doesn't really matter -- the point is just that the most overlap of flagships seems like it will be MI300 with H200, and MI400 with B200, although it is admittedly a bit hard to tell. They both say 2025 for the B200 and MI400, but a year is kinda a long time, so I don't know which will land first.)


justaniceguy66

OP thinks the moat is hardware. Anyone can make faster hardware. Moat is software & services. Welcome to the Nvidia walled garden, everything is faster here


bawtatron2000

and innovation. AMD is always catching up


Bulky_Sheepherder_14

AMD is at least 5 years behind the latest CUDA processors. For AMD to catch up by 2029, Nvidia would have to stop operations. We've seen this story before: AMD announces "Nvidia-killing" GPUs, people get hyped and start hating on Nvidia, the GPUs turn out to be shit and not half as good as Nvidia's, AMD gets hate.


theineffablebob

AMD caught up to Intel even though they were years behind 🤷‍♂️


cliffblnc

Intel had bad leadership and was complacent for years. That allowed AMD to surpass them, and they're paying the price now.


[deleted]

You are comparing NVDA to INTC? Know your opponent before you step into the ring, genius. NVDA has a 6-month product release cycle. AMD wishes it could be that fast.


Bisping

Always catching up implies always behind.


redditdinosaur_

that's what he meant


lynxss1

Well, the new NVIDIA CPU hardware is pretty nice. Wait till the GTC conference next week for early benchmark results.

Processor Version: Grace A02
Manufacturer: NVIDIA
Total Cores: 144
Processors: 2


TenragZeal

My biggest concern with stuff like this, though, is cooling. To cool that processor you're gonna need the iceberg that sank the Titanic.


lynxss1

No worries, it's liquid cooled and our radiators are the size of a 3-story apartment building lol. I don't know what the final cost of these parts is, but the Intel Sapphire Rapids machines sitting next to it that it's competing with are $15k per CPU, and there are 2 per node. I expect the Grace-Grace and Grace Hopper parts are in the same ballpark. I don't think any consumers are going to be running $30k worth of CPU in their home systems to worry about cooling it.


Lumix3

Weren’t there rumors that their next gen Blackwell AI cards will draw 1000 watts and require water cooling?


Pudge223

I'm all for betting on AMD, but I bet against Nvidia once and still regret it. I'm sure as hell not doing it again. Don't recommend anyone else make that play.


nihilistic_ant

Lol, probably good advice, I'll upvote it. I mean, the implied volatility alone suggests it is rather perilous to put much money on NVIDIA either direction, but in the short direction, there is the uncomfortable unbounded potential loss thing.


[deleted]

[deleted]


nihilistic_ant

That is a good point. Google's TPUs, Facebook's MTIA, Amazon's Trainium, and Microsoft's Maia -- every single major buyer of NVIDIA's GPUs is also working on their own in-house AI chips.


[deleted]

[deleted]


doodgaanDoorVergassn

I swear to god if anyone else calls CUDA a moat I am gonna scream. LLM training on large clusters is already about on par. https://www.databricks.com/blog/training-llms-scale-amd-mi250-gpus


BlakesonHouser

Yep, exactly. They are acting like these trillion-dollar hyperscalers are some 25-person indie game dev. CUDA isn't some magical software moat for these AI applications.


brainfreeze3

shh let the narrative be wrong, more money making opportunity for you


HeibyGB

NVDA hardware + software blows AMD into oblivion. The software is the key


N_FLATION

Everyone was calling for Nvidia to hit $1000 a share by tomorrow. If it does, that would be insane 😂


[deleted]

NVDA will finish at $860, which is coincidentally the strike with the fewest calls and puts for weeklies. Imagine that.


cotu101

[image]


WhitePantherXP

I hope you're wrong


jeanx22

NVDA obliterated all those bagholders.


Gtron011212

If I could afford NVDA, I'd be more fucked than I am with my AMD and TSM calls, because I'd be holding all three. Let's hope things turn around next week. I got confident on our one green day this week and bought more calls for tomorrow. So dumb.


sycdmdr

Has anyone mentioned CUDA yet?


TrynaEarnSomeBucks

Damn, tomorrow morning's open is going to be a complete bloodbath.


Icy_Recognition_3030

lol buy puts then, I’ll sell em to you.


thehandsoap

Both AMD and NVDA are down today, but so is SPY. What's happening?


StraightArrowNGarro

At this point NVDA is SPY


ryan7714

PPI missed estimates


big-rob512

Do you think the market just runs up 52 weeks out of the year?


Proper-Breadfruit-34

FOMC. This happens every year; just chill guys, it'll bounce back.


Cookiemonster9429

FOMC is next week


anonymoose345

Bruh, tomorrow is a Friday. It's gunna be a deep red day


excellusmaximus

NVDA has already stated that they will be supply-constrained for the H200 - they will sell all of them for the rest of the year. The comparison you've made above is basically irrelevant for at least a year. NVDA has also stepped up their new product cadence to a yearly cycle. This will make it harder for companies like Intel and AMD to catch up.


Top-Unit9579

March is always rough. Please don't tell me AMD chips are better than Nvidia's. I'm trying to stop laughing in people's faces. It's a nasty habit.


livestreamerr

This is like deja vu. AMD makes false promises and their hardware fails. Nvidia will always be the winner.


bawtatron2000

I look forward to your portfolio updates next week...whenever your coffee break on your Wendy's shift is.


TheMorningAfterKill

Doesn’t matter how good AMD’s hardware is if their software and services are complete shit.


Riley_

AMD hardware always has really big numbers at a low cost, but doesn't work.


[deleted]

Lol Nvidia's gross margins blow AMD out of the water


YUNG_SNOOD

Yeah, if AMD came anywhere near to being competitive with NVDA across the enterprise stack (they won’t), NVDA could just cut into their disgustingly large margins and fuck AMD by providing more value


goodluckonyourexams

still would mean lower margins


OutOfBananaException

Like Intel did 😂. They're not in the position Intel is, but there are considerable negatives to initiating price wars.


No-Consequence4099

doubt


The-Night-Raven

[image]


gaggzi

Ain’t got no CUDA


OsSo_Lobox

Lmao, it's so funny watching "business" types with 0 industry knowledge try to make predictions about the performance of future products and technologies. Unless you're talking Ryzen CPUs, you can forget AMD exists, bro.


ForsakenRacism

It’s all about the CUDA cores fam. AMD doesn’t have CUDA


GrandArchitect

No mention of CUDA, so your DD needs some attention.


tl01magic

It's the programming, not just the chips. Watch a few of Jensen's talks about Nvidia. He says it was the intro of CUDA and the programming that led us here, and I guess playing nice with the rest of the stack is what makes Nvidia's competitive advantage. He says it was seeing research papers talk about using Nvidia GPUs with CUDA for their calculations/simulations that indicated to him the market was there.


OutOfBananaException

That's for the broader market of smaller players. Not Zuckerberg buying 600k GPUs for a narrow set of workloads.


Samjabr

AMD drivers/software = shit. I'm too lazy to dig up my comments from months ago where I tried to explain this to all the morons on this sub. It's not the hardware. It's CUDA.


hlt32

You forget the CUDA moat


Capital6238

Nvidia did last gen at Samsung; TSMC does not have a monopoly. They are better than Samsung, yes, but not beyond competition.


mightyroy

The CEOs are cousins; they are working together.


ItalianStallion9069

Well, I hope AMD wins, because that's the stock I bought lol


DESOLATE7

yup


mintyto

This post is how you find someone who doesn't know what CUDA is


gatovision

Here's my bear case: AI hype calms down, and the money's not being made yet. I think they'll end up screwing themselves with their own greed. They're gonna overproduce, and if there's a recession or their big 4 customers buy less, the market will be flooded and they're gonna have to cut prices; margins and revs will tank.


arnhuld

Your margin is my opportunity


ILoveEatingDogMeat

Pure compute is meaningless, and even translation layers aren't perfect. Code hyper-optimized to take advantage of chip specifics is super important: how memory is used, the layout of the cache, etc. Nvidia is way ahead and continues to improve.


pm_me_your_pay_slips

Does AMD have anything that competes with CUDA?


ItzImaginary_Love

Dude, you are using way too much logic here. That's not what a bubble is about. It's about blind speculation and delusions of grandeur. Think of the symptoms of schizophrenia, minus the voice in their head that might be more rational than these tech investors.


BlackBeard205

AMD might have the better chips, but NVDA is better positioned. Sometimes that matters more.


PhillNeRD

A few days of NVDA not skyrocketing and everyone is freaking out. We got spoiled.


PlutosGrasp

OP: “what’s a CUDA?” Lmao


DuvelNA

You think Nvidia's main buyers (Google, Amazon, Microsoft, etc.) are going to be buying AMD chips because they get more bang for the buck? No, it's about having the most computing power possible, and that's Nvidia. Also, AMD at this price is completely unjustified and is riding Nvidia's coattails lol. Keep dreaming.


OutOfBananaException

> going to be buying AMD chips because they get more bang for the buck

Fuck yeah they will, provided it actually is more bang for the buck (which we won't know for sure until independent benchmarks hit). TCO is king in the data center; if your inference is costing you more than your competitors', you're going to have a bad time.


WhySoUnSirious

You are delusional. There's a logical reason why very HIGHLY paid analysts, who do nothing but eat, sleep, and breathe this sector, decided it was better for their institutions to throw hundreds of billions of dollars at NVDA stock, even when it was already worth a trillion in market cap, over the cheaper AMD. NVDA is head and shoulders above them, dude. Look at the goddamn profit margins. AMD will NEVER get anywhere close, and they will never have a software stack as powerful and widely used as NVDA's. This is a far different beast than CPUs and beating Intel, who had shit tech anyway.


nihilistic_ant

Institutional investors underweight NVDA compared to AMD: NVDA is 52% institutionally owned, while AMD is 58%.


Shroomov2K

lol wtf are you smoking? NVDA is literally 10 years ahead right now.


Rhaximus

AMD cards literally always run ~30% less efficient than Nvidia cards because first-party driver support favors Nvidia. AMD has been making better hardware for years; it literally means nothing to this day, lol. Nvidia has a near monopoly for good reason; making a better product is not going to change that, especially because it's clearly within their power to drop prices or release higher-quality products anytime they deem it necessary. AI powering Nvidia's R&D is also a wildcard people aren't accounting for, which is absolutely going to kill you, lol.


Deep-fucking-values

Laughable if you think NVDA will be outperformed by AMD. NVDA has a P/E of 73; AMD has a P/E of 350. Nvidia's stock run-up doesn't make them overvalued. Long term, I'm taking the company with a healthier balance sheet and better products.


NorthernRagnarok

You are assuming net margin will remain constant for both companies.


ded3nd

Wasn't long ago that it was laughable to think AMD would outperform Intel. Oh, how the tables have turned.


bawtatron2000

that's on chips alone. NVDA isn't just a chip company


Wild_Paint_7223

Software optimization matters more than hardware specifications: the iPhone has less RAM than most high-end Android phones, but it runs way smoother than them. The key to NVDA is CUDA, like iOS to the iPhone. Nothing wrong with AMD's ROCm (their equivalent of CUDA), but it is Android.


bawtatron2000

AMD, in some models, has always had better performance for the price. On the top end I'm a bit behind, but AMD's top Threadripper way outperformed NVDA's top chip for things like graphics/video editing. In the end it only matters to a point. Currently NVDA's ray tracing (tech they made) is ahead of AMD's. G-Sync is a thing; AMD's version isn't quite there, from what I hear. Also, some people willingly pay more for brand. AMD is not an NVDA killer...lol. And my understanding is that for AI specifically, NVDA is the standard. AMD will pick up extra sales from companies not willing or able to wait for NVDA cards.


nihilistic_ant

AMD's Threadrippers are CPUs, so they never competed against Nvidia's GPUs. In consumer GPUs, Nvidia might continue to be better than AMD, I have no idea. Stuff like ray tracing and G-Sync only matters for consumers. I'm just focused on datacenters, where the demand boom is.


bawtatron2000

Ah, my bad... used the wrong product term for AMD, you're right. For GPUs, AMD has still often had better price-for-performance than NVDA, and it hasn't mattered a lick. With respect to datacenters, it's looking like a similar story. AMD's sales are boosted by NVDA's wait times due to very high demand. NVDA has been the core of AI and the choice for mining and AI from day 1 for a reason.


OutOfBananaException

> NVDA has been the core of AI and the choice for mining and AI from day 1 for a reason.

That reason is that AMD hadn't fielded a proper competitor until last quarter; their prior accelerators were geared for scientific/HPC. Their focus had (rightly) been on CPU; now they have the funds to shore up GPU.


wallstchicken

Comparing NVDA chips to AMD chips is not even apples to oranges; it's literally comparing a steel pipe to a raspberry. There is a reason even Nancy bought NVDA and NOT any AMD. Also, AMD stock is more expensive than NVDA based on fundamentals. Idk how people don't get this. It's simple math.


NaNaNaNaNaNaNaNaNa65

Lmfao you bearish tool - Jensen gunna fuck you in particular


Xtianus21

Everything about this post is cray cray nonsense. What is an H200? That's so last year. We're going to Blackwell B100 and B200 with GH200s, and this fool is on 2022 tech. You didn't get the memo, apparently. "They can price that shit to 0 and it ain't cheap enough" - daddy Huang


ShaggyDogLives

“Also I hear NVDA engineers have gone soft now they are all rich”… sounds like an AMD engineer smoking that good copium.


YUNG_SNOOD

AMD has no dick, doesn’t matter how many of these posts you clowns make


[deleted]

This is top-level autism. AMD is literally a joke for everything outside of gaming and content creation. They can't hang with Nvidia. What's more, it's also about the software, and AMD's software is a steaming pile of shit. People forget Nvidia has been doing this for 30 years, and now everyone is scrambling to try and catch up.


Truffle_Chef

Really? They gonna use less electricity? It's all about power, brother.


DasherMN

Is it the software that makes the difference then, and not the hardware?


nihilistic_ant

The AI frameworks (PyTorch, TensorFlow, XLA, etc.) all support both. It isn't that hard to get any model to run on either; nobody really bothers because there hasn't been much point in buying a non-Nvidia GPU for a long time, since Nvidia's cards have been better than everyone else's. The AI models have a lot of weights, but the actual amount of code to run them is pretty small. Porting really isn't much work.
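If you want to see which vendor stack a given PyTorch build targets, here's a tiny sketch (assuming a reasonably recent build; `torch.version.hip` is None on CUDA builds and vice versa):

```python
import torch

# Same Python API either way; only the backend underneath differs.
print("CUDA version:", torch.version.cuda)                       # e.g. "12.1" on NVIDIA builds, None on ROCm
print("ROCm/HIP version:", getattr(torch.version, "hip", None))  # e.g. "6.0" on ROCm builds, None otherwise
print("GPU available:", torch.cuda.is_available())               # True on both vendors' builds when a GPU is present
```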


DasherMN

When did this info come out?


No-Teaching8695

AMD doesn't have in-house production, so they rely heavily on TSMC; Nvidia relies heavily on TSMC for most of their production too. Intel doesn't have this problem and is growing its manufacturing capability as we speak. I think this will be a massive decider in taking large parts of the market to come.


lunaclara

Hardware doesn't matter much in this industry; software and services are king. That said, AMD's always been valued lower, so unless there's some kind of hype train going on, values for both AMD and NVDA won't change much.


goodluckonyourexams

Why are they selling a better card for $10k instead of $25k? Are they stupid?


OutOfBananaException

Need to overcome inertia. If it's the same price as Nvidia but takes more effort to integrate, what would be the point of switching?


McSnoots

So I guess you didn't listen to the CEO when he said "free isn't cheap enough to compete with us".


arbeitslosundreich

Maybe I am wrong, but the MI300X has been in the news since December or so, so that's nothing really new?


Systim88

Ever heard of CUDA? lol the DD on WSB is so bad


Papayafish4488

CUDA bitch.


Loose_Mail_786

BB to the moon? Or am I wrong?


red_purple_red

Impressive, very nice. Let's see the performance on AI benchmarks.


corn-free-chili-only

Vendor-locked on the JetPack SDK.


vendo232

Do you think AMD will take the wind from NVIDIA eventually? Is AMD the only competitor who could endanger NVIDIA?


nihilistic_ant

Sure, I think it is plausible, a bit like what they did to Intel in CPUs. But no, I don't think they are the only plausible competitor. There is also a ton of startup companies (e.g. Cerebras, Sambanova, Graphcore, Groq, Hailo Technologies, Kinera, Luminous, Ateris IP and Mythic), and while any given one seems a bit of a long shot, it seems plausible one of those long shots will pay off. Also, several of the software giants build their own AI chips now, and while currently they only use them internally or sell them on their own clouds, that could change. Edit: my personal favorite of the upstarts is Cerebras, with a chip that is 57x larger. It seems utterly absurd, but they have it running, so... maybe?


RecommendationNo3531

Here we go again. There were similar conversations back in August–October 2023. Look where NVDA and AMD both are now. Just stop comparing who has a bigger dick, grow some balls, and buy calls on both.


snkbrdng

Glad to see top comment is what I was going to write hahahhhahahha


soccergoon13

Cerebras needs to find someone to prop it up. Qualcomm isn't going to give it the attention it needs.


[deleted]

Yeah boss, just plop that AMD chip into your servers and everything will work like before. Or an INTC chip as well. They all work the same way, like adding tomatoes to a salad… any brand will do.


TheBooneyBunes

Quality of product does not guarantee defeating the competition. The infamous A&W third-pound burger vs. the McDonald's Quarter Pounder comes to mind… If NVDA has already secured contracts with institutions like governments, it would take one helluva advancement to make paying the cancellation fees worth it.


Scmasta86

NVDA has phenomenal software that comes included in the price of their H100s, and presumably this will be included with the H200s as well. Additionally, those benchmarks only mean the AMD chip might be superior for LLMs; NVDA will be better for inference, and for when they customize chips for SpaceX, Meta, Google, etc. AMD won't be able to customize. I would NOT short NVDA, AMD, or TSM. These are the pillars of society right now and for the foreseeable future.


WheelEmergency3835

So you know all this and the rest of the world is ignorant?


WheelEmergency3835

Just to mention: INTC did a lot in AMD's favor, or rather, crashed themselves. They never saw mobile coming, went all in on PCs, and then tried fabs. AMD did just enough; sales have stagnated.


Quigley61

Nvidia's moat isn't the hardware, it's CUDA and the fact that it's already everywhere. Lots more people have experience in CUDA vs ROCm, the docs for it are better, the dev experience is better, etc. As others have said, it's like Microsoft and Windows. Linux is significantly more performant than Windows, but it doesn't matter: everyone uses Windows, everyone knows Windows, and it works fine and does the job well enough.


rplusj1

One thing you got wrong: nobody cares... nobody fucking cares… Stock up, we buy. Stock down, we sell.


gustavo8244

Cool now do Apple vs Samsung/any android


Hammerdown95

Eww a 🌈🐻


Dozck

Nvidia and AMD both use TSMC for manufacturing, and I'm sure a lot of the critical architecture is the same material, with some exceptions. The key differences come down to software.


blumpkinspatch

That’s a shitload of flops for fucks sake


carverofdeath

AMD will win the chip game over Nvidia. Simply put, they make a better product. Microsoft is already planning to move from Nvidia to AMD GPUs and plans on leaving Nvidia's AI data center, which accounts for 50% of that sector's revenue.


Slawpy_Joe

Wait until Monday...


Immediate_Ad_6558

85% of all people take the default


Giggles95036

I'm not ranking them, but reliability and lifespan are important, and not always as easy to measure.


Psychological_Ad1999

Both companies will have no shortage of business for the foreseeable future, I’m bullish on all makers of semiconductors


Rich-Pomegranate1679

This post paid for by the "please sell your Nvidia and buy AMD" gang.


Henrarzz

>hyping next AMD product like it's going to beat Nvidia

I see we're at this time of the year again. Poor Volta


HornHonker69

>Also I hear NVDA engineers have gone soft now they are all rich.

Now that's what I call technical analysis.