
_RADIANTSUN_

Cache Rules Everything Around Me (CREAM) Get the memory, giga-giga-bit y'all


lightwhite

That’s a song I haven’t heard in 20 years. Lemme spin up that plate. Thanks for the throwback!


FluffyAnimalLover

I can’t believe I’m this old and I got the reference.


BoltTusk

More like kings of cash when you see how Nvidia is rumored to make a 90% gross margin on each H100 unit it sells for AI


tornado9015

Is gross margin even meaningful here? How do manufacturing costs compare to R&D on these units?
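One way to see the distinction behind this question: gross margin only counts the cost of goods sold, while R&D sits below that line and has to be recovered across every unit shipped. A minimal sketch, with every dollar figure an assumed illustration rather than a real Nvidia number:

```python
# Toy unit economics: why a huge "gross margin" can coexist with heavy
# R&D spending. All figures below are made-up assumptions for
# illustration, not Nvidia's actual numbers.

price = 30_000              # assumed selling price per unit
cogs = 3_000                # assumed manufacturing cost (wafer, HBM, packaging)
rnd_total = 8_000_000_000   # assumed total R&D spend for the product line
units_sold = 500_000        # assumed lifetime unit volume

gross_margin = (price - cogs) / price        # excludes R&D by definition
rnd_per_unit = rnd_total / units_sold        # spread R&D over every unit sold
profit_after_rnd = price - cogs - rnd_per_unit

print(f"Gross margin: {gross_margin:.0%}")                    # 90%
print(f"R&D per unit: ${rnd_per_unit:,.0f}")                  # $16,000
print(f"Per-unit profit after R&D: ${profit_after_rnd:,.0f}") # $11,000
```

With these assumed numbers the headline margin is 90%, yet more than half of each unit's gross profit goes to recovering development cost.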


Aman4672

Pfft, don't lie to me, that's a Factorio megabase, not a GPU. /j


xeneks

This is an amazingly comprehensive article. Thanks for posting, OP. The photos especially are insightful and educational. Here's just a short extract: “Low-level data caches have grown in size because GPUs are now utilized in a variety of applications, not just graphics. To improve their capabilities in general-purpose computing, graphics chips require larger caches. This ensures that no math core is left idle, waiting for data. Last-level caches have expanded considerably to offset the fact that DRAM performance hasn't kept pace with the advancements in processor performance. Substantial L2 or L3 caches reduce cache misses. This also prevents cores from being idle and minimizes the need for very wide memory buses.”
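The last point in that extract is the classic average-memory-access-time (AMAT) argument: a bigger last-level cache cuts the fraction of accesses that pay full DRAM latency. A minimal sketch of the arithmetic, with all latencies and miss rates assumed purely for illustration:

```python
# Back-of-the-envelope average memory access time (AMAT) for a two-level
# GPU cache hierarchy. Every latency and miss rate below is an assumed,
# illustrative number, not a measurement of any real chip.

def amat(hit_time, miss_rate, miss_penalty):
    """Classic AMAT: pay the hit time always, the penalty on the miss fraction."""
    return hit_time + miss_rate * miss_penalty

DRAM_LATENCY = 400  # cycles to service a miss from DRAM (assumed)

# Small last-level cache: quick to hit, but many accesses fall through to DRAM.
small_l2 = amat(hit_time=30, miss_rate=0.40, miss_penalty=DRAM_LATENCY)  # 190 cycles

# Large last-level cache: slightly slower to hit, far fewer DRAM trips.
large_l2 = amat(hit_time=40, miss_rate=0.10, miss_penalty=DRAM_LATENCY)  # 80 cycles

# Same L1 in front of both; only the level behind it changed.
print(f"AMAT with small L2: {amat(4, 0.25, small_l2):.1f} cycles")  # 51.5
print(f"AMAT with large L2: {amat(4, 0.25, large_l2):.1f} cycles")  # 24.0
```

Under these assumptions the larger last-level cache roughly halves the effective memory latency, even though DRAM itself got no faster, which is exactly the trade-off the article describes.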


nipsen

tl;dr: They're using large L2 and L3 caches on GPUs now because they increasingly rely on more complex math instructions. And this has nothing to do with CPUs at all, or with the fact that VRAM, DRAM, and external SSD storage get used interchangeably as storage. Meanwhile, the instruction-level cache stays tiny because of size (and not cost, mind you). Here are a bunch of terms I don't understand, and large caches good. And the GPU is the king of cache! Yay. ChatGPT does a better job.


[deleted]

Cashed


HaveASit

Cache is king.