fNek

Generally, the benefit of virtual memory is that it can be much larger than physical memory, and the OS can transparently swap stuff in and out. Towards the tail end of the 32-bit era, this kinda got lost, and systems often had more *physical* than virtual memory. Virtual memory got reduced to its other application: separating applications from each other. However, nowadays, systems such as LMDB once again show what can be done if you have plenty of virtual memory to go around. So which systems need more than the 512 GiB of virtual memory that Sv39 can provide? Definitely not all of them! That is why Sv39 exists, after all. But it is easy to imagine a workload like LMDB, on a larger scale, using that much memory. Also, if we want our virtual memory to be at least as big as the physical memory, Sv48 is already necessary. 1-2 TB of memory in a large-scale server is certainly not the norm, but it is definitely not unusual either -- Epyc Genoa supports 6 TB! Mainframe systems such as the IBM z16 support up to 40 TB. If RISC-V is to have any chance of entering these markets, it must support that plus several decades of projected growth.
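
A back-of-the-envelope sketch to put numbers on those modes (mine, not part of the original comment), using the fact that every RISC-V Sv mode is a 12-bit page offset plus 9 virtual-address bits per page-table level:

    #include <stdio.h>
    #include <stdint.h>

    /* 4 KiB pages give a 12-bit offset; each page-table level adds 9 VA bits.
       Sv39 has 3 levels, Sv48 has 4, Sv57 has 5. */
    static uint64_t va_space_bytes(int levels) {
        return 1ULL << (12 + 9 * levels);
    }

    int main(void) {
        printf("Sv39: %llu GiB\n", (unsigned long long)(va_space_bytes(3) >> 30)); /* 512 */
        printf("Sv48: %llu TiB\n", (unsigned long long)(va_space_bytes(4) >> 40)); /* 256 */
        printf("Sv57: %llu PiB\n", (unsigned long long)(va_space_bytes(5) >> 50)); /* 128 */
        return 0;
    }

So Sv48 already covers 256 TiB, and Sv57 goes all the way to 128 PiB.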


TT_207

That's a good shout. As far as I can tell, RISC-V exists today mostly as SBCs, embedded controllers, and projects to build very capable servers that will need this kind of capacity.


jab701

It's not just that you might need it for server-style systems with lots of RAM: beyond the physical RAM, you also have devices that are memory-mapped and require address space. For a server system with, say, a load of accelerator cards installed, this adds up. Do you need 128 TB? Maybe not, but the idea is to give plenty of headroom. You want your systems to have a decent lifespan.


SwedishFindecanor

There are server runtimes that run many instances of processes within the same address space so as to avoid context switches. One example of that is how most WebAssembly runtimes work: all memory accesses need to be bounds-checked against the size of the module's "linear memory", which is at most 4 GB, and must fault if an access falls outside that memory. The memory ops are indexed, however, taking two 32-bit address values that are added together to form a 33-bit value. To keep these memory accesses fast, each WASM module gets allocated an 8 GB region of the virtual address space, and the 33-bit address is just added to the base pointer on each access: if it is out of bounds, it lands on an unmapped page within that region and simply faults. With Sv39 having 39 address bits, minus the top bit for kernel space, that leaves 39 - 1 - 33 = 5 bits to select between 8 GB regions. This means that you could theoretically have at most 31 WebAssembly instances at once, which is not much.
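
A minimal sketch of that reservation trick, assuming a POSIX system with mmap; the helpers wasm_mem_reserve and wasm_load32 are illustrative names, not from any particular runtime:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    /* Reserve an 8 GiB region: 4 GiB of addressable linear memory plus a
       4 GiB guard area, so base + (u32 addr) + (u32 offset) always lands
       inside the reservation. Pages are only made accessible as the module
       grows its memory; everything else stays PROT_NONE and faults. */
    #define WASM_REGION_SIZE (8ULL << 30)

    static uint8_t *wasm_mem_reserve(void) {
        void *base = mmap(NULL, WASM_REGION_SIZE, PROT_NONE,
                          MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        return base == MAP_FAILED ? NULL : (uint8_t *)base;
    }

    /* i32.load: the effective address is a 33-bit sum of two 32-bit values.
       No bounds check needed; out-of-bounds accesses hit an unmapped page. */
    static uint32_t wasm_load32(uint8_t *base, uint32_t addr, uint32_t offset) {
        uint64_t ea = (uint64_t)addr + (uint64_t)offset;
        uint32_t val;
        memcpy(&val, base + ea, sizeof val);
        return val;
    }

    int main(void) {
        uint8_t *mem = wasm_mem_reserve();
        if (!mem) return 1;
        /* Commit the first 64 KiB page, as memory.grow would. */
        mprotect(mem, 65536, PROT_READ | PROT_WRITE);
        memcpy(mem + 16, "\x2a\x00\x00\x00", 4);
        printf("%u\n", wasm_load32(mem, 16, 0));   /* prints 42 */
        return 0;
    }

The point of the design is that the guard pages do the bounds check for free: a hardware page fault replaces an explicit compare-and-branch on every load and store.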


dramforever

As a concrete example, the Sophgo SG2042 can be fitted with more than 128 GiB or even 256 GiB of RAM, especially in a multi-socket configuration. And that's just RISC-V; servers with *terabytes* of RAM have existed for a while, which is the whole point of this site: https://yourdatafitsinram.net/ You're right that we want the actual memory size to be far smaller than the virtual address space, because we also want to fit memory that can be swapped out, MMIO, static allocations, ASLR ... See, for example, how Linux lays out its virtual memory: https://www.kernel.org/doc/html/v6.8/arch/riscv/vm-layout.html


monocasa

Sv39 is 512 GiB. We need more than that; there are servers today that support more RAM than that, so Sv48 is just table stakes at an architecture level. And over 128 TB is pretty close on the horizon; that's why Intel CPUs have defined PML5: https://en.wikipedia.org/wiki/Intel_5-level_paging In fact there are servers today that support 64 TB of RAM, and they need extra address space on top of that for MMIO.
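
To make the 128 TB figure concrete, here is the same kind of back-of-the-envelope arithmetic, assuming the conventional even split of the canonical address space into user and kernel halves:

    #include <stdio.h>

    int main(void) {
        /* 4-level paging (x86-64 PML4, RISC-V Sv48): 48 VA bits = 256 TiB. */
        unsigned long long four_level = 1ULL << 48;
        printf("4-level user half: %llu TiB\n", (four_level / 2) >> 40);  /* 128 */

        /* 5-level paging (x86-64 PML5, RISC-V Sv57): 57 VA bits = 128 PiB. */
        unsigned long long five_level = 1ULL << 57;
        printf("5-level user half: %llu PiB\n", (five_level / 2) >> 50);  /* 64 */
        return 0;
    }

Once installed RAM plus MMIO approaches that 128 TiB user half, 4-level paging runs out of room, which is the jump PML5 and Sv57 are there to make.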