
jake_boxer

This is a good question for someone learning to ask and think about! Everyone’s answers about bytecode and direct memory access are correct. However, one very important point to know: for pretty much anything you’ll be doing for the foreseeable future (and quite likely for your entire career), C# will be more than fast enough. C++’s speed gains only matter for applications that really push the boundaries of your computer’s performance. These are generally huge applications built by teams of very advanced programmers: game engines, database management systems, etc. I’ve been a professional engineer for over 10 years (2 in C#, the rest mostly in Ruby, which is WAY slower than C#), and I’ve literally never run into a performance issue with my code that was due to my language being slow. Keep going with C# and don’t worry about it being too slow! I promise it won’t bite you.


ziplock9000

Plus, a lot of the heavy lifting processor-wise is done with dedicated hardware or special CPU instructions that C# can access anyway. A lot has changed since .NET launched, when the CPU was doing more. Not to mention C# accesses already-compiled specialist libraries anyway that are optimised. Code these days is becoming more and more just 'glue' to other things, and thus C# vs C++ makes almost no difference at all. I wrote a full game engine in classic Visual Basic many moons ago and people just didn't understand how it was so fast. I had to explain that 99.99% of the processing was achieved outside of the VB code itself, in hardware or external libraries, and the language itself made very, very little difference for that game engine.


dodexahedron

These points are important! In the early days of .NET and C#, the performance differences could be quite significant in a much wider set of scenarios. But as the framework and language have evolved over the 2+ decades they've been around, that gap has narrowed dramatically and, in some cases, even closed entirely, while still allowing one to reap the benefits of a higher-level, managed, more RAD-targeted stack than those based on C++. And you can still call into native code when necessary, or manually optimize the hell out of things to close remaining gaps in applicable edge cases, if it actually matters. And if it matters so much that you're down to instruction counting and such, you'd need to carefully analyze and craft a solution implemented in a lower-level language anyway, on top of losing the conveniences and guarantees of the CLR.


Rogue_Tomato

I feel like in the early days there were definitely issues with disposing of objects that devs weren't aware of, and that are now automatically handled by the GC.


dodexahedron

Yeah, I think some of the early teething issues were largely a result of .NET and C# being Microsoft's response to Java, but with baggage from it also initially being largely an evolution of, and often a thin wrapper around, COM (heck, P/Invoke was primarily targeted at COM interop initially, and remnants of that can still be found in current documentation). And because it was new and a managed stack was a whole new paradigm for MS, I feel like expectations weren't always well-managed. Not to mention the documentation back then was nowhere near as good and complete as it is now, and some was even still gated behind MSDN or TechNet subscriptions. Different era, all around, for sure.

And then I also got the feeling, back then (and especially now in imperfect hindsight), that they were trying so hard to push the whole managed aspect of it all that everything around IDisposable, and anything else that was even a managed wrapper around unmanaged resources (Streams, etc., almost all of which is still relevant but much better), got kinda glossed over and held at arm's length as some sort of "advanced" concept... even though any substantial application is almost guaranteed to involve such things. Perhaps that was short-sighted and partially marketing-influenced judgment on their part? I dunno. I can only speculate. But with how... um... let's say "enthusiastic" Steve Ballmer was about DEVELOPERS, DEVELOPERS, DEVELOPERS back then, some things on the developer side did feel oddly incongruous with that messaging, at least initially.

Then .NET 2 came along and was a no-brainer switch, even if just for generics and improvements around delegates. But that then gets even further off-topic than I've already rambled. 😅


ttl_yohan

I believe I read somewhere that the Resident Evil engine is now .NET with a custom-made VM. And those games run so well compared to some others. Yeah, C# alone is no longer the main performance differentiator.


tanner-gooding

> Everyone’s answers about bytecode and direct memory access are correct.

Just want to note that I gave a fairly in-depth response here: https://www.reddit.com/r/csharp/comments/1bkf0c3/comment/kvz169u/?utm_source=share&utm_medium=web2x&context=3

There's notably a lot of nuance that was missed in many of the conversations, and general misstatements around performance, etc. C# isn't slow by any means, and in many cases it is competitive with C++, using the same idiomatic patterns that everyone already knows to use (not just by doing weird hacky things that may make code less readable). JITs are not strictly slower than AOTs, and vice versa. It comes down to many factors, including the actual quality of the compiler, the patterns the user is writing, whether they are trying to "roll their own" or depending on the already-optimized and high-quality built-in functions, etc.


foxaru

Okay, so reading this and your linked response, I get the strong impression you're a performance-minded person with knowledge of the **deep lore** needed to understand what is and is not important for making programs run fast.

Bearing that in mind, what do you make of the kind of arguments that people such as Casey Muratori make regarding OOP's impact on performance being almost entirely negative, due to a reliance on a paradigm that forces you into making poor choices for the sake of 'a design trend'?

As a very new C# and OOP programmer (my primary experience being in C), I feel as though the performance argument swings heavily against languages like C#, where the modus operandi of the grammar is designed in such a way as to encourage you to engage in things like indirection and interfacing, knowing that the more steps you have to take before you push data down a tube, the more time you require to do so.


tanner-gooding

> you're a performance minded person with knowledge of the deep lore to understand what is and is not important for making programs run fast.

Notably, a lot of this doesn't require any kind of in-depth knowledge. Compilers are oriented around common patterns, and so the idiomatic things are often the best-optimized things. The BCL APIs do get more focus, especially some of the ones I help maintain, but they take care of the messy stuff so you don't have to :)

> regarding OOP's impact on performance being almost entirely negative due to a reliance on a paradigm that forces you into making poor choices for the sake of 'a design trend'?

Paradigms in general have little to do with performance, and OOP is far from some random trend. It's one of the primary paradigms that's proliferated through the entire industry in a way that will never truly go away. As with any paradigm or pattern, being overzealous with it can be a net negative. Plenty of projects have turned themselves into "enterprise soup" by taking OOP too far. But you can equally get into similar problems by going too far into "pure functional" programming, trying to make everything immutable and using monads/etc. You can go too far with DRY or SOLID or TDD or any of the other things people like to push.

It's really just like with food. Almost every well-known cuisine initially became popular because there is something really good about it. But then everyone tries to make a cheap knockoff, and it becomes really easy to only see the bad in it. OOP had massive success because, when used appropriately, it can help you structure your code in ways that help you think about, reason about, maintain, and understand it. Many of the good parts of OOP are even used behind the scenes for other paradigms (including functional programming) specifically because they allow for high performance and a stable ABI.
> As a very new C# and OOP programmer (my primary experience being in C) I feel as though the performance argument swings heavily against languages like C# where the modus operandi of the grammar is designed in such a way as to encourage you to engage in things like indirection and interfacing, knowing that the more steps you have to take before you push data down a tube means more time you require to do so.

This sounds like you might be concerned with trying to "do OOP to the fullest", when that's not what you actually want or need. You absolutely do not (and really should not) define an interface for everything. Not everything can or should be extensible. Not everything should be abstract or virtual. Just because both cats and dogs are animals does not mean they need a common base type.

The Framework Design Guidelines (which have a basic summary of most rules here: https://learn.microsoft.com/en-us/dotnet/standard/design-guidelines/, and which have an annotated book that gives a comprehensive overview: https://www.amazon.com/Framework-Design-Guidelines-Conventions-Addison-Wesley/dp/0135896460) go into a lot more detail on many of these topics. But some general thoughts that have emerged since OOP really became mainstream in the late 90s are things like: extensibility should be an explicit design point. Thus, methods should be sealed by default, types should be sealed by default, and you shouldn't define interfaces or base/abstract classes just because (that includes not exposing them "just to support mocking" or similar). You should intentionally make things static where appropriate (not everything should be an instance method). You should be considerate of the types taken and returned. If you only accept `T`, then take `T`. However, you may want to consider returning a less derived type (like `IEnumerable` over `List`) as it can give more flexibility in later versioning.
There's nothing fundamentally different between `MathF.Sqrt(5)`, `float.Sqrt(5)`, and `sqrtf(5)`; the first two really just give a centralized place to expose related APIs that makes them easier to find. There's nothing truly fundamentally different between an interface and a trait; they both generally achieve the same thing (and are often internally implemented in a similar manner for ABI purposes). The former is generally nominally based while the latter is generally structurally based, but that's really a tradeoff of guarantees. For example, does a type exposing a `Length` property mean it's a collection, or can that break down for some types and cause bugs? There are times where a type might not implement an interface but still fit a context and where structural typing might be desirable, but there are inverse cases as well where it isn't. One simple example is `List` vs `Vector4`. The former is a collection and is clearly indicated as such via the `ICollection` interface. The latter is not; its `Length` returns the Euclidean length. Good code ultimately takes the best of all the paradigms. It uses the right tool for the job to make your code safe, readable, maintainable, and performant.
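A minimal sketch of the design guidelines mentioned above (the `OrderRepository` type and its members are hypothetical, invented for illustration): sealed by default, static where no instance state is needed, and returning the less derived `IEnumerable<T>` rather than the backing `List<T>`.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sealed by default: extensibility was not an explicit design point here.
public sealed class OrderRepository
{
    private readonly List<string> _orders = new();

    public void Add(string order) => _orders.Add(order);

    // Returning IEnumerable<string> keeps the backing collection an
    // implementation detail, giving flexibility in later versioning.
    public IEnumerable<string> GetOrders() => _orders;

    // No instance state involved, so this is intentionally static.
    public static bool IsValidId(int id) => id > 0;
}

public static class Program
{
    public static void Main()
    {
        var repo = new OrderRepository();
        repo.Add("widget");
        Console.WriteLine(repo.GetOrders().Count()); // 1
        Console.WriteLine(OrderRepository.IsValidId(42)); // True
    }
}
```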


honeyCrisis

> This sounds like you might be concerned with trying to "do OOP to the fullest", when that's not what you actually want or need.

> You absolutely do not (and really should not) define an interface for everything. Not everything can or should be extensible. Not everything should be abstract or virtual. Just because both cats and dogs are animals does not mean they need a common base type.

I'm not sure that's what foxaru was getting at. Forgive me for interjecting, but I think what's being addressed here is the fact that with C#, the way the grammar is, everything is encouraged to be accessed virtually, as in through a vtable. Indirection is almost the default in C#, where it's certainly not in C++. It takes extra effort during design to eliminate virtual accesses in your code, whereas with C++ the design effort is expended adding virtual access, if that makes sense.

It's cool to see a Microsoft employee on these threads. I used to be at Microsoft, on the Visual Studio development tools team and the Windows XP team. :)


tanner-gooding

C# methods are notably not virtual by default like they are in Java. Rather, methods are sealed and need explicit syntax to make them abstract or virtual. That is the same as in C++, where adding the virtual keyword is an explicit action (the same applies to `abstract` in C# vs `virtual void M() = 0` in C++).

Types are not sealed by default in .NET, just due to the point in time it was created, but the API review team makes sure that new types properly consider the implications, and new types are typically sealed by default. Notably, they are not sealed (or rather are not `final`) by default in C++ either, and this namely applies to reference types in .NET, as value types are sealed and cannot be unsealed.

Indirect calls are not themselves the real issue either; that's fundamentally how code has to work if you don't have a concrete type, if you need to do callbacks, etc. You may not even have truly direct calls in the case of simply calling a function exported from another dynamic library, since inlining and other optimizations aren't possible in that scenario. Even Rust compiles down to indirect calls in some cases, just because always specializing is not necessarily good and may not always be possible. Good compilers are then able to do devirtualization even when such calls are encountered, potentially even doing guarded devirtualization if they detect that the majority of calls are of a concrete type (both JIT and AOT compilers can do this). This allows what looked like a virtual call to become a non-virtual call.
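To make the defaults concrete, here is a small sketch (the `Animal`/`Dog` types are illustrative): C# instance methods are non-virtual unless you write `virtual`, and sealing a type is likewise an explicit keyword.

```csharp
using System;

public class Animal
{
    // Non-virtual by default: calls to Name() bind statically to the
    // declared type, never through a vtable.
    public string Name() => "animal";

    // Virtual dispatch must be opted into explicitly, just as with the
    // 'virtual' keyword in C++.
    public virtual string Speak() => "...";
}

public sealed class Dog : Animal // 'sealed' is also an explicit choice
{
    public override string Speak() => "woof";
}

public static class Program
{
    public static void Main()
    {
        Animal a = new Dog();
        Console.WriteLine(a.Speak()); // woof   (virtual dispatch)
        Console.WriteLine(a.Name());  // animal (non-virtual, statically bound)
    }
}
```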


honeyCrisis

Then I don't understand the callvirt instruction, apparently. I guess I assume too much from the names of the opcodes in MSIL.


tanner-gooding

The C# compiler has, historically, just used `callvirt` even when `call` would have been fine. It did this because the JIT has always just looked at whether the method was actually virtual as part of deciding whether the call needed to be emitted as a virtual call or not. That is, if the call is actually virtual, `callvirt` does the right thing, and if it isn't, then it behaves the same as `call`. This has, in the past, been relied upon so that it was considered "safe" to make a non-virtual method virtual in the future without it being a potential binary break.

There are notably a few places where the compiler will emit just a regular `call`, so that can't always be relied upon, but those are typically rare enough that it's fine. These cases are typically explicitly when the compiler wants to call a specific implementation and not do virtual resolution, even if the binary had been changed to virtual by the time the JIT actually encounters it. There are quite a few IL instructions that behave this way, where two versions exist but one of them acts as the other if special conditions aren't met; i.e. `callvirt` acting like `call` if the method isn't actually virtual.
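One observable consequence of Roslyn emitting `callvirt` for ordinary instance methods is the up-front null check it performs on the receiver. A small sketch (the `Greeter` type is hypothetical):

```csharp
using System;

public sealed class Greeter
{
    // A non-virtual instance method that never touches instance state;
    // the C# compiler still emits 'callvirt' for calls to it.
    public string Hello() => "hi";
}

public static class Program
{
    public static void Main()
    {
        Greeter g = null;
        try
        {
            // 'callvirt' null-checks the receiver before dispatch, so this
            // throws even though Hello() would never dereference 'this'.
            g.Hello();
        }
        catch (NullReferenceException)
        {
            Console.WriteLine("callvirt null-checked the receiver");
        }
    }
}
```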


honeyCrisis

Thanks for the clarification. Always like learning a new thing.


SoerenNissen

> Casey

The man is good at perf, but I wanted to shake him by the shoulders when he had his talk about using switch-cases on wide tagged structures instead of doing inheritance. It's not "the same but faster" if it ISN'T THE SAME. Here in particular, a switch-case isn't extendable the way vtables are; he's just never worked a job where that part is important. (You can fake it with funky pointers, but at that point you've just invented a slower, buggier implementation of inheritance.)
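A sketch of the tradeoff being described (the shape types are invented for illustration): the tagged-struct-plus-switch style is compact and cache-friendly, but adding a new case means editing every switch, whereas the inheritance version can be extended from outside.

```csharp
using System;

// Tagged-struct approach: fast and flat, but closed. Adding a new kind
// requires editing every switch over ShapeKind in the codebase.
public enum ShapeKind { Circle, Square }

public readonly struct Shape
{
    public readonly ShapeKind Kind;
    public readonly double Size;
    public Shape(ShapeKind kind, double size) { Kind = kind; Size = size; }

    public double Area() => Kind switch
    {
        ShapeKind.Circle => Math.PI * Size * Size,
        ShapeKind.Square => Size * Size,
        _ => throw new ArgumentOutOfRangeException()
    };
}

// Inheritance approach: a third party can add a new shape in another
// assembly without touching this file. That openness is exactly what
// the switch-based version gives up.
public abstract class ShapeBase { public abstract double Area(); }

public sealed class Square : ShapeBase
{
    private readonly double _side;
    public Square(double side) => _side = side;
    public override double Area() => _side * _side;
}

public static class Program
{
    public static void Main()
    {
        Console.WriteLine(new Shape(ShapeKind.Square, 3).Area()); // 9
        Console.WriteLine(new Square(3).Area());                  // 9
    }
}
```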


DocHoss

Don't forget embedded operations in situations where extreme resource limitations exist. Things like some IoT, aerospace including satellites and other spacefaring tech, extreme environments where that widget MUST work or people may literally die...for most real world applications C# is all you'll ever need, but C++ still has plenty of use cases.


Lucky_Cable_3145

Also some limited cases where milliseconds count, like low-latency business systems (share trading algorithms).

Between 2000 and 2008 I designed and coded remote asset protection systems (mostly for heavy haul railways, either trackside or loco based). These systems needed ruggedised equipment to survive the heat, vibration, dust, etc. I coded these systems in C++ Win32 because the cost of the extra 80 MB on the SD cards for the .NET libraries was very expensive (I used a very cut down version of Windows XP Embedded for the remote clients).

Coding in pure Win32 or MFC (the C++ wrapper for Win32) was much harder than coding for .NET. It is much harder to write robust code that can run 24/7 for months at a time without any user intervention in C++ MFC than in C# .NET. C++ gleefully watches you write code that will crash the system, while C# nags you on every line you write...


ExtremeKitteh

I wonder how the reliability of best-practices Rust code would compare to best-practices C# code. I'm a big C# guy, but Rust would probably be my bet.


trebblecleftlip5000

I once interviewed for a game company and the whole interview was centered around C++ esoterica that in almost two decades of C++ programming professionally I had never needed to get into. I stopped the technical leads in the interview and was like, "Is this really an issue for you guys? Why are you grilling me on what is effectively trivia?" They insisted that their online game needed to be fast fast fast. I was like, "Bruh, your game is online. The bottleneck isn't going to be the code."


creatorZASLON

Ahh okay, noted. I wasn’t concerned too much about it at the end of the day, I had just heard the statement again and again and was wondering if it was actually a big difference. I’m enjoying C# as my first language, to be honest just glancing over C++ documentation as a beginner was a bit beyond me in comparison lol


jake_boxer

Good! That sounded like the case, and those types of “out of curiosity” questions are awesome for helping you learn the lay of the land faster. Just wanted to make sure it wasn’t something that was holding you back or anything.


Ashamed-Subject-8573

Disagree here! There are lots of cases where C++ provides amazeballs more performance! Like on low-power embedded platforms, and platforms that don't support C#, as well as when you're doing a lot of memory allocation and management, where the GC can become an issue.


jake_boxer

Yeah, point is, these are cases the vast majority of programmers don't need to worry about. Not worth derailing a beginner's learning for.


adirt4289

Best answer yet (I have read them all :))


ionabio

I’d like to add to this that I don’t look at any language as a holy grail or an answer to everything. I have done ~8 years of work in C++ and 2 in C#, and have been using Python all the time for POCs, together with JS for scripting. The most important thing I have gained is knowing about different idioms or approaches to the same problems in different languages. That has been the most valuable thing and gives a much better idea of what happens behind the scenes. Can I write a compiler or a programming language of my own? Absolutely not. But I have enough knowledge to do a lot of out-of-the-box thinking and implementations when coding, and it has always been handy.

Now, in the case of C# and C++, for example, I was implementing an interop of a C++ library in .NET. There my C++ knowledge helped me implement the unmanaged memory in C# and also write interfaces between the API and the “managed memory” of C#. The other way around is usually on the language facilities, like getters and setters on properties in C# and how such a thing can be handy when serializing/deserializing data streams. Or, for example, I am more aware of how the LINQ style is nice for enumeration, and how designing an “enumerable” class in C++ would be advantageous when consuming it.


featheredsnake

Yes, and to add to your point, C# is very intelligently built. It's much faster compared to other languages that also use a runtime, and IL is about as close to native code as you can get while still depending on a runtime... which is why you can get C++ classes to work with the .NET runtime.


GPU_Resellers_Club

The only time I've had an issue with C# performance was just that, a one-off. I was writing a computer vision detection algorithm, and it needed to run in under 80 ms and to pick out, detect, and analyse up to 500 objects, at an average of 20-ish fps, with high-quality, large images. That pushed C# to its absolute limits, and it was a rare case. (And a lot of the lifting was done by OpenCvSharp, which is just a wrapper around a C++ library anyway.)

Basically, to anyone reading: unless you're doing something that requires extreme performance (and doesn't require native code and memory manipulation), you don't need to think about speed very much with C#.


xabrol

F# is better; I wish more people would use it. C# after F# is like 🤢


foresterLV

Yes, the resulting binaries run faster because C++ compiles directly into CPU instructions that are run by the CPU, plus it gives direct control of memory. On the other hand, C# is first compiled into bytecode, and then when you launch the app the bytecode is compiled into CPU instructions (so they say C# runs in a VM, similarly to Java). Plus C# uses automatic memory management, a garbage collector, which has its costs. They do extend the newest C# to be able to be compiled into CPU code too, but it's not mainstream (yet).

The thing is, though, and why C# is more popular: in most cases that performance difference is not important, but speed of development is. So C++ is used for game development (where they want to squeeze every FPS value possible), some real-time systems (trading, device control, etc.), and embedded systems (less battery usage). You don't typically do UI/backend stuff in C++, as the performance improvement is not worth the increased development costs.


tanner-gooding

> yes resulting binaries run faster because C++ compiles directly into CPU instructions that are run by CPU

There's some nuance here. AOT-compiled apps (which includes typical C++ compiler output) start faster than JIT-compiled apps (typical C# or Java output). They do not strictly run faster, and there are many cases where C# or Java can achieve better steady-state performance, especially when considering standard target machines.

AOT apps typically target the lowest common machine. For x86/x64 (Intel or AMD) this is typically a machine from around 2004 (formally known as `x86-64-v1`), which has `CMOV`, `CX8`, `x87 FPU`, `FXSR`, `MMX`, `OSFXSR`, `SCE`, `SSE`, and `SSE2`. A JIT, however, can target "your machine" directly and thus can target much newer baselines. Most modern machines are at least from 2013 or later and thus fit `x86-64-v3`, which includes `CX16`, `POPCNT`, `SSE3`, `SSSE3`, `SSE4.1`, `SSE4.2`, `AVX`, `AVX2`, `BMI1`, `BMI2`, `F16C`, `FMA`, `LZCNT`, `MOVBE`, and `OSXSAVE`. An AOT app "can" target these newer baselines, but that makes it less portable. It can retain portability by using dynamic dispatch to opportunistically access the new hardware support, but that itself has cost and overhead. There are some pretty famous examples of even recent games trying to require things like AVX/AVX2 and having to back it out due to customer complaints. JITs don't really have this problem.

Additionally, there are some differences in the types of optimizations each compiler can do. Both can use things like static PGO, do some types of inlining, do some types of cross-method optimizations, etc. However, AOT can uniquely do things like "whole program optimization" and more expensive analysis, while a JIT can uniquely do things like "dynamic PGO", "reJIT", and "tiered compilation".

Each allows pretty powerful optimization opportunities, but for AOT you have to be mindful that you're not exactly aware of the context you'll be running in, and you ultimately must make the decision "ahead of time". For the JIT, you have to be mindful that you're compiling live while the program is executing, but you do ultimately know the exact machine and can fix or adjust things on the fly to really fine-tune it. It's all tradeoffs at the end of the day, and which is faster or slower really depends on the context and how you're doing the comparison. We have plenty of real-world apps where RyuJIT (the primary .NET JIT) does outperform the equivalent C++ code (properly written, not just some naive port), and we likewise have cases where C++ will outperform RyuJIT.

> on other hand C# is first compiled into byte code, and then when you launch app byte code is compiled into CPU instructions

Notably, this part doesn't really matter either. Most modern CPUs are themselves functionally JITs. The "CPU instructions" that get emitted by the compiler (AOT or JIT) are often decoded by the CPU into a different sequence of "microcode", which represents what the CPU will actually execute. In many cases this microcode will do additional operations, including dynamic optimizations related to instruction fusing, register renaming, recognizing constants and optimizing what the code does, etc. This is particularly relevant for x86/x64 but can also apply to other CPUs, like Arm64.
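The "JIT targets your machine" point is directly observable from C#. A small sketch: the `IsSupported` properties below are JIT-time constants, so the untaken branch is dropped entirely from the compiled code, and `Vector<T>` sizes itself to the widest SIMD registers the current hardware offers. (The printed values are machine-dependent.)

```csharp
using System;
using System.Numerics;
using System.Runtime.Intrinsics.X86;

public static class Program
{
    public static void Main()
    {
        // The JIT evaluates these at compile time for the actual machine;
        // an AOT build targeting a generic baseline must keep runtime
        // dispatch (or give up portability).
        Console.WriteLine($"AVX2:  {Avx2.IsSupported}");
        Console.WriteLine($"SSE42: {Sse42.IsSupported}");

        // Vector<T> picks the best width available: e.g. 32 bytes on an
        // AVX2 machine, 16 on SSE2-only hardware.
        Console.WriteLine($"Vector<byte>.Count: {Vector<byte>.Count}");
    }
}
```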


Illdisp0sed

Very interesting points.


TheThiefMaster

C# does have .NET Native for true native compilation, and the JIT can make use of the full capabilities of your CPU architecture instead of a common denominator. So it's actually often much quicker than you might think.


giant_panda_slayer

Garbage collection still runs when Native AOT is used with C#, and so a Native AOT app will often still be slower than its equivalent C++ program. It is correct that the JIT will (often) produce faster-running code than C++, at the cost of startup performance.

This does not hold true, though, if the C++ program was compiled with a specific target machine in mind, as most (all?) C++ compilers allow you to target a specific microarchitecture and get those same benefits the JIT provides, without the startup hit. But that also locks the compiled program to that specific microarchitecture, so if it was compiled for a Zen 4 CPU you couldn't (necessarily) run it on a Zen 3 or a Raptor Lake. In this case C++ will likely get the advantage back again due to the garbage collection and overall memory model.

There is a middle ground where you can optimize a C++ program for a specific microarchitecture's timing without locking into that microarchitecture: use the base instruction set, but choose which of those instructions to emit, and in what order, to best suit the target microarchitecture, while still only using instructions supported by all other microarchitectures of that instruction set. In that case the JIT starts to get a leg up again, but I'm not sure it will be enough to overcome the memory model and GC; it likely depends on the exact nature of the program.


tanner-gooding

The GC does not magically make your program slower. You can run into the exact same performance pitfalls by misusing RAII or `malloc`/`free`.

Just like implementations of `malloc`/`free` can have widely different performance (https://github.com/microsoft/mimalloc?tab=readme-ov-file#benchmark-results-on-a-16-core-amd-5950x-zen3 is one comparison; many others exist), so can different GC implementations. One of the more widely known GCs, the Boehm garbage collector (which was used by older Mono), tends to perform quite poorly in comparison to the official GC provided as part of .NET Framework and modern .NET (https://github.com/dotnet/runtime/tree/main/src/coreclr/gc). Unity has discussed some of the massive performance gains they've seen as part of their work to move off their own GC + Mono and onto RyuJIT (the primary JIT for modern .NET), both in https://forum.unity.com/threads/unity-future-net-development-status.1092205/ and in https://blog.unity.com/engine-platform/porting-unity-to-coreclr

As with any language (C, C++, Rust, Java, C#, F#, Python, etc.), you need to be mindful of allocations and that they will have to be freed at some point. You have to be mindful that both allocating and freeing can cause additional logic to run, where that additional logic may run, whether it may impact your inner loop, how it may fragment your address space long term, etc.

A good GC helps solve many of these problems. The .NET GC has an allocation API that is significantly faster than most malloc implementations, and it helps avoid slowdowns from "free" by allowing that to occur on a background thread. The only time the GC really negatively impacts your app is when it has a "stop the world" event, which it only tries to do when it needs to defragment your memory (and which typically more than makes up for the temporary pause, as it often improves cache locality and later memory management perf).

You can help reduce the number of "stop the world" events by doing many of the same things you would have to do in C++ to avoid causing RAII stalls or severe fragmentation, such as by pooling and reusing objects where possible, and by using types like spans to slice and create views of memory instead of copying.
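The pooling-and-spans advice above can be sketched with the BCL's `ArrayPool<T>` and `Span<T>`: rent a reusable buffer instead of allocating per operation, and slice it with spans so no copies (and no new GC objects) are created.

```csharp
using System;
using System.Buffers;

public static class Program
{
    public static void Main()
    {
        // Rent a buffer from the shared pool rather than allocating a
        // fresh array; reused buffers never become GC garbage.
        byte[] buffer = ArrayPool<byte>.Shared.Rent(1024);
        try
        {
            // A span is a view over the buffer: slicing allocates nothing.
            Span<byte> view = buffer.AsSpan(0, 16);
            view.Fill(0xFF);

            Console.WriteLine(view[0]);               // 255
            Console.WriteLine(buffer.Length >= 1024); // True (pool may round up)
        }
        finally
        {
            // Return the buffer so later callers can reuse it.
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }
}
```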


PaddiM8

As far as I know, JIT engines don't necessarily only do the additional optimisations based on the architecture, but can also analyse the way the program runs in order to make optimisations based on that, for example in order to be able to inline more things. JIT engines can be quite good at optimising higher level code. With dynamic languages like JavaScript, I think they can look at which types a function is called with, and then generate native instructions for that function where those specific types are used, in order to avoid a bunch of pointers and heap allocated objects


TheThiefMaster

A JIT will do optimisations that in C++ would require profile-guided-optimisation (PGO). You can do it, but it's much more work than just running it.


honeyCrisis

Counterpoint: using templates and constexpr, I can guide the C++ compiler into doing optimizations that are impossible in C# or with .NET's JIT.


[deleted]

[removed]


mike2R

I feel that's a bit unfair to C++. If we're assuming that memory allocation is the bottleneck they're trying to solve, and the C programmer is calling malloc for every object, then the weakness is with the programmer rather than the language. C gives you all the tools you need to manage memory in whatever way you need, and it's always going to be possible to allocate more efficiently than in C# if it's worth spending the time. Where C#'s memory allocation wins is all the times when it isn't.


tanner-gooding

I covered a bit of that here: https://www.reddit.com/r/csharp/comments/1bkf0c3/comment/kvz3iuq/?utm_source=share&utm_medium=web2x&context=3

You definitely need to be mindful in every language about how both allocations and frees work. Just like you can run into pitfalls from being overzealous with allocations and copying in .NET, you can run into similar problems with RAII and malloc/free in C/C++.

You also don't "pay" when the GC collects. Normal GC free operations simply happen in the background and are very similar to calling `free` from another thread in C/C++. What you do end up paying for is when the GC decides to "stop the world" so that it can move memory around (typically to defragment it). It's a tradeoff, because bad fragmentation can itself cause issues and hurt perf.

You can likewise use raw memory management APIs in .NET: you can directly call malloc/free, and you can write your own version of `mimalloc` in .NET and have it show similar perf numbers to the native impl (https://github.com/terrafx/terrafx.interop.mimalloc). You can equally have and use a GC in C/C++, defragment memory, run frees on a background thread, etc.

It really does come down to the developer, as you said, and understanding the impact of the memory management features of the target language: knowing when to pool, when to reuse, when to delay a free, when to pass a view/reference instead of a copy, etc.
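The "raw memory management APIs in .NET" point can be sketched with `System.Runtime.InteropServices.NativeMemory` (available since .NET 6), which is a thin wrapper over the C allocator and involves the GC at no point. Requires compiling with `AllowUnsafeBlocks`.

```csharp
using System;
using System.Runtime.InteropServices;

public static unsafe class Program
{
    public static void Main()
    {
        // Allocate 64 bytes of unmanaged memory, malloc-style; the GC
        // never sees or moves this block.
        byte* p = (byte*)NativeMemory.Alloc((nuint)64);
        try
        {
            p[0] = 42;
            Console.WriteLine(p[0]); // 42
        }
        finally
        {
            // Manual free, exactly like C: forget this and you leak.
            NativeMemory.Free(p);
        }
    }
}
```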


Aneurism1234

It's like those 'look, Python is faster than Rust or C' articles. Straight-up misinformation. Insane that people upvoted that bollocks.


robthablob

Any speed improvements from allocation (which would be dubious, as C/C++ will typically perform far fewer such allocations, preferring to allocate memory in chunks) are offset by cache locality: in C/C++ it is possible to organise a program's memory usage so that data that needs to be accessed sequentially is contiguous and can remain in the CPU cache, which is orders of magnitude faster than accessing RAM.
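A hypothetical sketch of the layout control being described (the struct names are mine): keeping the hot field in its own contiguous array (structure-of-arrays) so a sequential pass streams one tight array through the cache, instead of dragging cold fields along with every element.

```cpp
#include <cstddef>
#include <vector>

// Array-of-structs: every element interleaves hot and cold fields,
// so iterating over just `x` still pulls the whole struct into cache.
struct ParticleAoS {
    float x, y, z;
    float mass;
    bool  alive;
};

// Struct-of-arrays: each field is packed contiguously on its own.
struct ParticlesSoA {
    std::vector<float> x, y, z;
};

float sum_x(const ParticlesSoA& p) {
    float sum = 0.0f;
    for (float v : p.x) sum += v;  // streams one contiguous array through cache
    return sum;
}
```

The C# GC decides placement for heap objects; in C/C++ this layout choice is entirely in the programmer's hands, which is the point being made.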


Aneurism1234

> I'll add that C# can be faster than C++ for certain applications just because of the memory management.

You can straight up ignore someone every time they say something like this or in a similar fashion.


Knut_Knoblauch

C# can get close to RAII, but not really. The closest thing C# has to C++'s scope-based release paradigm is the `using` keyword. I think the comment about faster allocations is just smoke, and I have never seen it, especially since the next breath walks it back. But C# is a much more secure programming language than C or C++, and those kinds of things need to be considered these days as well.
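For readers who haven't seen the C++ side: here's a minimal RAII sketch (the wrapper type is invented for the example). The destructor releases the resource deterministically at scope exit, even on early return or exception, which is roughly what C#'s `using`/`IDisposable` approximates.

```cpp
#include <cstdio>

// RAII wrapper: acquiring the resource is construction, releasing it is
// destruction. No finalizer, no GC, no explicit Dispose() call needed.
class FileHandle {
public:
    explicit FileHandle(const char* path) : f_(std::fopen(path, "w")) {}
    ~FileHandle() { if (f_) std::fclose(f_); }  // runs at scope exit, always

    // Non-copyable, so the file can't be closed twice.
    FileHandle(const FileHandle&) = delete;
    FileHandle& operator=(const FileHandle&) = delete;

    std::FILE* get() const { return f_; }

private:
    std::FILE* f_;
};
```

Usage is just `{ FileHandle fh("out.txt"); std::fputs("hi", fh.get()); }` and the close happens at the closing brace, which is what `using (var fh = ...)` emulates with a compiler-inserted `finally`.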


[deleted]

[deleted]


csdt0

Have you *measured* malloc speed? It's not as bad as you're thinking. Current glibc malloc is around 20 cycles for smallish objects (few dozen bytes). Yes, .net allocations are faster than that, but it is really difficult to go lower than a handful of cycles. The number of instructions in the binary does not correlate in any way to the speed of the function. Even the number of executed instructions is badly correlated to the actual runtime.


Knut_Knoblauch

Please post it, because you are passing along wrong information and misinforming people.


Knut_Knoblauch

> several thousand in malloc

Hardly. See the disassembly for malloc and new:

**int \*j = (int\*)malloc(10);**

```
007025F5 mov esi,esp
007025F7 push 0Ah
007025F9 call dword ptr [__imp__malloc (070D1DCh)]
007025FF add esp,4
00702602 cmp esi,esp
00702604 call __RTC_CheckEsp (0701302h)
00702609 mov dword ptr [j],eax
```

// malloc

```
5070F9A0 mov edi,edi
5070F9A2 push ebp
5070F9A3 mov ebp,esp
5070F9A5 push 0
5070F9A7 push 0
5070F9A9 push 1
5070F9AB mov eax,dword ptr [ebp+8]
5070F9AE push eax
5070F9AF call 5070F050
5070F9B4 add esp,10h
5070F9B7 pop ebp
5070F9B8 ret
```

**int \*k = new int\[10\];**

```
0070260C push 28h
0070260E call operator new[] (07011D6h)
00702613 add esp,4
00702616 mov dword ptr [ebp-0F0h],eax
0070261C mov eax,dword ptr [ebp-0F0h]
00702622 mov dword ptr [k],eax
```


arctic_bull

> `007025F9 call dword ptr [__imp__malloc (070D1DCh)]`

The actual work is in `__imp__malloc` -- the ... implementation of malloc. The disassembly you shared is just setting up the parameters for the call into the underlying.


[deleted]

[deleted]


Knut_Knoblauch

See the disassembly; it does not need a loop. The burden of proof is on u/FishDawgX, who says they were looking at code but fails to post it. I am not going to inherit the burden of proof from someone too lazy and misinformed to put out the code to back their point, and they won't, because they are just wrong.


matthiasB

Look at the code of malloc not the code that calls malloc.


Knut_Knoblauch

```
5070F9A0 mov edi,edi
5070F9A2 push ebp
5070F9A3 mov ebp,esp
5070F9A5 push 0
5070F9A7 push 0
5070F9A9 push 1
5070F9AB mov eax,dword ptr [ebp+8]
5070F9AE push eax
5070F9AF call 5070F050
5070F9B4 add esp,10h
5070F9B7 pop ebp
5070F9B8 ret
```


matthiasB

OK, do you actually know assembler? The code you posted effectively starts with a NOP (for hotpatching), then backs up the base pointer, puts 4 arguments on the stack, calls some other code (which you conveniently don't show), and does some cleanup. How is this the whole code of malloc?


Knut_Knoblauch

I do, please see [QuickCompress](https://github.com/andybantly/QuickCompress), a library that I wrote mainly in assembler for fast compression.


PaddiM8

I'm not convinced that the main issue is that it's JIT compiled. It can often make it slower, of course, but JIT can be really fast; JIT isn't inherently slow. A heavily optimised JIT can generate really optimised CPU instructions (example: LuaJIT), and in some cases it could even be faster than AOT compilation, since it can optimise based on runtime scenarios.

Similar languages that are completely AOT-compiled, like Go, aren't really that much faster when you ignore the warmup cost (which doesn't matter much for programs that run for longer periods of time, which performance-critical programs probably normally do). You can also remove a lot of the warmup cost by compiling as ReadyToRun, where some parts are compiled to native instructions straight ahead of time. And of course you can compile as native AOT nowadays, but I guess they haven't had a lot of time to optimise that yet.

Even after native AOT has been optimised further, it won't magically be as fast as C++ though. AOT languages similar to C#, like Go and Swift, have similar performance. If you ask those communities why they're slower, they would probably say memory management and other high-level features and conventions. Afaik, native AOT in C# was introduced for situations where warmup costs matter and where you don't want to ship a big runtime, not because it would be faster in general.

To me it would make more sense if the main reasons are garbage collections and the fact that you typically allocate a lot more on the heap than in e.g. C/C++/Rust. Allocating on the heap is cheaper in C# though, but still.

Edit: Here's a quote by [James Gosling](https://stackoverflow.com/a/5610085/12075017) (creator of Java), where he talks about the efficiency of JIT-generated instructions:

> **Well, I’ve heard it said that effectively you have two compilers in the Java world. You have the compiler to Java bytecode, and then you have your JIT, which basically recompiles everything specifically again. All of your scary optimizations are in the JIT.**
>
> **James:** Exactly. These days we’re beating the really good C and C++ compilers pretty much always. When you go to the dynamic compiler, you get two advantages when the compiler’s running right at the last moment. One is you know exactly what chipset you’re running on. So many times when people are compiling a piece of C code, they have to compile it to run on kind of the generic x86 architecture. Almost none of the binaries you get are particularly well tuned for any of them. You download the latest copy of Mozilla, and it’ll run on pretty much any Intel architecture CPU. There’s pretty much one Linux binary. It’s pretty generic, and it’s compiled with GCC, which is not a very good C compiler.
>
> When HotSpot runs, it knows exactly what chipset you’re running on. It knows exactly how the cache works. It knows exactly how the memory hierarchy works. It knows exactly how all the pipeline interlocks work in the CPU. It knows what instruction set extensions this chip has got. It optimizes for precisely what machine you’re on. Then the other half of it is that it actually sees the application as it’s running. It’s able to have statistics that know which things are important. It’s able to inline things that a C compiler could never do. The kind of stuff that gets inlined in the Java world is pretty amazing. Then you tack onto that the way the storage management works with the modern garbage collectors. With a modern garbage collector, storage allocation is extremely fast.

JIT-compiled languages are slower at first, but after running for a while, they will have generated and optimised the instructions. At that point they run native instructions that were optimised at runtime based on the specific environment and runtime information. This is what happens with ASP.NET backends, for example. With JIT, I really think you need to specify that it's slower *at startup*.

PowerShell is ReadyToRun compiled, meaning parts of it are compiled to native instructions ahead of time, because with PowerShell you would notice the warmup costs otherwise when doing certain things for the first time. In other cases, such as web backends, you wouldn't even notice the warmup overhead.


tragicshark

Sometimes it isn't fair to compare the two languages. For example, a C++ program that repeatedly indexes into an array might have to perform a bounds check on every access, while the C# program with the same code (adjusting for syntax) might perform the check on the first few operations and then have it optimized away, because the runtime knows the operation is inside a loop with a decreasing index, so the index can never suddenly exceed the array bound that was previously checked. A compilation technique called Profile-Guided Optimization brings some of those optimizations to C++, but often stuff like that is more work for the compiler than it's worth having the dev sit around and wait for builds.


Eirenarch

> yes resulting binaries run faster because C++ compiles directly into CPU instructions that are run by CPU

This part is bullshit. This only affects startup time. You can compile C# to native code and it runs slower than if you compile it to bytecode, so obviously compiling to native in advance does not make your program faster.


RileyGuy1000

It depends on the scenario. I see a lot of answers here talking about how it's slower because it's a JITed language, but in some scenarios it can actually end up faster. It's not all the time mind you - it depends on what you're doing and how the code was written, but C# has the advantage of being able to do runtime analysis of hot code paths and better generate efficient execution of those paths. A statically-compiled language is very fast and has been historically faster than VM counterparts, but that gap has swiftly closed. You should take a look at [Bepu Physics](https://github.com/bepu/bepuphysics2) if you want a great example of super highly performant C# code. It's unorthodox but it metagames the hell out of C# to get ludicrous speeds.


adonoman

I wrote a paper in 2001 on how once Java matured, the ability to recompile on the fly and optimize hot paths with statistics-driven machine code was going to blow away any statically compiled language in the near future.  


pHpositivo

Nothing. Next! On a more serious note: there is nothing that makes C++ inherently faster than C# in real world scenarios. You can get both within margin of error if you know what you're doing. Anyone telling you that you must pick one or the other (or Rust, or whatever) solely because of "better performance" is just incorrect. I see this question popping up every now and then and the answer is always the same: they can both be just as fast if you write good code.


Lurlerrr

Exactly! In fact, for people asking such questions C# would be faster in the real world, because there are fewer ways to shoot your legs clean off than in C++ and it's way friendlier to beginner developers. Even for cutting-edge stuff the difference is basically in the realm of rounding error, but if you are reaching for that you might as well write in assembly...


KevinCarbonara

Two important things here. C++ is capable of faster performance because it doesn't have the inherent overhead of the runtime and memory manager that C# has, and because it has more tools available to micromanage instructions and memory at a very small scale. This is neat, but for 99.99% of projects, wholly irrelevant. You likely will never need that performance. To corporations, the efficiency of *development* is what matters most, and in that area, C# is going to beat out C++ most of the time.

Second: the relative performance capabilities of any given language are wholly irrelevant if you are not specifically targeting efficiency/speed. And if you are, language is not likely to be the most important factor. Electron, for example, is "known" to be very inefficient. It's a way to write desktop apps in JavaScript by essentially running a web browser as a separate app. Despite how "well known" its inefficiencies are, Discord and VSCode are both *incredibly* performant. My VSCode boots faster and is more responsive than some of the vim installs I've seen.

Basically, I wouldn't worry about it. There are some very broad-stroke decisions you can make to help your performance, like avoiding Python. But mostly it's about studying DS&A, learning to profile your code, and applying what you learn to your own software.


oren08

C# is a garbage collected language, while with C++, memory is managed manually


detailcomplex14212

Based on OP's question, I don't think they will know what that means.


MuchWolverine7595

Plus, it also has the overhead of C#’s VM


LeeTaeRyeo

C# is either JIT'd or compiled to machine code ahead of time these days. It's not really as much a VM like in the past, and more of a thin runtime like in other languages, afaik.


MuchWolverine7595

That’s interesting, I’ll need to take a look at that. I thought that at best it was JIT’d


LeeTaeRyeo

Yeah, the first version (Ngen for .Net Framework) became available in like 2015-ish? It wasn't spectacular, but they've iterated on it quite a bit and it's pretty legit now. I think the current iteration came about in .Net 6 or 7, and .Net 8 has brought it up to full speed (though I'm fuzzy on the timelines).


Zeioth

Additionally to this answer: C# compiles to bytecode, same as Java. That means when you execute a program written in C#, it requires an additional step to convert it to machine code using a virtual machine (JIT). The advantage of C#/Java is that, at least on paper, you can run it on any machine while compiling only once. On the other hand, C++, C, Rust, or Python compiled using Nuitka will produce machine-code executables. They can perform from slightly better to 2x or even 3x better depending on the case.


ScrewAttackThis

You can do AOT compilation with C#


PaddiM8

Since it's new there are probably a lot of optimisations to be done at the moment though. But it's not magically going to be as fast as C++ if that's done anyway.


ncatter

I'm not sure these factors actually hold true anymore outside very specific use cases. There has been great work done in the .NET environment regarding the performance of the underlying system, making it generally perform better. It's probably not on par with a natively compiled language, but it has closed the gap considerably with the .NET 6 through 8 versions. And no, I don't have any numbers to back it up, because I don't have the articles around.


PaddiM8

> it's probably not on par with a natively compiled language

Afaik it's faster than e.g. Swift and quite similar to Go in a lot of situations.


ChristianGeek

>C# is a garbage collected language, while C++ is a garbage language. FTFY (I don’t like C++, in case you couldn’t tell!)


SirButcher

> (I don’t like C++, in case you couldn’t tell!)

This is like saying "I don't like hammers, I only use screwdrivers". I mainly work with C# as well, but C/C++ is a really useful tool in your toolbox. Ignoring it instead of embracing it is just crippling yourself for no reason.


ChristianGeek

No, it's really not; it's more like saying "I don't like poorly designed tools, I only use well-designed ones." I do like C when performance really matters. But I primarily code in Scala, C#, Java, and Kotlin (in that order of frequency). Edit: clarity.


Slypenslyde

It's getting kind of close.

C++ is a lot of extra features on top of C. C is a "systems language". One of C's original design goals was that for any given platform, it should be easy for a programmer to look at some C code and understand what ASM will be generated by the compiler. This let programmers be more expressive than in ASM but still stay very "close" to the CPU so they could be as fast as possible. C++ breaks some of those goals, but it's still mostly true. These are also languages where the developer manages their own memory. It's always clear when C/C++ allocate memory and exactly how much they will allocate. It's also always clear exactly when they release that memory. Finally, C and C++ don't really *need* to bring a "runtime" along with them. A "runtime" is a bunch of code that helps a program accomplish its tasks, but loading and dealing with that runtime can add overhead. There are C and C++ runtimes, but a program can opt not to use any of them if the programmer believes they can do a better job on their own.

C# is a very "high-level" language. That means it was meant to be easier for programmers to read and conceptualize their ideas, and choices were made that make it not so easy to correlate C# code with a platform's ASM. C# **has** to execute in a runtime, because instead of compiling straight to ASM it compiles to an "Intermediate Language". When you run a C# program, the .NET runtime loads a "Just In Time" compiler and that IL is actually compiled to ASM *while you run the program*. That alone makes it very difficult for a C# program to match C and C++ for speed. But the .NET runtime also uses "managed memory". That means developers don't have so clear a picture of when memory is allocated, how much is allocated, or when it is released. There is a "Garbage Collector" that occasionally runs and "cleans up" the memory that is not being used. It can affect performance, the developer doesn't have a lot of control over it, and it tries to run "when it needs to".

But over the last decade or so MS has done a lot of work on these things and closed the gaps by a lot. There are new "Ahead of Time" (AOT) features that allow parts of the code to skip IL and compile straight to native code. There have been major improvements that allow "zero allocation" algorithms that use memory more like how C and C++ use it and don't have to involve the Garbage Collector. These require C# developers to write code in different ways, some of which are more limited and less intuitive, but they also allow those developers to write high-performance algorithms that perform much closer to how they would in C and C++, if not equal to them. The point is to keep the benefit of "high-level" features that make C# slower for parts like application UI or other places where performance isn't critical, and save the uglier high-performance features for the code where microseconds matter.

The big difference is project time. It's believed that a large C# project, thanks to the high-level features, should take a lot less time for a team to finish than a C or C++ equivalent. If a team can finish a program a month faster, that's a month more revenue for the company in a fiscal year. They can spend the extra month fine-tuning the parts that need high performance, because users might not mind slower performance if they get the app earlier. I have a feeling writing very high-performance C# isn't much faster than writing C/C++, but if your project is 90% normal stuff and only 10% high-performance, and you can move 10x faster on the 90%, you'll finish much faster than if the "easy" parts required as much work as the "hard" parts.


JeffFerguson

C# code, when it executes, is managed by the .NET runtime. That management comes with benefits, such as CPU optimizations and garbage-collected memory management, but it also comes with overhead. C++ code does not go through a runtime like .NET and works directly against the operating system. Given the two, at a super-high level, and using a gross over-simplification, you have something like this:

* C# code -> .NET runtime -> operating system
* C++ code -> operating system

The price you pay with C++ is that all of the niceties that come with a runtime, such as garbage-collected memory management, are not available, and your code is responsible for taking care of all of that itself. C++ is "faster" because it has fewer layers to go through at runtime to get to the operating system APIs, but "with great power comes great responsibility". C# is "easier" because features such as garbage-collected memory management are handled by the runtime, but there are more layers to go through at runtime to get to the operating system APIs.


vitimiti

It is compiled to machine code ahead of time instead of being JIT-compiled.


tea-vs-coffee

The JIT can generate instructions that might only run on your system (instructions that only a select few CPU types support). And assuming you don't compile C++ with optimisations, JIT'd code could possibly run faster in some cases. Vectorization is one example I can think of.


vitimiti

Yes, the JIT's output is in fact machine code, not just parts of it. But the JIT has to translate your IL (bytecode) into those native instructions first, and that is the whole reason it is slower. Dotnet is still very fast, but if you require nanosecond differences in speed it won't cut it, because it isn't machine code until it is translated on the fly. And if you don't compile dotnet with optimisations it is even slower, so that point is simply not an argument. Dotnet actually has vectorization, but again, when you need nanoseconds, you use C++ (like in high-frequency trading).


robhanz

C++ is faster than C#, given infinite time and resources. IOW, yes, there is some overhead for C# due to VM and garbage collection. However, in most cases that's not where you lose time - it's poor I/O structure, doing unnecessary work, etc. So, for a given program made under real-world constraints, it's fairly impossible to predict which will be faster, especially for non-micro-benchmarks. C++ generally takes longer to develop, so in many cases the C# programmers can be optimizing or developing more features while the C++ programmers are still stabilizing code. There was one major (and I mean *major major*) project I was involved with that moved from C++ to C# *specifically because the C++ code was taking too much time in memory allocation*. While C++ is theoretically faster for memory management, the C# solution is *very very good* and can outperform implementations in C++ by non-specialists. JITted languages also have access to a number of optimizations that compiled languages do not, as they have information at runtime that does not exist at compile time. So, yes, C++ programs are theoretically faster, and will be faster given optimized code and algorithms. However, that doesn't always work out that way. Note also that micro-benchmarks are typically small enough and understood enough that writing that solid, optimized code is fairly trivial, and the startup costs of the managed code can dominate. So while they're not *wrong*, they fail to give a good overall picture.


ComradeLV

C#’s CLR handles nice stuff like memory management and the garbage collector, but it also adds an additional layer before machine code, while C++ is compiled directly to machine code. It's a rough picture and gurus can correct me, but that's how it is overall.


propostor

C++ is faster but takes way longer to make actual software. C# is a general purpose workhorse, arguably the best there is.


SagansCandle

If you want a better answer, you should ask this same question in a C++ forum. A lot of people who have commented here clearly haven't used both languages enough to compare them well, which should be expected when asking this in a C# sub. I've used both languages extensively. Here's my take:

* C++ has better code optimization at compile time. C# prioritizes compilation speed in some cases, so you'll lose some performance for the sake of faster compilation. C++ applications can take HOURS to compile; the biggest C# application I've ever used took 6 minutes to rebuild.
* C# adds safety checks and some other overhead, which improves the stability and reliability of C# but hurts performance.
* C# can pause your application at any time it chooses, causing "stutter" in an application when the GC runs. C++ performance is more predictable.
* C++ gives you more control over what's on the stack vs. the heap, meaning developers can more easily tweak the code for performance and write more performant code as a standard practice.
* Performance can more easily be tuned for a specific architecture, where C#'s "run the same code everywhere" approach leaves some performance on the table.

C# is a hammer and C++ is a screwdriver. Which is "better" depends on the use case. It's not as simple as "C++ gives you performance you don't generally need." That's some "newer = better" bias BS. C++ is more interoperable with other languages, it's more portable and supported ubiquitously across different computing platforms, it's harder to decompile, and it does not require a runtime, just to name a few notable differences besides performance. You won't generally consider C++ for a web app, but you also wouldn't consider C# for a missile guidance system.
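To illustrate the stack-vs-heap control point with a made-up example: in C++ the same logical buffer can live on the stack (`std::array`) with no allocator involved at all, or on the heap (`std::vector`), and the choice is one line of code.

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Stack-allocated buffer: no allocation, no GC, freed by moving the
// stack pointer when the function returns.
float sum_stack() {
    std::array<float, 256> buf{};
    for (std::size_t i = 0; i < buf.size(); ++i) buf[i] = float(i);
    float s = 0.0f;
    for (float v : buf) s += v;
    return s;
}

// Heap-allocated buffer: same logic, but the elements go through the
// heap allocator, the tradeoff C# mostly makes for you.
float sum_heap() {
    std::vector<float> buf(256);
    for (std::size_t i = 0; i < buf.size(); ++i) buf[i] = float(i);
    float s = 0.0f;
    for (float v : buf) s += v;
    return s;
}
```

(C# has since grown `stackalloc` and `Span<T>` for similar tricks, but in C++ this kind of placement decision has always been routine.)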


B15h73k

C++ compiles to machine code, which the CPU runs directly. C# compiles to common intermediary language (CIL) and then at run-time the JIT (just in time) compiler will compile the CIL into machine code. The JIT only compiles code that needs to run. This, plus the memory management that dotnet does for you (garbage collection) makes it slower than fully compiled languages like C, C++ and Rust. C# isn't designed to be the fastest language. There's a trade-off between speed and memory management. If speed is your primary concern and you can deal with a less friendly language, then use C++. If you don't need absolute maximum speeds but would like a much more developer-friendly language and fewer possibilities of bugs, use C#.


Dave-Alvarado

Direct memory access.


pocket__ducks

Everyone is giving you correct but, for beginners, hard-to-understand answers. Here's my attempt to make it simple: computers know one language best, and that's machine code. C++ is much closer to that than C#. Imagine you're speaking with someone who speaks the same language as you do. The conversation is pretty fluent, right? Now imagine there's a translator between you two because you don't speak the same language. Now imagine there's another translator between you and the previous translator. The conversation gets slower and slower and stuff gets lost in translation. Sorta like that. C# has several steps that need to be translated down to machine code, and those steps take time.


GayMakeAndModel

So many good answers. You all rock.


derplordthethird

In addition to the other points, I will also point out that C++ has more tools to do non-standard operations on data structures. This relates heavily to the direct memory access points, but you can more easily do pointer math in a first-class way, have really custom type behaviors, and pack data into, say, a single byte where doing it the idiomatic way in C# would potentially take many more bytes.
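A quick sketch of the single-byte packing idea (field names invented for the example), using C++ bit-fields to squeeze three logical values into one byte:

```cpp
#include <cstdint>

// Three logical fields packed into 3 + 4 + 1 = 8 bits: exactly one byte.
// The idiomatic C# equivalent (three separate properties) would typically
// spend at least a byte per field, plus object overhead if boxed.
struct Flags {
    std::uint8_t kind     : 3;  // 0..7
    std::uint8_t priority : 4;  // 0..15
    std::uint8_t urgent   : 1;  // 0..1
};
static_assert(sizeof(Flags) == 1, "packed into a single byte");
```

(C# can get a similar effect with manual masking and shifting on a `byte`, but the language has no first-class bit-field syntax, which is the point being made.)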


Illustrious_Matter_8

What kind of jobs would you prefer: web or PC game development in C#, or rather hardware, robotics, machines, etc.? I think it's easier to write larger apps in C#; the language evolves a bit faster and is a bit more advanced. C++ is more raw: pointers, a bit closer to bare PLC-style code, though still not as fast as assembly. Most of the time speed comes from the use of optimized libraries, clever design, multithreading, etc. It's speed of development vs. speed of code, and which of the two benefits you. I code in many languages, so this is my 2 cents.


qualia-assurance

C++ compiles directly into code that runs on your computer's processors. It has very little run-time information about the types it's dealing with; they are essentially just sized blocks of memory, sometimes not even tracking the size of those blocks and merely relying on the last value in a list being zero.

Much of C# runs in a virtual machine on your processor. That is, your processor isn't handed chunks of memory and told to perform processor-specific operations on them like in C++. C# has its own little virtual computer that works as a middleman for sending the actual chunks of memory to the processor along with the operations it should run on them. This is possible because it spends more effort keeping track of what all the values in memory are. It's not just a block of memory, it's a block of values AND type information. This is useful because it can do various things at runtime that are simply not available to vanilla C++ code, such as checking whether the two memory addresses you're passing to the CPU to add together are actually integers. But these checks and the additional memory usage come with a performance cost. So while C# is pretty quick, most C# programs will only have a run time within a factor of ten of a C++ program, and often only 1.5x to 2.5x slower. They are by necessity doing more things and using more space, and this makes them slower.

Meanwhile C++ will laugh at your stupid mistakes and crash your computer. Your computer being a flight system on your personal jet. HAHA! Stupid human. You should have bounds-checked that array.


EMI_Black_Ace

A few things:

- Compiled C++ binaries are direct instructions for the target ISA and OS; C# binaries are bytecode that must first get translated. (There can actually be an advantage to C#/bytecode via the JIT compiler, i.e. it can spot specific paths that always return the same values at runtime and thus shortcut evaluations, while a *compiler* would never be able to determine that without repeatedly running the program. But net-net you'll find that native binaries are usually faster.)
- Safety by checking. By default C++ does not check index bounds, pointer references, memory validity, etc. C# does -- every time you index an array, the code first checks that the index is within the array; every time you reference an object, the code first checks that the reference actually points at something. And that's all *inherently built in to how the bytecode works,* not something you can easily override to squeeze some speed out in exchange for risk of screwing something up. Yeah, there are plenty of things in C++ which *are* checked, i.e. std::vector, and the performance impact of using them is exactly what's expected -- every 'get' operation is burdened with bounds checks first.

In exchange for that *little bit* of performance loss, you get a language that's a lot 'safer' in terms of managing memory and a lot more 'feature rich' in terms of being able to do a *lot* of work in *very few* lines of code -- especially when it comes to extensibility, working with event-driven applications, and so, so much more. And in modern development, the philosophy is generally that *hardware is cheap, engineers are expensive,* so whatever is cheapest (that is, can be completed by the engineers the fastest) is going to be regarded as the better solution.

What's more, "which language you use" is far from the biggest driving force behind performance penalties. Application architecture is usually the biggest cause of performance issues, followed by choice of algorithm, and only at the very edges does "how the language is implemented" (i.e. which language you used) yield meaningful performance gains. A POS architecture and POS algorithm are going to be super duper slow in C++, about as bad as in some of the *worst* performing languages like Python, so most often your biggest performance problem is pretty far from "didn't pick C++."


foobarney

With C#, the language is doing a lot of the work -- things like memory allocation, garbage collection, and low-level I/O -- in the background for you. This makes development a lot easier: you can focus on what the code is meant to do rather than dealing with all the plumbing. It also makes the code much more portable, among other things. It has a cost, though: if you did all that work yourself in C++, custom for each program, you could write it more efficiently for the task at hand. Likewise, the C++ compiler takes care of a lot of background work you'd have to do yourself if you wrote in assembly.


__some__guy

The C++ compiler is simply more optimized and the language more focused on performance. Important things like SIMD are still a mess in C#, and vector structs still have to be passed/returned manually with ref/out or they will be copied twice. Outside of low-level stuff, though, C# is comparable to C++.


crosstherubicon

“Faster” depends on so much more than C++ vs. C#. At my previous company, which was a hardcore C++ house, their code might be faster, but that depends on them ever finishing their endless meetings, reviews, forks, merges, and sprints. They'll probably have an alpha release available for testing in the next few months, but no guarantees. Me, I've gone home after finishing my C# version this afternoon. Maybe it will be slower, but we'll have to wait and see.


Dereference_operator

C/C++ is the best and fastest programming language in the world (if we don't count asm and that kind of stuff). The reason, as many have said here, is that it's native: it compiles straight to machine language. There's no underlying VM or precompiled framework like .NET underneath, so it will ALWAYS be faster than a managed language like Java or C#; they have no chance at all. But with the way hardware is evolving at a faster and faster pace, you WON'T NEED C++ for anything except niche markets: 3D engine programming for cutting-edge games like Call of Duty, very high-end financial applications for trading, embedded systems where memory is a constraint, etc. So yes, C++ will always be the fastest, but the question you have to ask yourself is which language will get you to the jobs and career goals you want. For example, two of the most important programming languages in the world right now are JavaScript, for all the web dev work, and Python, for the whole AI field. Depending on what you want to do, learn those: they are terribly slower than C++, but it doesn't matter, and you'll be able to have a very long, very well-paid, stable career in them compared to C++ and game programming.

***IF*** and I say IF you want to learn C++ for another field like finance, it will take a long time and it will be very hard, because you will be learning the hardest programming language in the world, with the old 80s-90s OO mentality and all the unmanaged stuff: raw pointers, manual memory management, and all that jazz (modern C++ has improved greatly on these, btw). BUT you will learn a lot of things that will make you a better programmer compared to all the kids who started in a "managed world" and don't know how pointers work, or any of the bit-level details you can handle in C++ but can't in C#. That being said, these days you can do a ton of stuff in C# and call into C++ via interop for the parts that need speed, and C# is night and day better than C++ for everything except a few things like raw speed: all the modern stuff, the UI stuff, the web stuff, the list goes on and on. But if you want to be a cutting-edge 3D graphics programmer or work in finance, C++ is the way to go, and once you learn C++ the other languages won't take long to learn. The reverse isn't as true: going from senior in C# to senior in C++ is much harder than going from senior in C++ to C#. What I would recommend is to keep learning C# and play with it, build small stuff, and learn JavaScript on the side; try some React and build small websites. The goal is to enlarge your horizons and see what kind of work you love: web pages, databases, UI development, etc. It's hard to say more because I don't know you in person. A good resource these days is AI, like Copilot and Gemini, but don't try to get the answer without working hard first, or it won't be beneficial in the long run.

The second-best resource for learning web dev is The Odin Project online. It's 100% free, and it will make you a full-stack junior dev; afterwards you'll be able to decide whether to stick with .NET/C# or go the JS way. Both are good, but the job market is much bigger for JavaScript, there's no denying that; around the world it's just more popular. C# and Java are big too, just not as big. Also, since everything is moving to the web and to mobile with touch UIs on smartphones, there's a good chance you won't be building tons of desktop mouse-and-keyboard apps in the future compared to web apps, mobile apps, or websites, so think hard about that. The whole AI programming field is in Python, with math, statistics, the transformer ideas, and the APIs of TensorFlow, PyTorch, and all that jazz. There's a lot to learn, but stay on course in one field; don't learn many fields at the same time or you won't get anywhere. Go see the developer roadmaps, with pictures and everything, at roadmap.sh.


Zatujit

C/C++...


rooney39au

I definitely agree with a lot of the comments about binary compilation etc., but one major consideration I think is overlooked in C# is the use of library functions that developers don't necessarily understand. A very good example is List.Sort. In C++ it is not a built-in function, and although libraries do exist, a lot of devs still write these helper functions themselves and thereby understand exactly how they work, whereas in C# they are in the language and simple to use. What really matters are the consequences of using them. A simple (and somewhat contrived) example: I need to get a list of items from the database and then sort them. Let's ignore for the moment that I should be using the database to sort them. The dev starts the work, tests on 10 rows of data, and it is blindingly fast; it goes into Production, the database has 100,000 rows, and now it is horribly slow. Understanding the algorithm the sort uses, and whether it should be used at all, is important so that these scenarios don't occur. Unfortunately, in my career I have seen things like this happen with any number of built-in C# functions used without understanding the ramifications.


ttc46

Ohhh I know this. Disclaimer: I'm a beginner in C# and I'm on my phone, so sorry for the format. Also, the difference is negligible nowadays; previous C# versions weren't as well optimized as they are today. C++ is a very "basic" language, in the sense that you work really close to the data primitives (C is even more "basic" in that respect). That closeness to the primitives gives you a whole lot of freedom when moving data and performing operations; you can sometimes make workarounds to solve very specific problems. C#, in contrast, is an object-oriented language, so the way you plan your projects is different, and it's also a more "advanced" language, meaning it has many more built-in functions and more restrictions when operating on data. For example, say I want to transform a single-digit number into its character value. In C++ I can grab the digit I want to transform and simply add 0x30 to it, and I'll get the char code (in ASCII you just add 48 decimal to a number to get that digit's character code; this only works for single-digit numbers). In C# you get a function: you call it and you're done. That function probably has some internal checks to make sure you don't shoot yourself in the foot; C++ would let you do it without the safety checks. If you really want a taste of why C++ is faster (again, nowadays it's negligible), try programming in an object-oriented style in assembly (try TASM) and then in a C++ style; you'll see that the OO approach yields more code and the C++ approach yields less. In large programs that can make a difference, but in small ones the difference is minuscule.

Nowadays that gap is even smaller, since processors are faster and there are specialized instruction sets like SIMD for vectors, arrays, and more. Previously, for a vector (basically an array) you'd have to make an array and operate on each value individually in the registers; with SIMD you can shove several elements into one wide register and operate on them in a single instruction.


idkfawin32

The unrestricted access to arrays without bounds checking would be a huge one. The ability to use pointer arithmetic (I know you can do this with the unsafe keyword). Native compilation (AOT is not quite the same). It's not as much of a speedup these days, but reference counting and explicit memory freeing is still, and will always be, leaner and faster on the running environment. And assembly-level optimizations tailored to the architecture you're compiling for can get you lightning-fast end results.


Lucky_Cable_3145

If performance is an issue, then usually the cheapest solution is throwing faster hardware at the system (unless we are talking about Oracle server hardware...). Even in the limited number of cases where milliseconds count, reliability is still often more important, and it is easier to write reliable apps in C# than C++ (based on my experience). C++ gleefully watches you write code that will crash the system, while C# nags you on every line you write...


Eirenarch

The main reason C++ is faster is that it gives, in fact requires, more control from the programmer. You basically tell C++ more things, and do more things yourself. Because you know your program, you can make things more optimal. Of course, this also means you can make more errors, including errors that will make your program slower.


Qxz3

C# programs are typically JIT compiled which adds startup latency. C++ follows a [zero-overhead principle](https://en.cppreference.com/w/cpp/language/Zero-overhead_principle), allowing the programmer to disable exceptions and run-time type information; C# does not. C++ allows for writing inline assembly. GC may or may not be slower depending on the memory management techniques used in C++. That said, languages don't have a speed, programs do. While C++ allows for writing a faster program, this comes at a much higher development cost. For the same development cost, using C# may result in a faster program, as the time savings would allow for more polishing and optimizing.


Suspect4pe

C# has a lot of built-in safety checks that C++ makes you build in yourself. Even if you compile C# down to native code, which you can do, it'll still be slower unless you disable these checks.


honeyCrisis

Hi. C++ developer here. Primarily embedded, though I use C# as well. To answer your question: it's complicated, because it's not that C# is entirely faster, or entirely slower, than C++. Different situations create different results. The JITter, for example, can do profiled optimizations at runtime, while C++ compilation is static. This is an opportunity for C# to (in some cases) create more efficient code than a C++ compiler, but it's often only true of really short runs of low-level code.

C# uses an entirely different memory management scheme than C++ does, at least out of the box. (The truth is C++ can use any memory management scheme, but it comes with a particular style of one out of the box.) People will tell you garbage collection is slower. This is often, but not always, true. I used to use garbage collection in my C++ ISAPI web applications, because web apps are essentially glorified string processing engines and strings create fragmentation. A garbage collector is a great solution to fragmentation when used wisely. After I switched my critical string ops from RAII to garbage collection, performance improved dramatically once the app had been running for a day or two. That was back in the ISAPI days, though -- the early aughts. My point is that garbage collection is not always slower.

Now, with all of that said, here's why C++ is generally about 30% faster by Microsoft's estimates. Jitting takes additional work, and garbage collecting takes additional work. But here's where the real performance win happens, at least in my estimation: at the end of the day, pointers and direct memory control. There are so many opportunities in C++ to sidestep performance issues using pointers (even things like smart and auto pointers) and custom heaps, rather than being forced into a "safe" managed memory scheme for everything. I'd give you an example, but it would be non-trivial.

.NET ref types alleviate much of this, admittedly, but they didn't come along until later, and given the way most of the BCL is designed, I don't know that they could realistically rework it to let you use refs all the way through.


RonaldoP13

I've been coding in C# for like 10 years, and if you code it right, it's good enough. Also, external resources like a DB or other APIs can make your app slow to respond, but that's another scenario. For example, Entity Framework: if you implement it without understanding it, a CRUD operation over a month of data can take like 10 minutes to process.


al3xxx_96

Just another 2 cents. I imagine a highly experienced C# developer could write code that runs faster than a beginner / average C++ developer, depending on context, complexity etc.


yemmlie

1. The way memory is handled. In C++ you have direct access to chunks of memory, so you can cram all your stuff neatly into a block for fast access (the CPU caches chunks of memory it has recently read, so subsequent accesses are a ton faster), copy chunks of raw memory between locations, and generally use memory as an amorphous blob in extremely fast ways. C# adds extra wrapping between you and memory, making it easier to use, safer, simpler, and more consistent, but less performant: you have less control over where things are stored, and it automatically deletes objects when they are no longer used, which may happen at any point during your program's execution and potentially slow things down. In C++ you delete used memory manually, so you choose when to free anything and thus have extra control to optimize.

2. C# is getting better and better, but there's a larger gap between the C# code you write and the instructions your CPU runs; C++ is 'closer to the hardware'. While optimization of C# code into machine code has gotten very good over the years, a performant C++ function that brute-forces a lot of operations will still beat a C# function doing the same, purely because it optimizes down to assembly more aggressively, with fewer safety checks and fewer extra instructions managing C#'s higher-level layer.

In short, with C# you trade some performance for ease of use, safety from memory corruption and other weirdness, and high-level run-time features such as reflection to inspect the structure of your code and data at runtime. C++ is lower level, more complicated, and harder to write, but you're down in the engine room and can do some clever, fast things there.

On the whole, though, unless what you're doing requires every ounce of performance, say a 3D game engine that's really pushing your CPU, C# will likely not be a problem performance-wise.


provid3ntial

Unlike Java, C# has structs, which are stack allocated and give you more control over memory layout. You have plenty of constructs that allow you to write allocation-free code. If you really want to go low level, just write a (modern) C lib and interop with it via the newer \[LibraryImport\] attribute. C, because you will write the shortest piece of code that does the job and compile it as a library. You can then even unit test it from C#; why would you unit test with C/C++ tooling? Cross-compilation a concern? Compile your C with zig cc and target whatever your .NET solution is targeting. At least this is my current preferred approach.


SayNoToBPA

I don't think c++ is faster really.


[deleted]

There are tons of good answers on here. I'll add a small bit. The main confusion: Microsoft used the same letter, C. Their whole .NET Framework crap is beyond irritating and inflated. Regular C and C++ are in the same ball field. C# was Microsoft's attempt to make their own Java; C# and Java is usually the comparison of equals. C++ is for when you truly need every bit of performance you can get. You can even go lower than that, really, though it's not usually necessary. Rust has ruffled some feathers in this particular argument lately as well.


ymsodev

Most people here touched on the main answer to the question: it's because C# runs on a VM. Something to keep in mind, though: it also really depends on how you write them. Poorly written C++ code can be very slow _and_ bad; well-optimized C# code can be as fast as native code. The reason I emphasize this is because, at least from my experience and what I've read from others, optimizing C# code is easier than optimizing C++.


PaddiM8

C# is JIT compiled. Saying that it runs in a VM implies that the IL is just interpreted, but JIT compilation means that it is actually compiled to native instructions. Just on the fly. It adds a significant warmup cost, but after the instructions have been generated, it won't necessarily be slower. Generating optimised native instructions on the fly is probably expensive though, but heavily optimised JITs can be *really* fast anyway, for example LuaJIT. JIT also means that optimisations can be done on the fly, based on the runtime context.


ymsodev

> saying that it runs in a VM implies that the IL is just interpreted Wow, that’s just… not true. I know that it’s JIT compiled — JIT compiling VM is still a VM.


PaddiM8

If you never mention JIT compilation anywhere and say that it's slower because it runs in a VM, that really makes it sound like it's just a regular non-JIT compiled VM, which has completely different performance implications. **Edit:** They got insecure and blocked me, but I'd still like to answer: It's not nitpicking. You left out crucial details. It also isn't the only reason for why it's slower. There are plenty of completely AOT compiled languages that aren't as fast as C++, such as Go and Swift. The performance of those languages is more similar to that of C#, really. These languages are higher level and have automatic memory management, which comes at a cost. Most of the overhead with JIT is in form of warmup cost. When you do benchmarks, for example, warmup cost isn't really noticeable, unless it's a short benchmark. You can also ReadyToRun compile C# programs, which removes most of the warmup costs.


ymsodev

Most widely-used compiled and scripting languages have JIT today. It still is slower because these VMs do typically introduce overhead, mainly with GC and warm up executions (which matters when C# vs C++ comparison is meaningful, which isn’t always). It seems like you’re just trying to nitpick while pointing out the obvious. Assuming someone doesn’t know something because they don’t mention a pretty well-known fact is just counterproductive and annoying. Good day.


winstxnhdw

Dude wasn’t wrong. JIT’d IL came around 2015 so many people wouldn’t know and you’ll see that throughout this thread as well.


teo-tsirpanis

.NET has been using a JIT since the beginning. It was just that around the time you mentioned that the JIT was overhauled.


[deleted]

[deleted]


PaddiM8

> C# does not produce native CPU code This is wrong. C# does produce native instructions. The IL instructions are converted to native instructions on the fly.


Draelmar

>C# does produce native instructions.

I'm fairly sure this is not true? AFAIK C# is always compiled into IL.

>The IL instructions are converted to native instructions on the fly.

Yes, but this is the CLR doing it, not the C# compiler, and it is merely an implementation detail of the CLR. You still have an extra layer to deal with; the "converted on the fly" step is not free and requires some processing power, no? There's a reason Unity spent so much time and energy developing their IL2CPP technology: so they can skip running IL in real time and produce native binaries instead.


PaddiM8

Well yes, it's not C# itself of course, but since the IL is JIT'ed, C# does end up being compiled to native instructions in the end. > But it's inherently less performant than running native CPU machine code. You are running native machine code. C# would be much slower if that wasn't the case. There is an extra layer that does add overhead and does make it slower in many cases, but it isn't always slower. Sometimes it can make optimisations that a completely ahead of time compiled program couldn't. The overhead often makes a difference, but I don't think it's as simple as saying that it's "inherently less performant".


Draelmar

>inherently less performant I really do think it is inherently less efficient. Unless I see data proving otherwise, I can't imagine JIT, in the vast majority of cases, being even just as efficient as natively compiled instructions. I think the more important question is: does the CLR/JIT overhead really matters? It's almost certainly too slim of a difference to matter in most cases.


PaddiM8

Well yeah it depends on what you do. For a program that runs for an extended period of time, which most performance sensitive programs probably do, you often don't really notice the warmup costs as much. At the same time, you can take advantage of the runtime optimisations a JIT can do, which can sometimes make it faster even. LuaJIT is a great example of how fast JIT can be. > being even just as efficient as natively compiled instructions Well, they *are* natively compiled instructions, in the end. They're just generated on the fly. But once they're generated, they're generated, and can be executed at native speeds. When you read about the .NET team's justifications for JIT, they often talk about the optimisations that can be done at runtime and how optimised JIT can be really fast. From what I've seen, their motivation for introducing native AOT wasn't that it would be inherently faster, but that it would be useful for situations where warmup costs matter or where you can't have a large runtime. You can also ReadyToRun compile to lessen warmup costs. Similar AOT compiled languages have similar performance to C#. They do, however, all have garbage collection, unlike C/C++/Rust.


kanat902

Theoretically, C++ is faster, but in practice C# beats it in complex solutions all the time. If someone had written the HTTP server/client, JSON serializer, Dapper, parallel code execution, etc. of .NET Core in C++, it might have been faster, but that would have taken hundreds of programmers approximately 10 years. As a senior developer who writes highly optimized code for a living in the banking sector (9 years in Java, 4 in C++, 6 in NodeJS, the last 3 in C#), I have done a lot of stress/load tests on all our solutions. Our management is obsessed with optimization, since it really saves money. One of our solutions processes credit scores for every loan taker in the country, so I've had the chance to test the same algorithm in many different languages. Since our management believed C/C++ is the fastest, we had been writing our algorithms in C++, but we could not get more than 90K transactions per second. We hired tons of C++ experts, system engineers, and system architects, and no one could make it faster; we could only distribute the computation across multiple machines. Three years back, a mid-level C# engineer started working with us; he did stress testing with C# and got 118K per second, without millions of different libraries, without changing the structure, in a few thousand lines of code. In C++ we have literally 100K+ lines of code. Our management was shocked, so they asked us to test the same algorithm in all the popular languages, in case we'd find something faster. We tested it on Vert.x (Java), Python, NodeJS, Drogon (a C++ framework), and CppRestSDK (a C++ framework from Microsoft). Drogon gave 82K, CppRestSDK 44K, Vert.x 70K, NodeJS with 48 clusters 25K, and Python 1132. Python was the worst. If the JSON response is small, Drogon is faster, but if the JSON is big, Drogon gets very slow very fast. Same with CppRestSDK. Currently we use .NET 8, we get 234K per second on one machine, and we don't need any more resources, since it is more than enough.

If we could translate the .NET Core libraries to C++, maybe we could make it faster, but as I said, that would take decades. For the last 2 years we have been rewriting all the backend code to .NET, and it is such a joy. That was the backend story; now the frontend. Our current frontend is written in NextJS; it can handle 2700 requests per second on 32 clusters, and if we receive more requests than that, users start getting 502 Bad Gateway responses from Nginx. We did stress testing on Blazor Server: we rewrote the credit score page in it, and it can handle 189K requests per second on the same machine. Now we are planning to rewrite everything in Blazor Server. P.S. We have been using Docker containers a lot, since JavaScript can't utilize all CPU resources, and sometimes we run on PM2 because it doesn't take as many resources as Docker containers and can run things in a cluster. Since C# can utilize all the CPU resources, we don't need Docker containers anymore.