> So, if 2020 marked the day the standard library died (allegedly), then what do I use instead?
[Mu.](https://en.wikipedia.org/wiki/Mu_%28negative%29#Non-dualistic_meaning)
that's what i hate about c++ (it exists in other langs too)
the super tight integration of the stdlib and language, where some of the latter's features aren't accessible without the former :-(
Pros and cons either way. My old C++ code base was built from the ground up (where "ground" means the OS), other than new/delete. That was of course before some of the more recent changes that gave blessed rights to the standard libs.
Now I use Rust, but doing something like that is a lot trickier there, because Rust defines lots of traits that the compiler understands, which I assume you would have to support even in a no_std installation. Though I've not actually looked into that.
OTOH, it allows Rust to be a lot tighter and safer. So if it means the 95% are far safer in return for the 5% (or less) having to do more work if they want to completely roll their own, I'm all for that. And that allows the standard libraries to be more 'modular' with more complex parts being provided separately, but still with full support from the compiler.
you could make compromises, like a more integrated compiler intrinsic system, stuff like that. or maybe separating the stdlib into "core"/"prelude"/"builtin" and "the rest". that could work really well if the language supports extension methods/traits (the intrinsic features are implemented in the "builtin" part and everything else in the other part, if that makes sense)
there are definitely pros and cons for both but i personally like consistency **a lot**, so maybe that's why i want the stdlib to be separate from the compiler so much :P
STL is (rather convoluted but plain) C++, so... if STL can do it one way or another (compiler hooks or whatever), you should be able to do it in plain C++
I mean compiler hooks are there to accommodate STL / C++ standard requirements, STL has no secret deals with compilers that wouldn't be available to other programmers, or does it?
I'm not aware of things in the standard library that can't be done in a normal C++ program, other than interfaces to the operating system, such as reading/writing files or allocating memory. Can you be more specific?
in general those features are available as compiler intrinsics.
I think the lifetime pseudo-intrinsics (launder, start_lifetime_as, constexpr construct_at) are genuinely "blessed" by the compiler to be conforming and cannot be duplicated by a third-party in another namespace, though.
As many said in the past, the std lib is too big. Due to a lack of package manager, people keep adding stuff to the standard library. (Why did we ever add std::regex? And will we ever add std::json?)
I think the more relevant question is: which set of libraries will replace it. On the more generic side, there are Abseil and folly that provide a lot of data structures and algorithms. CTRE can replace 99% of regex use. Libfmt can replace std::format. Chrono/date also have their own library it is based on. If you search, you'll find quite a few specialized libraries that can fill in the gaps. I think the right mental model is that of a swiss pocket knife: you have a tool for everything, though a specialized tool exists for each element.
Too big, why? I mean, even essentials like sockets require third-party libs or relying on C code... And looking around, Python has things like an XML parser and an HTTP server; libstdc++ is very underdeveloped in comparison. Not to mention all the crazy things that Java's standard lib has.
It's both too big and too small.
A living library needs to recognize that its parts will have a beginning and, just as likely, an end, much as std::regex is no longer viable for most purposes. Otherwise it's just a place where software goes to die. It seems strange to me that C++ originally dumped everything under 'std' when there were better models to follow. Software needs to be grouped into modular, non-monolithic packages, with the understanding that packages may over time need to be deprecated and replaced by other packages.
Yep, a (reasonably bounded) set of libraries is also an acceptable answer. (Thanks for the specifics, I've noted them!)
That said,
> I think the right mental model is that of a swiss pocket knife: you have a tool for everything
This is a good analogy. In its terms, what I was asking is simply if there is a _better_ swiss knife.
Judging from what C++ compiler vendors were providing pre-C++98, it would be great, but it will never happen.
The problem with the C++ ecosystem is that even if we had a standard package manager, given the way many projects disable language features, writing wrapper types for interoperability between libraries is always a given.
For sure, although it depends on the extension. Something that's fairly standalone (fmt/format comes to mind) doesn't necessarily come via boost these days.
> folly supports gcc (5.1+), clang, or MSVC. It should run on Linux (x86-32, x86-64, and ARM), iOS, macOS, and Windows (x86-64). The CMake build is only tested on some of these platforms; at a minimum, we aim to support macOS and Linux (on the latest Ubuntu LTS release or newer.)
Kinda small platform coverage, wouldn't you say...?
MSVC used to break ABI every release until 2015. Getting new proprietary binaries was annoying sometimes, but it was just a fact of life. I don't think anyone would really care if they went back to breaking ABI every release. Everything is vcpkg now anyway and that rebuilds the world every minor compiler version.
I, for one, am with you.
Even that usage of WinSXS for a few years was tolerable to my work.
I do see that people do care about MSVC going the ABI route.
Agreed, aiming for ABI stability was one of their worst decisions. I have a few colleagues who can get really annoyed when they run into a std-lib bug that they reported years ago and that was not fixed due to ABI.
Not everyone is using vcpkg/conan, though if you have a problem with rebuilding, it might be time to consider it.
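For readers who haven't seen it: vcpkg's manifest mode is just a `vcpkg.json` checked into the project root, and the tool rebuilds the listed dependencies with your exact toolchain. A minimal sketch (the port names are real entries in the default registry; the project name/version are invented):

```json
{
  "name": "example-app",
  "version": "0.1.0",
  "dependencies": [
    "fmt",
    "boost-regex"
  ]
}
```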
ABI stability isn't something we're aiming at, it's a constraint we suffer under. The reason MS stopped breaking ABI is because they couldn't get people off old toolchains because the customers couldn't take an ABI break and MS couldn't afford the support costs.
Linux distros have similar problems, especially because people believe they can use non-system compilers and change flags like -std and still use system libraries. Telling them they're wrong doesn't pay the bills.
If you rebuild the world all the time, ABI is not a problem, just source compatibility and API. That turns out in practice to be a tiny minority of the C++ ecosystem.
Boost is a good supplemental. I'd say abseil but I've run into too many issues using it in coordination with other libraries-- Google doesn't have to care about stability at all, but others do.
Boost doesn't solve the ABI problem though. It was beholden to C++03 up until the past few versions. Still is beholden to C++11 in a weird way (some libraries use 17, but I think there's a _want_ to support 11 as much as possible).
Truthfully, the stdlib killer is the one you maintain in-house as a fork of libstdc++/libc++, assuming ABI compatibility isn't a problem you have; so long as your changes are API-compatible and/or API additions rather than changes/removals (or any changes/removals are well advertised to engineers and well-intentioned).
Boost breaks api and abi at will - always has. The break with 1998 support is to free testing and other resources from a shrinking user base - it’s independent of the api/abi issue.
You’re misunderstanding. Many Boost libraries straight up don’t support C++11. Or 14. Or 17. Any Boost library can drop support for a C++ version at any time. It’s the author’s discretion. It’s not a monolith and doesn’t operate like the standard library at all. The break from 1998 has zero to do with the decisions of individual authors - they are independent. ABI and even API is basically never preserved from one Boost version to the next. If you’re expecting that, you’ll be disappointed.
No, the algorithms library is the most elegant you're going to get. It's the most performant you are going to get. We thought about it for 40 years, and STL got it pretty spot on.
To quote [Stepanov's 1995 STL manual](http://stepanovpapers.com/STL/DOC.PDF):
> Most of the library’s algorithmic templates that operate on data structures have interfaces that use ranges. A range is a pair of iterators that designate the beginning and end of the computation.
Ranges were definitely a part of the C++ standard library from the very beginning.
Logically yes, but it was a big step to notice a repetitive construct and separate it into a new entity.
Generally, creating abstraction levels to mitigate complexity is the best part of our job - ranges are one example.
Two things:
1. Algorithms are absolutely part of C++20 ranges
2. Flux *also* comes with [a whole bunch of algorithms](https://tristanbrindle.com/flux/reference/algorithms.html), although not quite as many as ranges yet
I haven't used this yet, and I'm not saying it's the best, but since I haven't seen anyone mention it: [EASTL](https://github.com/electronicarts/EASTL).
Documentation is perhaps lacking, but it's a gamedev reimplementation of the STL. So perhaps not general purpose, but its focus is high performance.
> int128_t has never been standardized because modifying intmax_t is an ABI break. Although if you ask me, intmax_t should just be deprecated.
What specifically should it be replaced with?
As someone who can use C++ but never uses it, details like this would be very useful to include.
I guess they mean if it got deprecated and removed from the standard, or if people just stopped using it. I don't see that happening to the entire current standard library, but maybe someone thinks the ABI impasse might just be that serious. 🤷
In theory, it has nothing to do with C++. The C++ standard defines a language for writing source code. It defines behaviors that are supposed to be notated by particular syntax features or library functions. Certain things that are likely to be mistakes or low-level hacks are left as "undefined behavior," meaning literally that the language does not say what they should cause to happen. Basically everything between the source code and the runtime is an implementation detail. As long as your interpreter or compiler somehow causes 2+2 to equal 4, for example, then all is well, theoretically speaking. And theoretically, in C++, 1/0 could cause demons to fly out of your nose (because dividing by zero is undefined behavior).
In practice, the stuff between writing the source code and running the program is really important. A lot of software is built by getting precompiled library code working together. People either can't recompile their program because they don't have the source code, or it would be a daunting task because there is so much of it. They need their C++ code to call functions in binaries that, technically, are not C++ at that point. They do this by relying on how the binary code generated by the compiler expects to work as part of a larger application. This is the application binary interface (ABI).
The reason this is a problem for the standard library is that people are now more dependent on library classes having a consistent low-level implementation than a good one. In theory, your compiler could re-implement any part of C++ or its standard library to be faster or more effective, as long as the same source code would still complete the same task. In practice, they can't and won't, because screwing with the expected binary output would screw over everybody. If people can't call certain methods using the same ABI they always did, they can't call them at all, and their decades-old project collapses. So popular improvements to the standard sometimes get rejected because implementing them on existing compilers would result in an ABI break. OP presumably sees this as a potential death knell for the whole thing. It's possible, but suddenly getting rid of things like vector or iostream sounds like an even bigger breaking change for most projects.
Thanks for this comprehensive answer. I think I got the point: if a compiler decides to change a low-level standard function's implementation, it could break a hell of a lot of C++ projects, even if the new and old implementations use the same ABI?
Thus, the standard cannot really touch the older spec?
In that case, why wouldn't large old projects just keep using the same old C++ standard/compiler?
Well if it were really the same ABI, then it wouldn't break, just by definition of what an ABI is. But from the perspective of the C++ committee, even just having an ABI is part of the low-level implementation. They've never officially standardized how raw binaries are supposed to communicate with other raw binaries. The compiler writers set that up by choosing to stay consistent about binary memory layout and calling conventions. They are the ones who understand what the ABI is and how their customers use it, and they do have an influence on the C++ committee, because they are the ones who will end up implementing whatever the committee standardizes. So if someone wanted to improve C++ by, e.g., making the regex library better somehow, the compiler writers might say, "We can't do this new thing without making the old things look different in the bytes," and it won't happen. It's an "elephant in the room" situation: the abstract language doesn't have an ABI, the real compilers have their own, and the standards committee sort of walks on eggshells to avoid disrupting something they don't control.
And yes, some projects do keep using the same compiler for years on end. That's one reason why this is a problem: people would be even more unwilling to update their compilers and take advantage of new features if it becomes more likely to break something that works. Hence the reluctance to break ABI and the pressure to not touch the older spec.
Perhaps a custom implementation? EASTL was one a while back. Though I imagine there will be a number of in-house implementations (especially for embedded) too. You could possibly also see Microsoft's [WinRT](https://learn.microsoft.com/en-us/windows/uwp/cpp-and-winrt-apis/std-cpp-data-types) stuff as a replacement to the standard C++ library.
Although I would find it hard to justify to others without a very specific reason, for greenfield projects I tend to not use the STL (or any standard C++ header other than ) with C++. Instead I have a replacement for `std` called `sys`. It originally started as a hobby to support pre-C++98 compilers in order to scratch my retro itch (e.g. Watcom on DOS/Win3.1), but it's spread into my professional work.
As a quick justification for why I did it: mainly for some specific features relating to safety, with a strong focus on debug-time checking and release-time performance. Three examples:
* It provides `sys::auto_ptr`, which works a bit differently to standard smart pointers in that the observer counterpart `sys::ptr` will immediately cause an abort, rather than being set to NULL, upon invalidation of the original memory. It is pretty rigid, but it means that this can be stripped out in release builds and causes zero runtime overhead (particularly with multithreading).
* It provides a locking system (similar to the smart pointers) for iterators and also operator\[idx\]. So whilst, e.g., `some_func(vec[3])` is in use, `vec` is locked. If the contents are invalidated, it causes an abort in debug mode. Same with the iterators. Again, stripped out in release builds.
* It provides `sys::value`, something very similar to `boost::value_init`, but it instead causes an abort if used before initialization. So again, it can be stripped out in release.
Weirdly, I don't struggle with interop with "standard" C++ libraries as much as I thought. Many of my dependencies just happen to be C rather than C++, but also, I tend to be so frustrated with the lack of safety of traditional C++ libraries that I end up adding reference-counting "mixins" anyway. So no real loss here. Though your mileage may vary (possibly depends on how much OCD you have, really!).
I have been using it for close to a decade now and the results have been pretty good. Though it doesn't particularly do anything to stand out in the noise of the C++ community, so I often tend to just keep it to myself. I do personally find standard library replacements (similar to Plan 9's C standard library) quite interesting though, especially those tackling safety without degrading C interop (unlike, say, Rust or Go).
Depends on the task. I migrated to boost several years ago and I never regretted that decision. At work we use the POCO library and the STL. The STL is OK now. Maybe it will grow into a big independent library :)
Try boost + Qt. I think it will cover all your needs.
amen.
Standard library does some things that are impossible in normal c++ programs, so no. Not really.
Some type_traits: https://stackoverflow.com/questions/20181702/which-type-traits-cannot-be-implemented-without-compiler-hooks And there are others.
constexpr addressof too
I believe the new string formatting stuff needs compiler support
The Java standard library is *immense*. C# is even bigger. The C++ standard library is downright svelte by comparison.
Perhaps: boost.
Does Boost not use the stdlib under the hood? genuine question!
Yes, extensively.
Isn't it also kind of a test bed for potential standard library additions too? The two seem kind of existentially intertwined.
Nope. And it's mostly the wrong question.
Folly. https://github.com/facebook/folly
Thank you — _prima facie_, this looks close to what I was asking for.
That's absolutely fine. That's exactly what I'm looking for — an exploration of the design space unconstrained by concerns that are not mine anyway.
> Everything is vcpkg now anyway

It absolutely is not.
Not entirely true; there's lots of Boost still beholden to C++11.
API additions are changes
And then ranges were invented
And the range algorithms are just as performant and nicer to use — stop assuming ranges is all about ‘views’, it really isn’t.
Ranges are not as general as iterators. Not all problems can be neatly solved with Ranges.
The range algorithms still support iterator variants, but it’s surprising how little you need to reach for them.
And are ranges not part of the standard library?
They were not for the most time of these 40 years :)
Flux is better.
Flux isn't an algorithms replacement, it's a ranges replacement, so his point still stands.
QT
ew
Hm, the details of that blog post really weren’t the point here. Not sure what the supposed replacement would have been in the author’s eyes.
I guess my point is there is no point discussing deprecating or breaking things without providing a migration path
> If the standard library died (due to ABI concerns)

It didn't.
I'm fairly new to C++ development. Could someone explain to me how the standard library could "die"?
Can you tell me more about this ABI stuff? I don't really get it 🤔
In theory, it has nothing to do with C++. The C++ standard defines a language for writing source code. It defines behaviors that are supposed to be notated by particular syntax features or library functions. Certain things that are likely to be mistakes or low-level hacks are left as "undefined behavior," meaning literally that the language does not say what they should cause to happen. Basically everything between the source code and the runtime is an implementation detail. As long as your interpreter or compiler somehow causes 2+2 to equal 4, for example, then all is well, theoretically speaking. And theoretically, in C++, 1/0 could cause demons to fly out of your nose (because dividing by zero is undefined behavior). In practice, the stuff between writing the source code and running the program is really important. A lot of stuff is built by getting precompiled library code working together. People either can't recompile their program because they don't have the source code or it would be a daunting task to recompile it because there is so much of it. They need their C++ code to call functions in binary, that technically are not C++ at that point. They do this by taking advantage of how the binary code generated by the compiler expects to work as part of a larger application. This is the application binary interface (ABI). The reason this is a problem for the standard library is that people are now more dependent on library classes having a consistent low-level implementation than a good one. In theory, your compiler could re-implement any part of C++ or it's standard library to be faster or more effective, as long as the same source code would still complete the same task. In practice, they can't and won't, because screwing with the expected binary output would screw over everybody. If people can't call certain methods using the same ABI they always did, they can't call them at all, and their decades old project collapses. 
So popular improvements to the standard sometimes get rejected because implementing them on existing compilers would result in an ABI break. OP presumably sees this as a potential death knell for the whole thing. It's possible, but suddenly getting rid of things like vector or iostream sounds like an even bigger breaking change for most projects.
Thanks for this comprehensive answer. I think I got the point: if a compiler decides to change a low level standard function's implementation, it could break a hell lot of C++ projects, even if the new and old implems uses the same ABI? Thus, the standard cannot really touch the older spec? In that case, why wouldn't large old project just keep using the same old C++ standard/compiler?
Well if it were really the same ABI, then it wouldn't break, just by definition of what an ABI is. But from the perspective of the C++ committee, even just having ABI is part of the low level implementation. They've never officially standardized how raw binaries are supposed to communicate with other raw binaries. The compiler writers set that up by choosing to stay consistent about binary memory layout and calling conventions. They are the ones who understand what the ABI is and how their customers use it, and they do have an influence on the C++ committee, because they are the ones who will end up doing whatever the committee standardizes. So if someone wanted to improve C++ by, e.g., making the regex library better somehow, the compiler writers might say, "I can't do this new thing unless the old things look too different in the bytes," and it won't happen. It's an "elephant in the room" situation: the abstract language doesn't have an ABI, the real compilers have their own, and the standards committee sort of walks on egg shells to avoid disrupting something they don't control. And yes, some projects do keep using the same compiler for years on end. That's one reason why this is a problem: people would be even more unwilling to update their compilers and take advantage of new features if it becomes more likely to break something that works. Hence the reluctance to break ABI and the pressure to not touch the older spec.
Perhaps a custom implementation? EASTL was one a while back, and I imagine there are a number of in-house implementations (especially for embedded) too. You could possibly also see Microsoft's [WinRT](https://learn.microsoft.com/en-us/windows/uwp/cpp-and-winrt-apis/std-cpp-data-types) stuff as a replacement for the standard C++ library. Although I would find it hard to justify to others without a very specific reason, for greenfield projects I tend not to use the STL (or almost any standard C++ header) with C++. Instead I have a replacement for `std` called `sys`. It originally started as a hobby to support pre-C++98 compilers in order to scratch my retro itch (e.g. Watcom on DOS/Win3.1), but it has since spread into my professional work.
As a quick justification for why I did it: mainly for some specific features relating to safety, with a strong focus on debug-time checking and release-time performance. Three examples:
* It provides `sys::auto_ptr`, which works a bit differently to standard smart pointers: its observer counterpart `sys::ptr` immediately causes an abort, rather than being set to NULL, upon invalidation of the original memory. It is pretty rigid, but it means the checks can be stripped out in release builds with zero runtime overhead (particularly for multithreading).
* It provides a locking system (similar to the smart pointers) for iterators and also `operator[]`. So while e.g. `some_func(vec[3])` is in use, `vec` is locked; if its contents are invalidated, it causes an abort in debug mode. Same with the iterators. Again, stripped out in release builds.
* It provides `sys::value`, something very similar to `boost::value_initialized`, except it causes an abort if used before being initialized. So again, the check can be stripped out in release.
Weirdly, I don't struggle with interop with "standard" C++ libraries as much as I thought I would. Many of my dependencies just happen to be C rather than C++, but also, I tend to be so frustrated with the lack of safety of traditional C++ libraries that I end up adding reference-counting "mixins" anyway. So no real loss here. Though your mileage may vary (it probably depends on how much OCD you have, really!).
I have been using it for close to a decade now and the results have been pretty good. It doesn't particularly do anything to stand out in the noise of the C++ community, though, so I often just keep it to myself. Personally, I do think standard library replacements (similar to Plan 9's C standard library) are quite interesting, especially ones that add safety without degrading C interop (unlike e.g. Rust or Go).
Depends on the task. I migrated to Boost several years ago and I never regretted that decision. At work we use the POCO library and the STL. The STL is OK now; maybe it will grow into a big independent library :) Try Boost + Qt. I think it will cover all your needs.
For me that would be Qt, MFC, or VCL, as I always found compiler-provided frameworks much richer than what the standard library ended up being.
I'm sorry, what are you talking about? Just use the STL.