The other day: “hmm building blender got quite a bit slower” (by 10-15%). What caused it? Of course, enabling “plz C++20” instead of the previous “plz C++17” compiler option. This is without actually *using* any C++20 features yet.
😭
@aras This is a big and common thing with C++20.
A bunch of things became constexpr and inline in C++20, as I recall, significantly increasing the cost of the standard library headers in _every_ compile. =[
-
@aras This is a big and common thing with C++20.
A bunch of things became constexpr and inline in C++20, as I recall, significantly increasing the cost of the standard library headers in _every_ compile. =[
@chandlerc yeah, plus I guess under C++20 various headers become much larger (ranges & whatnot)
-
@chandlerc yeah, plus I guess under C++20 various headers become much larger (ranges & whatnot)
@aras @chandlerc What we (ALICE Experiment at CERN) see is actually an improvement in compile time for our stuff, whenever concepts are used instead of SFINAE or similar pre-C++20 tricks. I would be interested in knowing if you experience the same, once / if you get to it.
-
The other day: “hmm building blender got quite a bit slower” (by 10-15%). What caused it? Of course, enabling “plz C++20” instead of the previous “plz C++17” compiler option. This is without actually *using* any C++20 features yet.
😭
@aras you're not even safe in an STL-free codebase because C++20 blows up C's math.h too. That said, it's not much work to write your own header with pi and 20 lines of `extern "C" float sinf(float);` etc., and we saw enough of a speedup from it to be totally worth it. It's also nice that you can do `void sinf(double) = delete;` to catch accidental doubles
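A minimal sketch of that kind of replacement header, assuming a libm that provides these symbols at link time; the constant name and the exact function list are illustrative, not taken from any particular codebase:

```cpp
// tinymath.h (hypothetical) -- replaces <math.h>/<cmath> for a handful of
// functions, so no standard header is parsed at all.
constexpr float kPi = 3.14159265358979323846f;

// Declare only the libm functions we actually use; the C library supplies
// the definitions at link time.
extern "C" float sinf(float);
extern "C" float cosf(float);
extern "C" float sqrtf(float);

// Deleted double overloads turn an accidental double argument into a
// compile error instead of a silent promotion to/from double.
float sinf(double) = delete;
float cosf(double) = delete;
float sqrtf(double) = delete;
```

With this in place, `sinf(0.5f)` links against libm as usual, while `sinf(0.5)` fails to compile because it selects the deleted `double` overload.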
-
The other day: “hmm building blender got quite a bit slower” (by 10-15%). What caused it? Of course, enabling “plz C++20” instead of the previous “plz C++17” compiler option. This is without actually *using* any C++20 features yet.
😭
@aras that's why I stayed in C realm for my personal projects but even so it's still slow
-
@aras @chandlerc What we (ALICE Experiment at CERN) see is actually an improvement in compile time for our stuff, whenever concepts are used instead of SFINAE or similar pre-C++20 tricks. I would be interested in knowing if you experience the same, once / if you get to it.
@ktf @aras We've seen no meaningful improvements in compile time due to concepts TBH. Not that they're bad for compile times, but most of the time doesn't go to the extraneous `enable_if` instantiations that concept checks avoid.
Much more of it goes to excessively slow/complex checking of inline function bodies in every translation unit.
-
@chandlerc yeah, plus I guess under C++20 various headers become much larger (ranges & whatnot)
@aras libc++ in particular has done good work to factor its headers and generally keep the STL headers from becoming significantly larger because of ranges or similar things. Those you only pay for _if you use them_.
Don't get me wrong, ranges as standardized cause _obnoxiously_ slow compile times, and I would generally advise against ever including them into code. But by and large you shouldn't pay the price unless you actually include a ranges header... And if you do, it might be worth filing a bug.
-
@ktf @aras We've seen no meaningful improvements in compile time due to concepts TBH. Not that they're bad for compile times, but most of the time doesn't go to the extraneous `enable_if` instantiations that concept checks avoid.
Much more of it goes to excessively slow/complex checking of inline function bodies in every translation unit.
@chandlerc @ktf @aras I dread to ask, but any data on whether modules help or hinder any of this?
-
@chandlerc @ktf @aras I dread to ask, but any data on whether modules help or hinder any of this?
@JSAMcFarlane @chandlerc @ktf @aras
Come to my talk at 'using std::cpp' (Madrid, Spain) to learn about my measurements from a real production codebase.
-
The other day: “hmm building blender got quite a bit slower” (by 10-15%). What caused it? Of course, enabling “plz C++20” instead of the previous “plz C++17” compiler option. This is without actually *using* any C++20 features yet.
😭
@aras The good news is you can go back to C++11 for juicy compile time speed ups! Just need to force everyone to exercise restraint ;)
-
@aras libc++ in particular has done good work to factor its headers and generally keep the STL headers from becoming significantly larger because of ranges or similar things. Those you only pay for _if you use them_.
Don't get me wrong, ranges as standardized cause _obnoxiously_ slow compile times, and I would generally advise against ever including them into code. But by and large you shouldn't pay the price unless you actually include a ranges header... And if you do, it might be worth filing a bug.
@chandlerc @aras libc++ only has remove-transitive-includes as an opt-in, and by default it's actually quite bad in any standard, and I don't think C++20 is different - even if you don't get <ranges> specifically.
-
@chandlerc @aras libc++ only has remove-transitive-headers as opt in, and by default it’s actually quite bad in any standard, and I don’t think C++20 is different - even if you don’t get <ranges> specifically.
@chandlerc @aras Although I suppose you could make the argument that without the remove-transitive-includes option it's so bad that C++20 doesn't make it *that much worse*.
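For reference, the libc++ opt-in being discussed here is a preprocessor define, shown below as an assumed Clang/libc++ invocation (available in recent libc++ releases):

```shell
# Ask libc++ not to provide transitive includes the code didn't ask for.
# Code that relied on e.g. <vector> dragging in <algorithm> must then add
# its own #include lines.
clang++ -std=c++20 -stdlib=libc++ -D_LIBCPP_REMOVE_TRANSITIVE_INCLUDES main.cpp
```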
-
@chandlerc @ktf @aras I dread to ask, but any data on whether modules help or hinder any of this?
@JSAMcFarlane @chandlerc @aras For us, it's complicated. Getting everything ready for modules, including dependencies, is a nightmare. Moreover, we also support Apple Clang.