In most GC languages, there are no "memory boundaries" between threads, so when the garbage collector runs, all threads of execution are paused - at least, in a majority of in-use implementations. Speaking of the majority of in-use implementations, most don't run the garbage collector until the program runs out of memory. This means that most GC programs will (1) continue to use more and more memory, and (2) randomly freeze while memory is analyzed and released.
|
That's an oversimplification, and it's not true. It holds only for some early GC implementations which are *not* used in the mainstream. That's the minority, not the majority. They may still show up in newer things like D, Go or Rust. Mainstream platforms like .NET, Java, or even JS ship much more sophisticated GCs that collect concurrently / incrementally.
Three examples:
1. Azul C4 - compacting, fully concurrent, works with huge heaps: it doesn't pause mutator threads at all during collection or compaction.
2. Oracle CMS - non-compacting, but mostly concurrent: it pauses only for the (short) initial-mark and remark phases; the main marking and sweeping are done concurrently. It's criticized for its non-compacting nature leading to fragmentation (just as in standard manual memory management), which means it may eventually pause in exactly the situation where a typical C++ app would fail to allocate memory.
3. Oracle G1 (the new one) - compacting, incremental, works with huge heaps: it splits the heap into smaller regions GCed separately, in small chunks, so pauses can be very short. It also lets you set a desired maximum pause duration.
All of the above start GC long before the application runs out of memory, offer a huge number of tuning options (if you're unhappy with the autotuning, which most of the time just works fine), and all deliver typical throughputs measured in GB/s on modern hardware. There may be some extreme use cases, like applications producing garbage at rates above 1 GB/s, where the GC lags behind and chokes, but they are far from normal.
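For concreteness, here's a minimal sketch of opting into G1 with a pause-time goal on a HotSpot JVM. The flags are standard HotSpot options; the values and the MyApp class name are just illustrative, and -Xlog:gc assumes JDK 9+:

```sh
# Select the G1 collector (the default since JDK 9), ask it to keep
# pauses under ~50 ms (a soft goal, not a hard guarantee), cap the
# heap at 8 GB, and print a line per GC event:
java -XX:+UseG1GC -XX:MaxGCPauseMillis=50 -Xmx8g -Xlog:gc MyApp
```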
Even the earlier, non-concurrent, non-incremental collectors typically split the heap into multiple generations, so most collections don't touch the whole heap and finish in single milliseconds. If you're even a little careful, you can make them run for weeks without a single major pause.
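To make the generational point concrete, here's a tiny illustrative Java program (mine, not from any benchmark): nearly every object it allocates is dead one line later, so a generational collector reclaims everything in cheap young-generation collections and never needs to touch the old generation.

```java
public class GarbageDemo {
    public static void main(String[] args) {
        long checksum = 0;
        // Each iteration allocates a short-lived temporary string.
        // With a generational collector, all of these die in the young
        // generation and are reclaimed by quick minor GCs; no major
        // (whole-heap) collection is ever needed.
        for (int i = 0; i < 100_000_000; i++) {
            String tmp = "iteration " + i;   // dead after this line
            checksum += tmp.length();
        }
        System.out.println(checksum);
    }
}
```

Run it with GC logging (e.g. `java -Xlog:gc GarbageDemo` on JDK 9+) and you should see only short young-generation pauses in the log.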
So what's wrong here? Well, we've come back to square one: you still have to be educated to use these forms instead of the more direct forms |
Compilers/IDEs already issue warnings if you "forget" to use try-with-resources or forget to close/dispose the object. Also, the ability to forget things is not an excuse; contrary to common belief, Java and C# are languages designed for people who know what they're doing.
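For example, a minimal sketch using the standard JDK FileInputStream - exactly the form the IDE nudges you toward:

```java
import java.io.FileInputStream;
import java.io.IOException;

public class ReadFirstByte {
    public static void main(String[] args) throws IOException {
        // try-with-resources: the stream is closed deterministically
        // when the block exits, whether normally or via an exception.
        // Releasing the file handle never depends on the GC, and the
        // IDE warns you if you open the stream outside such a block.
        try (FileInputStream in = new FileInputStream(args[0])) {
            System.out.println(in.read());
        }
    }
}
```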
It's not just an add-on, not just a language extension, not just a feature that can be turned off. |
Nope: just allocate everything statically, like most embedded programs do, and you're back in a hard-real-time, GC-free world. The GC just sits there and does nothing. You have to do exactly the same in C or C++, because manual memory management is just as unpredictable as GC.
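A minimal sketch of that preallocation pattern in Java (the class name and sizes are illustrative): allocate once at startup, then reuse; the steady-state code produces zero garbage, so the GC never has work to do.

```java
// All allocation happens up front, in the constructor. The hot path
// only hands out and recycles preallocated buffers, so no garbage is
// ever produced and the collector simply never runs.
public class PreallocatedBuffers {
    private final byte[][] pool;
    private int next = 0;

    PreallocatedBuffers(int count, int size) {
        pool = new byte[count][size];   // the only allocation, at startup
    }

    byte[] acquire() {
        if (next == pool.length) {
            throw new IllegalStateException("pool exhausted");
        }
        return pool[next++];            // no allocation on the hot path
    }

    void reset() {
        next = 0;                       // reuse buffers instead of freeing them
    }
}
```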
Furthermore, last I knew, all of Oracle's "concurrent garbage collectors" that aim to fix 'stop the world' pauses still shut down user threads during heap compaction. I'll have to read up on the G1 collector, but I'm fairly sure it still stops when the heap needs to be compacted. |
Nope. CMS *does not compact at all* in its normal mode of operation. It may fall back to a stop-the-world collection only in emergency cases like a fragmentation-related allocation failure. In that situation CMS causes one long pause, while a C++ allocator would just throw bad_alloc and the app would almost certainly crash (because no one typically protects against bad_alloc). I'd take even a 30-second pause once per day over a crash once per day. :P
G1 does not compact the whole heap at once: it compacts incrementally, in small chunks, so pauses are really short and not proportional to heap size.
Funny enough, CMS is so good now that G1 actually has a hard time beating it, even though G1 is theoretically much better.
and the heap will always at some point need to be compacted -- even if it takes weeks of application uptime |
AFAIK there is no way to compact the heap in C++ applications *at all*. How do you write long-running applications then? Oh, by letting them use more and more memory until the fragmentation level stabilizes at some point - which is not even guaranteed to be within sane bounds (every manual allocator I know of has a usage pattern that can blow fragmentation up to 5x or more)... That must hurt.