- As with a number of (unrelated) algorithms, the sweet spot can be found in the middle, between two (or more) optimal solutions.by notorandit - 1 week ago
Precision is sometimes useless due to time and computing resources constraints. Speed can come at the cost of poor results.
I think the answer depends upon the specific use case and environment.
- I've heard about scanning the stack, but I'm not sure I get it. Is the strategy literally to not keep track of references at all, and simply do a sequential scan over the entire memory looking for bytes that look like pointers into the heap, and then assume those are roots? And then you look them up in the allocation records and mark them as still in use? You'd have to scan them in turn as well to know if they've got references to other heap locations, right?by tinco - 1 week ago
Edit: ah, he says to assume the heap is precisely traced (somehow?), so I guess it would already be known what references are in there.
- > A compiler that does precise root-finding will typically output a side-table indicating which slots in a stack frame hold references to heap objects. These lifetimes aren’t always precise, in the sense that although they precisely enumerate heap references, those heap references might actually not be used in the continuation of the stack frame. When GC occurs, it might mark more objects as live than are actually live, which is the imputed disadvantage of conservative collectors.by neonsunset - 1 week ago
This is not necessarily accurate with true precise tracking. E.g.:
```csharp
using System.Runtime.CompilerServices;

Example();

// Skip Tier 0 compilation which does not track gcrefs as precisely
[MethodImpl(MethodImplOptions.AggressiveOptimization)]
static void Example()
{
    var obj = new object();
    var wr = new WeakReference(obj);
    for (var i = 0; i < 3; i++)
    {
        Console.WriteLine(obj);
    }
    GC.Collect();
    Console.WriteLine(wr.IsAlive);
}
```

This works in much more advanced scenarios too.
Unfortunately, I can't link a simple document that covers this in detail from the top of my head but there's a wealth of information in Konrad Kokosa's works:
.NET GC internals lectures: https://www.youtube.com/watch?v=8i1Nv7wGsjk&list=PLpUkQYy-K8...
Pro .NET Memory Management (which is a great book in general): https://prodotnetmemory.com/
- I'm not sure what this article is trying to convey to me. I can only suppose the intended audience is entirely academia-centric.by DarkNova6 - 1 week ago
Reading the conclusion, I don't know if anybody working on actual top-tier GCs (Java, JavaScript, C#) finds this useful. It strikes me more as a somewhat interesting factoid, as demonstrated by a paper.
The conclusion:
> When it comes to designing a system with GC, don’t count out conservative stack scanning; the tradeoffs don’t obviously go one way or the other, and conservative scanning might be the right engineering choice for your system.
If there were examples of how this relates to actual GCs in production, with comparisons between them, now that would be interesting.
- I've never understood why anyone would use a conservative collector outside toy programs or academia. It is hard enough to make programs deterministic even with precise collection. I can't even imagine releasing software that was inherently non-deterministic and could suddenly, and without notice, start retaining memory (even if atypical in practice). Thus, IMHO, which is faster is a moot point.by nu11ptr - 1 week ago
- It was a bit of a bummer when go switched from the conservative gc to a precise gc. One of the implications was that they needed to change how interface types were represented.by chrsig - 1 week ago
They had a nice little optimization for word-sized values to store in-place rather than as a pointer out to a value. With the precise gc, they had to make the change to only storing pointers, leading to allocating small values.
I don't know if they've done work to win back (or perhaps better put: had success winning back) the performance hit from the extra allocation & gc load.
On the flip side, my experience is that they've made the GC pretty unobtrusive with regard to latency and pauses. Or perhaps I'm just not stressing it as much as I had in the past.
Random anecdote on gc tuning: I was once dealing with a go process that sped up (higher throughput, lower latency) by an alarming amount when limiting the max processors. This was many years ago, and I wouldn't be able to say what version of go. It was after you no longer had to set GOMAXPROCS, but probably not very long after.
Performance tuning is crazy sometimes.
- “One day a student came to Moon and said, I understand how to make a better garbage collector. We must keep a reference count of the pointers to each cons. Moon patiently told the student the following story:by cancerhacker - 1 week ago
One day a student came to Moon and said, I understand how to make a better garbage collector...”
- I would have assumed the major benefit of precision is that it enables compaction…by gok - 1 week ago
- Conservative GC is quite vicious against the use of lazy lists:by kazinator - 1 week ago
You have code like this:

```
function(make_infinite_lazy_list());
```

where function walks down the list:

```
fun function(list) {
  while (list) { // nil is false
    ...
    list = cdr(list);
  }
}
```

The problem is that the parent stack frame contains a spurious copy of the original return value from make_infinite_lazy_list, and so as function marches down the list, the discarded prefix of the list does not become garbage, as expected.
This is the showstopper for conservative GC. Not the bogeyman of a stray machine integer suddenly looking exactly like a heap pointer.
Stick a conservative scanner under C, make yourself some lazy lists, and this problem will easily reproduce!
- Of course it can be faster, because it won't need a shift, pointers won't need a mask, and objects don't need to be 2 words.by rurban - 1 week ago
But usually you want ints which look like ptrs to be freed earlier, not kept around. It's rare, but when it happens it's a memory hog. Esp. on long running processes I would only do precise GC. Those simple bit shifts and masks cost almost nothing today.
For C interop you need to track pointers anyway separately.
- I find this to be more of an interesting observation than a reason to choose a conservative scanner. I've never really thought of conservative GCs as faster or slower. The stack scan is fast, simple, and prefetchable, but occasionally has to do extra work. The precise scan has to consult separate tables, often not stored nearby. I guess I'd expect the speed difference to come more as a result of conservative pointers getting pinned and what repercussions that has on the overall collection. I did think it was interesting that precise scanning can sometimes be more conservative than a conservative scanner because of excessively long live ranges. It's great to have a wingo writeup on these things.by sfink - 1 week ago
Precision gives you a lot more flexibility in moving data. Conservative scanning is way more pleasant for embedding (you don't have to go through a ritual to register every root). The difference in what gets collected rarely seems to matter much in practice, though the determinism can matter, especially for intermittently failing tests.
There's a lot of path dependence in choosing one over the other. It's easy to go from precise to conservative, but very hard to go the other way. In practice, that means you tend to stick with whatever you've got: if you have a working precise collector, you're loath to loosen it up because you might get stuck. I was heavily involved in moving the SpiderMonkey collector from conservative to precise, and I wouldn't like to lose that work, and I'd really hate to have to do it again! And yet, the maintenance cost of keeping the precise collector correct is not low.
I suspect a lot of the argument for precision boils down to: bump allocators go brrr. With a precise scanner, you can bump allocate into a young generation, move everything out during a minor GC without regard for conservative pinning, and then do it over again. No looking at allocation bitmaps or free lists or whatever. Easily JITted (at least for the basic allocation), good locality, simple code.
A more tenuous guess is that a lot of the argument for being conservative boils down to ease of embedding. You mostly don't have to think about registering your stack roots, you can just allocate stuff and call things.
- I'd be curious to run experiments on this with one of my Go services. However, as far as I can tell, Go's garbage collection is precise and there's no way to run a Go program with a different (conservative) garbage collector. Am I overlooking something or am I out of luck here?by tmendez - 1 week ago