Also, I've seen spots in the MSDN documentation where it recommends pre-allocating a bunch of objects (e.g. the collection documentation) to improve performance. I can see how stuff like that would interact with the garbage collector, so unless I'm adding things to the collection in a tight loop, I think I'll just resize dynamically using the .Add or .AddRange methods. I know the C++ STL had a reputation for being expensive there, because a reallocation uses the copy (or move) constructor to shift all the old elements over to a new, larger buffer, though it actually grows geometrically rather than one element at a time. From what I can tell, .NET's List<T> grows its internal array in much the same way (doubling and copying), which is exactly why the documentation suggests pre-allocating the capacity.
For a concurrent GC, could you create a set of trees of allocations? Essentially, each potential scope in the source would have at least a leaf in the tree set, and as things get allocated in a scope, it adds leaves to the tree (sort of like how Sun's ZFS does its "instantaneous snapshots", I think). Then, when it comes time to do GC, you find out which scope you're in, and every subtree hanging off a lower node in the set can be safely pruned. Presumably an app isn't just spinning around doing allocations; it's actually adding data to the objects, running through logic, etc., so at the cost of adding a step to the allocation process you gain scope-level knowledge of the allocations in the app.
You could then, whenever the GC gets the CPU (or on a spare one), mark the node where everything below it is dead and start deleting. If the GC gets interrupted, that's fine: when it gets the CPU back, it still knows everything below that node is dead and can continue. I think in such a scheme the only time you'd have to block execution of the app is if, when the GC gets the CPU back, the app has gone back into something the GC thinks is dead. Then you'd have to force the GC to finish on that part of the tree before allowing the app back into that scope.
Perhaps that is how the GC works now; I'd love to see another video that goes into the nitty-gritty of the GC process.