Minh wrote:

DoomBringer wrote:

Minh wrote:


Littleguru wrote:
- Why isn't it possible to force the GC to do a collection? I mean, GC.Collect() just signals that a collection is required, but the GC could decide not to do one. Why has it been implemented that way?


If this is indeed true, can't we have a GC.Collect(bool meanIt); ?


Probably because the GC knows more about current memory than the developer does 99% of the time.  Forcing a GC when there are only a few dozen objects that need cleaning up is a bad idea.  A garbage collection is a high-overhead operation, and calling it arbitrarily is bad practice nearly all of the time.

I don't agree. The end result of a GC.Collect is ALWAYS a less fragmented memory space. So there must be some penalty if they don't want us to do this all the time.

I'm guessing that penalty is that the runtime becomes unresponsive for a while. In a game, you'd do anything not to have the GC run its Collect routine in the middle of the action, and if you could, you'd push that work to moments when it's OK to run at a lower framerate, say, while the user is shopping for a new sword with his loot.

Now, the crazy desktop app programmer who thinks he knows better and runs GC.Collect every time the user clicks a button will be punished by having his app run like crap -- but at least that's direct feedback telling him to remove that GC.Collect.

If you can't deterministically run GC.Collect() as a game programmer, you're punished by a runtime that thinks it knows better.

It will always be less fragmented, yes (except for the odd edge case, of course), but the CPU time required for a full GC round is pretty significant.  Moreover, with how much memory most systems have these days, your app might not even get any benefit in the long run: if I compact memory after allocating and dropping only a few dozen megabytes, well, that isn't all that great, since you've usually got hundreds more available.  (The story on the mobile side is different, however, since there is less memory.  I don't know the mobile stuff as well, really.)
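
Just to put a rough number on "pretty significant", here's the kind of micro-benchmark sketch you could run yourself.  The loop and buffer sizes are made up, and the timings will vary wildly with heap size, machine, and runtime version, but it shows the idea:

    // Rough sketch: time a forced full collection after churning through a
    // pile of short-lived arrays.  Illustrative only; numbers will vary.
    using System;
    using System.Diagnostics;

    class GcCostProbe
    {
        static void Main()
        {
            // Churn: allocate and drop roughly 40 MB of temporary buffers.
            for (int i = 0; i < 10000; i++)
            {
                byte[] temp = new byte[4096];
                temp[0] = 1;   // touch it so the allocation isn't optimized away
            }

            Stopwatch sw = Stopwatch.StartNew();
            GC.Collect();                    // full, blocking collection
            GC.WaitForPendingFinalizers();   // include any finalizer work
            GC.Collect();
            sw.Stop();

            Console.WriteLine("Full collect: {0} ms, heap now {1} bytes",
                sw.ElapsedMilliseconds, GC.GetTotalMemory(false));
        }
    }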

I'm thinking the best time for a GC is after you've blown through a lot of temporary objects and are moving on to another very large round of allocations.  It probably hurts more to get interrupted in the middle of your allocation phase than to pay the cost before it starts.  I'm talking about a large amount of data here too, something that you know is going to eat up a lot of RAM (several dozen or more megs).  The average "Oh Look A WinForm" app isn't going to use all that much.
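
Something like this sketch is what I mean; ParseTemporaryChunks and LoadHugeDataSet are made-up placeholders for whatever phases your app actually has:

    void RebuildCaches()
    {
        ParseTemporaryChunks();   // phase 1: lots of short-lived garbage

        // Everything from phase 1 is unreachable now, and phase 2 is about
        // to allocate tens of megabytes, so this is the cheapest point to
        // pay for a full collection.
        GC.Collect();

        LoadHugeDataSet();        // phase 2: large, long-lived allocations
    }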

I don't know how games would handle it, but I would guess you could build a strategy around it.  I'm thinking that the majority of object creation is going to happen during load sequences, so having a GC run in there wouldn't be so bad.
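
As a sketch of that strategy (the method names here are invented, since every engine is different):

    // Take the GC hit behind a loading screen instead of mid-gameplay.
    // ShowLoadingScreen, LoadLevelAssets and StartLevel are placeholders.
    void LoadLevel(string levelName)
    {
        ShowLoadingScreen();
        LoadLevelAssets(levelName);   // the bulk of the allocations happen here

        // Nobody notices a pause behind a loading screen, so collect now
        // rather than in the middle of a firefight.
        GC.Collect();

        StartLevel(levelName);
    }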

Adding a boolean flag to indicate "ya, srsly" to the GC is kind of silly.  No developer would ever use false, because they'd all think that they knew what they were doing.
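
(For what it's worth, the framework did eventually grow something close to that flag, just with the polarity flipped: if I remember right, the GCCollectionMode overload added around .NET 3.5 lets you say "only if it's worth it" rather than "ya, srsly".  Roughly:)

    // Hedged sketch: GCCollectionMode is roughly the inverse of the proposed bool.
    GC.Collect(GC.MaxGeneration, GCCollectionMode.Optimized); // GC may decline
    GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced);    // same as plain GC.Collect()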

Finally, I just think that if somebody is writing an application that needs ultra-fine-grained control over memory and demands high performance, then it probably needs to be written in native code.  A business app that pops up a WinForm and queries a few databases isn't going to need millisecond precision, after all, but Gears of War will.