Minh said:
BitFlipper said: *snip*
I would think this is overkill / premature optimization. They say the Xbox GC triggers at 1MB of allocations, so if you have a small string, say 50 chars, then your periodic GC runs at 1,000,000 bytes / (50 chars * 2 Unicode bytes/char) = 10,000 strings; at 60 strings/sec that is about 167 seconds, or roughly 2.8 minutes.
So, if all your game does is the fps string, the GC will run about once every 3 minutes... Of course, that won't be all it does, but you get the idea.
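The back-of-the-envelope math above can be checked in a few lines. This is a sketch in Java rather than XNA's C# (same idea either way), and the sizes are the post's assumptions: 100 bytes per string counts payload only, ignoring per-object header overhead.

```java
// Rough GC-budget math from the post: how long until 1MB of string
// allocations at one 50-char string per frame, 60 fps?
// (Assumed numbers from the thread, not measured values.)
public class GcBudget {
    public static void main(String[] args) {
        int budgetBytes = 1_000_000;           // assumed Xbox GC trigger
        int bytesPerString = 50 * 2;           // 50 UTF-16 chars, payload only
        int stringsPerSecond = 60;             // one per frame at 60 fps

        int strings = budgetBytes / bytesPerString;            // 10,000
        double seconds = (double) strings / stringsPerSecond;  // ~166.7 s
        System.out.printf("%d strings, %.1f s (~%.1f min)%n",
                strings, seconds, seconds / 60.0);
    }
}
```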
Optimizations you could do include updating the fps string only once per second instead of 60 times a second... Or there's a user-contributed SpriteBatch.DrawInt32 routine out there that doesn't use strings... Or you could pre-compute a string table...
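The pre-computed string table idea looks something like this. The thread is about XNA/C#, but the pattern is identical in any managed language; here is a hedged Java sketch (the class name, table size, and clamping are my choices, not from the post):

```java
// Hypothetical garbage-free fps display: pre-build strings for every
// value we might show, then index the table each frame instead of
// calling Integer.toString (which allocates a new String every call).
public class FpsStrings {
    private static final String[] TABLE = new String[1000];
    static {
        for (int i = 0; i < TABLE.length; i++) {
            TABLE[i] = Integer.toString(i);   // one-time allocation cost
        }
    }

    /** Returns a cached string for the value; allocation-free per frame. */
    public static String fps(int value) {
        // clamp so out-of-range values can't index past the table
        if (value < 0) value = 0;
        if (value >= TABLE.length) value = TABLE.length - 1;
        return TABLE[value];
    }
}
```

All the allocation happens once at startup; the per-frame path is a bounds check and an array read.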
The idea is to mitigate the effect of GC, not eliminate it. If you can reduce GC to a manual process that you run when it can't be seen by the player, then that's a huge win.
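"Run it when the player can't see it" usually means forcing a collection behind a loading screen. A minimal sketch, again in Java rather than C# (in XNA you would call GC.Collect(); Java's System.gc() is only a hint to the runtime, which is a real difference worth noting):

```java
// Sketch of manual, player-invisible GC: pay the collection cost
// during a loading screen instead of letting it hit mid-gameplay.
public class LevelLoader {
    /** Loads a level, then requests a collection while the screen is hidden. */
    public static boolean load(String levelName) {
        // ... load assets and build the scene for levelName (omitted) ...
        System.gc(); // hint only in Java; XNA's GC.Collect() is direct
        return true;
    }
}
```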
You are correct that if the only thing my game did was print an fps string to the screen, that would probably not be an issue. Throw in scene management, AI, collision detection, physics, weapons, enemies, sound, particle effects, a HUD, etc etc and things get a bit more complex. A game is very dynamic as there is a lot of interaction between objects and things come and go continuously (think of bullets, etc). Obviously you re-use objects as much as possible, but that is not the only source of allocations. It is possible to delay the inevitable GC, but that is all you can do - delay it.
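The "re-use objects as much as possible" point for things like bullets is usually done with an object pool. A minimal sketch (Java rather than XNA's C#; the names and the ArrayDeque free-list are my choices for illustration):

```java
import java.util.ArrayDeque;

// Hypothetical bullet pool: acquire/release instead of new/GC, so
// steady-state gameplay allocates nothing for bullets.
public class BulletPool {
    public static final class Bullet {
        public float x, y, vx, vy;   // reset by the caller on acquire
    }

    private final ArrayDeque<Bullet> free = new ArrayDeque<>();

    /** Reuses a pooled bullet if one exists; allocates only on a miss. */
    public Bullet acquire() {
        Bullet b = free.poll();
        return (b != null) ? b : new Bullet();
    }

    /** Returns a dead bullet to the pool for reuse. */
    public void release(Bullet b) {
        free.push(b);
    }
}
```

After a warm-up period the pool reaches its high-water mark and the per-frame path never allocates.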
Right now I am working on my collision detection algorithms. I have a relatively simple scene that I use to test with, and I am getting around 2000 - 3500 fps (on the PC). Looking at my code, I can't see an obvious place where memory is being allocated, yet memory usage goes up by roughly ~200KB every second. Granted, at those frame rates even a small per-frame allocation would quickly add up (and it is not the fps counter - I checked that). This means I need to go dig into the code and see what is going on. Now if I had a more complex scene, the fps would obviously drop, but at the same time the complexity would go up, so I imagine the allocations per second would remain more or less the same. And as the object graph gets more complex, a GC will become slower. I have not seen issues simply because my object graph is currently too simple - the GC can do a collect very quickly. But other people are running into this problem - just go to forums.xna.com and search for
Edit: "garbage" is a better term to search for here.
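A common source of that kind of invisible ~200KB/sec is per-frame string building: `"pos: " + x + "," + y` allocates a fresh String (and a hidden builder) on every call. One hedged fix, sketched in Java rather than C# (a reusable StringBuilder; the class and method names are mine):

```java
// One common hidden allocator: building a debug/HUD string every frame.
// Reusing a single StringBuilder keeps the per-frame path allocation-free
// once the buffer has grown to its working capacity.
public class HudText {
    private final StringBuilder sb = new StringBuilder(64);

    /** Formats a position into the reused buffer; caller draws it, doesn't keep it. */
    public CharSequence position(int x, int y) {
        sb.setLength(0);   // reset length without freeing the backing array
        sb.append("pos: ").append(x).append(',').append(y);
        return sb;
    }
}
```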
My point is - we should not have to worry about this (within reason). Why should we need to worry about the internal memory allocation details of the .NET classes when we are using a high-level language?