Hi Patrick,

First, let me congratulate you on the great job you've done. It's hard to believe that a GC can make apps run faster, but you have made it true.

However, there are a number of points which suggest that Microsoft's overall approach to the subject may be incomplete:

 - in the interview, in reply to the suggestion that the developer could declare the lifetime of an instance, you reply that if the developer then starts sharing this instance, it may result in a crash. Are you suggesting that a developer who is clever enough to figure out that his app could be tuned that way is not clever enough to handle the resulting issues carefully?

 - you indicate that instances over 80 KB are allocated separately (on the large object heap), which is an optimization since it avoids copying those instances when the GC compacts the heaps. What if my app instantiates, say, 100,000 objects of 79 KB each? Why do I miss out on this optimization then? Why can't the developer parameterize the GC and set (within a given range) the maximum size of objects allocated on the regular heap?
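
Incidentally, the threshold is easy to probe from code. A minimal sketch, assuming the roughly 80 KB cut-off you describe (the documented figure is about 85,000 bytes) and the fact that the runtime reports large-object-heap instances as generation 2:

    using System;

    class LargeObjectThresholdDemo
    {
        static void Main()
        {
            // A 79 KB array is below the large-object threshold and starts life in gen 0.
            byte[] small = new byte[79 * 1024];
            Console.WriteLine("79 KB array starts in gen {0}", GC.GetGeneration(small));

            // A 90 KB array goes straight to the large object heap,
            // which the runtime reports as generation 2.
            byte[] large = new byte[90 * 1024];
            Console.WriteLine("90 KB array starts in gen {0}", GC.GetGeneration(large));
        }
    }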

 - you confirm that all instances smaller than 80 KB are first allocated in the gen 0 heap. Then, if an instance survives a GC, it is copied to the gen 1 heap, and later it may be copied to the gen 2 heap, where the long-lasting instances live. Don't you think this harms performance in a way that the developer could sometimes easily improve? After all, the developer knows the lifetime of some instances (not all of them), so why not let him help the GC when he knows that an instance is long-lasting and should therefore be allocated immediately in the gen 2 heap?
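
To make the promotion cost concrete, here is a minimal sketch of how a developer can watch an instance he already knows is long-lived being walked through the generations anyway (the forced collections are purely for illustration, and the exact promotion behaviour is an implementation detail):

    using System;

    class PromotionDemo
    {
        static void Main()
        {
            // An object the developer already knows will live for the whole run...
            object longLived = new object();
            Console.WriteLine("after allocation: gen {0}", GC.GetGeneration(longLived));      // typically 0

            // ...still has to survive two collections before it reaches gen 2.
            GC.Collect();
            Console.WriteLine("after 1st collection: gen {0}", GC.GetGeneration(longLived));  // typically 1
            GC.Collect();
            Console.WriteLine("after 2nd collection: gen {0}", GC.GetGeneration(longLived));  // typically 2

            GC.KeepAlive(longLived);
        }
    }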

 - you acknowledge that it has taken many years to fully understand the memory behaviour of all sorts of apps, and that at the beginning, the GC would dramatically impact performance on servers. You also acknowledge that the increasing amount of RAM on x64 machines makes it more and more difficult to avoid latencies. Do you really believe that all behaviours can be detected? Can they even be modelled? In my view, many apps have a stochastic behaviour, even at the millisecond level. Obviously, the adaptive policy is good enough to handle, say, 99% of the behaviours for 99% of the needs. Let's even imagine you could detect 99.99% of these behaviours and satisfy 99.99% of the needs. There would still remain the 0.01% of situations where only the developer himself could help, so why deny him the ability to help the GC?
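
These latencies are not hypothetical; any app can measure them with nothing more than a stopwatch. A minimal sketch (the 1 KB buffer size, iteration count and retention window are arbitrary choices of mine; the occasional multi-millisecond spike is what a latency-sensitive app cares about, whatever its exact cause on a given run):

    using System;
    using System.Diagnostics;

    class LatencyProbe
    {
        static void Main()
        {
            Stopwatch timer = new Stopwatch();
            double worstMs = 0;
            byte[][] retained = new byte[64][];

            for (int i = 0; i < 200000; i++)
            {
                timer.Reset();
                timer.Start();

                // Ordinary allocation work; a few buffers are retained so that
                // collections occasionally have something to promote.
                byte[] buffer = new byte[1024];
                retained[i % retained.Length] = buffer;

                timer.Stop();
                double ms = timer.Elapsed.TotalMilliseconds;
                if (ms > worstMs)
                {
                    worstMs = ms;
                    Console.WriteLine("iteration {0}: {1:F3} ms", i, ms);
                }
            }

            Console.WriteLine("worst single allocation: {0:F3} ms", worstMs);
        }
    }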

 - in the interview, you say developers shouldn't use GC.Collect since it confuses the GC when it is determining the best policy, but in reply to the question "then why is this API public?" you say that there are situations where it IS necessary. Isn't this an admission that automatic GC will never be the perfect answer for all situations?
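
And indeed, even the "legitimate" use amounts to the developer telling the GC something it cannot infer on its own. A minimal sketch of the usual pattern, assuming a large one-off batch followed by a quiet moment that only the application knows about:

    using System;

    class ExplicitCollectionExample
    {
        static void Main()
        {
            ProcessLargeBatch();

            // The application knows it has just dropped a large amount of garbage
            // and that it is now idle, so it volunteers that information to the GC.
            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();
        }

        static long ProcessLargeBatch()
        {
            long total = 0;
            for (int i = 0; i < 10000; i++)
            {
                byte[] temp = new byte[8 * 1024];   // short-lived work buffers
                total += temp.Length;
            }
            return total;
        }
    }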

 - you provide a thorough (and helpful) explanation of the problems you need to solve in order to neatly clean up instances before they are deallocated. Also, there is currently a lot of literature around finalizers and the IDisposable interface. Doesn't this activity reveal a misconception from the beginning? IMHO, the entire subject is polluted by the heritage of C++ destructor behaviour. We're speaking of two very different needs here: one is to clean up objects which hold system resources, and this should be done ASAP; the other is to release memory, which should only be done when memory is needed, or when CPU is available. I think the first need is completely missed by the GC approach, and can only be satisfied with refcounting. I know that handling refcounting can quickly become a nightmare when instances hold circular references, as every C++ developer who has used smart pointers can attest, and I trust you when you say that InterlockedIncrement(&refcount) is much slower than refcount++. However, isn't this slowness insignificant when dealing with objects like database connections or file handles, which are precisely the ones for which we NEED automatic cleanup as soon as they go out of scope (and not when memory is needed)? Also, just as, by your own observation, most instances never survive a gen 0 GC, most instances are also only referenced from the stack, which means their refcount would drop to zero as soon as the thread returns from the method where they were allocated. Do you think that the fact that this is not always the case is a sufficient reason not to take advantage of it?
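
To make the distinction between the two needs explicit, here is a minimal sketch of how they are split today: the scarce resource is released deterministically through IDisposable and using, while the memory of the wrapper itself is left to the GC (the file name is of course arbitrary):

    using System;
    using System.IO;

    class ResourceHolder : IDisposable
    {
        private FileStream stream;                  // scarce system resource: release ASAP
        private byte[] buffer = new byte[4096];     // plain memory: the GC can take its time

        public ResourceHolder(string path)
        {
            stream = new FileStream(path, FileMode.OpenOrCreate);
        }

        public int BufferSize
        {
            get { return buffer.Length; }
        }

        public void Dispose()
        {
            // Deterministic cleanup: runs at scope exit,
            // not whenever the GC decides memory is tight.
            if (stream != null)
            {
                stream.Dispose();
                stream = null;
            }
        }
    }

    class DisposableDemo
    {
        static void Main()
        {
            using (ResourceHolder holder = new ResourceHolder("scratch.dat"))
            {
                // The file handle is closed when this block exits, while the
                // holder's own memory is reclaimed by a later collection.
                Console.WriteLine("buffer of {0} bytes is plain GC-managed memory", holder.BufferSize);
            }
        }
    }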

 - after all, why should ALL instances follow the same lifetime paradigm? Why couldn't a particular class or instance be refcounted, and the others GC'ed?
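
Just to illustrate that the two models could coexist, here is a sketch of opt-in refcounting built by hand on top of today's GC; RefCounted, AddRef and Release are names of my own invention, not an existing or proposed .NET API:

    using System;
    using System.Threading;

    // A hand-rolled, opt-in refcounted base class: the scarce resource is released
    // deterministically when the count reaches zero, while the wrapper object's
    // memory is still reclaimed by the GC whenever it sees fit.
    abstract class RefCounted
    {
        private int refCount = 1;

        public void AddRef()
        {
            Interlocked.Increment(ref refCount);
        }

        public void Release()
        {
            if (Interlocked.Decrement(ref refCount) == 0)
            {
                CleanUp();  // deterministic, as soon as the last owner lets go
            }
        }

        protected abstract void CleanUp();
    }

    class PooledConnection : RefCounted
    {
        protected override void CleanUp()
        {
            Console.WriteLine("connection closed immediately, not at the next GC");
        }
    }

    class RefCountingDemo
    {
        static void Main()
        {
            PooledConnection conn = new PooledConnection();
            conn.AddRef();   // a second owner (e.g. another thread) shares it
            conn.Release();  // first owner is done
            conn.Release();  // last owner is done: CleanUp runs right here
        }
    }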

 - in the end, it seems to me that Microsoft is missing the following points:
 - not all .NET developers are dummies when it comes to memory management. The reason some of us use C# is not ease of use, but the fact that it dramatically shortens development time for 99% of our needs by providing a full-fledged framework and an acceptable (that is, good) GC. However, there are areas in an app where good is just not good enough. Not letting developers manage memory by themselves in these areas because it is inherently unsafe is in total contradiction with the fact that you let them call unsafe code through interop, as the sketch after this list illustrates. It is not Microsoft's responsibility to decide whether or not to trust developers.
 - not all behaviours can be detected or even modelled. There are situations where only the developer can tell whether it is acceptable to be subject to GC latencies, and where therefore only the developer can tell the GC what to do and when to do it. The current GC approach makes it almost impossible to use .NET for real-time apps (besides the fact that Windows lacks a real-time API), because it is fundamentally non-deterministic.
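
As for the interop point above: the framework already trusts the developer with fully manual, GC-invisible memory, as this minimal sketch shows (the 79 KB size simply echoes the earlier example):

    using System;
    using System.Runtime.InteropServices;

    class ManualMemoryExample
    {
        static void Main()
        {
            // This buffer is invisible to the GC; it is entirely the developer's
            // responsibility to free it, and the framework allows exactly that.
            IntPtr buffer = Marshal.AllocHGlobal(79 * 1024);
            try
            {
                Marshal.WriteByte(buffer, 0, 42);               // first byte
                Marshal.WriteByte(buffer, 79 * 1024 - 1, 42);   // last byte
            }
            finally
            {
                Marshal.FreeHGlobal(buffer);    // deterministic, developer-controlled release
            }
        }
    }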

However, I do not want to leave the impression that I'm displeased with what has already been done. Java is not doing any better (even though the behaviour of its GC can be highly customized). I'm sincerely impressed, even though the current implementation is not perfect.

Also, I wish to say hello from Paris, France, where the weather today is sunny but freezing. Bravo et merci.