At Microsoft, we are constantly extending and improving our Design Guidelines (DG) document, a set of best practices for managed API designers. Part of the process we go through when changing a guideline is soliciting and incorporating feedback from employees across the company, as well as from customers through our blogs (examples here, here, and here).

We're going to start involving Channel9 in the process much more heavily moving forward. In particular, we'll be posting drafts of new guidelines right here in this forum in hopes of getting some good customer-employee discussions going. Your feedback will make a visible impact on both the DG itself and the APIs we ship (since we rely heavily on the DG to influence design decisions).

As an example of the type of discussion this process triggers internally, here is a snapshot of about 1/4 of a thread from a few weeks back. Some names have been altered to protect the innocent ;), but for the most part it's been left intact.

Hope you enjoy!

From: Brad Abrams
Sent: Tuesday, November 30, 2004 10:31 PM
Subject: Design Guidelines Updates: Security Review (Round 1)

As part of a security review of the .NET Framework, I have added a couple of new implementation notes to the design guidelines.  I strongly suspect there will be more to come, so stay tuned.  I am working on getting FxCop coverage for these rules.  Please send me your comments and feedback.

Thanks

..brad

   Do be aware that null could be passed in for the params array.  You should validate that the params array is not null before processing (see section X.y on parameter passing); a guarded version follows the sample below.

      static void Main(string[] args)
      {
            Sum(1, 2, 3, 4, 5); // result == 15
            Sum(null);          // throws NullReferenceException
      }
      static int Sum(params int[] values)
      {
            int sum = 0;
            foreach (int i in values) // throws NullReferenceException if values is null
            {
                  sum += i;
            }
            return sum;
      }
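
A guarded version might look like this (a minimal sketch; throwing ArgumentNullException follows the parameter-validation guidance referenced above):

      static int Sum(params int[] values)
      {
            // Callers can pass an explicit null in place of the params array,
            // so validate before processing.
            if (values == null)
                  throw new ArgumentNullException("values");
            int sum = 0;
            foreach (int i in values)
            {
                  sum += i;
            }
            return sum;
      }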


   Do be aware that mutable values may have changed after they were validated.  If the operation is security sensitive, you are encouraged to make a copy, then validate and process the copy.
The following sample code demonstrates the issue. The DeleteFiles() method does not create a copy of the filenames array before validation; as such, it is subject to a race condition.

      /// <summary>
      /// Deletes all the files passed in if they pass validation, throws otherwise.
      /// </summary>
      public void DeleteFiles(string[] filenames)
      {
            if (!ValidateFiles(filenames))
                  throw new ArgumentException("Files must be local");
            Thread.Sleep(1); // force a race to demonstrate the problem
            foreach (string s in filenames)
            {
                  Console.WriteLine("Thread1: Deleting file: {0}", s);
                  /*TODO: File.Delete(s); */
            }
      }
      private bool ValidateFiles(string[] filenames)
      {
            foreach (string s in filenames)
            {
                  if (s.Contains("..")/*TODO: real test */)
                  {
                        return false;
                  }
                  else
                  {
                        Console.WriteLine("Thread1: '{0}' passes validation", s);
                  }
            } //end for
            return true;
      }


Notice that we force the race condition with the Thread.Sleep() call; without it, the problem may still happen depending on other factors on the machine.

This is exploited by the calling code forking off a thread to change the values in the array after validation occurs.

          static void Main(string[] args)
          {
                Program p = new Program();
                string[] names = { "one.txt", "two.txt" }; // init to a set of valid values
                Thread hackerThread = new Thread(delegate(object obj) // set up hacker thread
                {
                      names[1] = @"..\..\boot.ini";
                      Console.WriteLine("Thread2: changing 1 to '{0}'", names[1]);
                });
                hackerThread.Start();
                p.DeleteFiles(names); // call the API being attacked
          }

The output from this code is:

      Thread1: 'one.txt' passes validation
      Thread1: 'two.txt' passes validation
      Thread2: changing 1 to '..\..\boot.ini'
      Thread1: Deleting file: one.txt
      Thread1: Deleting file: ..\..\boot.ini

Notice that Thread1 validates 'one.txt' and 'two.txt', but ends up deleting 'one.txt' and '..\..\boot.ini' because Thread2 is able to change one of the values after validation happens.

The fix for this problem is relatively simple, if expensive.  Simply use Array.Copy() to create a local copy of the array *before* validating and performing operations on it.  Note that Array.Copy() performs only a shallow copy; this works for arrays of strings because strings are immutable, but if you are dealing with an array of some mutable type (such as StringBuilder) you’d have to deep-copy the array (a sketch follows the corrected code below).

      /// <summary>
      /// Deletes all the files passed in if they pass validation, throws otherwise.
      /// </summary>
      public void DeleteFiles(string[] filenames)
      {
            string[] filenamesCopy = new string[filenames.Length];
            Array.Copy(filenames, filenamesCopy, filenames.Length);
            if (!ValidateFiles(filenamesCopy))
                  throw new ArgumentException("Files must be local");
            Thread.Sleep(1); // force a race
            foreach (string s in filenamesCopy)
            {
                  Console.WriteLine("Thread1: Deleting file: {0}", s);
                  /*TODO: File.Delete(s); */
            }
      }

With this change, depending on the race we either throw the exception as expected or delete only the validated values; the attacker can no longer affect the copy we operate on.
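
For an array of some mutable element type, the shallow copy above is not enough; you need a deep copy. A minimal sketch, assuming a hypothetical StringBuilder[] parameter named builders:

      StringBuilder[] buildersCopy = new StringBuilder[builders.Length];
      for (int i = 0; i < builders.Length; i++)
      {
            // Copy each element's contents, not just the reference, so the
            // caller cannot mutate the data after we validate it.
            buildersCopy[i] = new StringBuilder(builders[i].ToString());
      }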
 
   Do use finally in all methods that have cleanup code.
Even code that immediately follows a catch() can be interrupted by an asynchronous exception (an exception originating from another thread).  Your cleanup code is guaranteed to run only if you put it in a finally block.

Incorrect:  Cleanup() will not be executed in the face of an asynchronous exception

      try
      {
            //...
      }
      catch // catching *ALL* exceptions and
            // swallowing them
      {
            //...
      }
      // Execute cleanup in the exceptional and
      // non-exceptional case
      Cleanup();

Correct:  Cleanup() will be executed even in the face of an asynchronous exception

      try
      {
            //...
      }
      catch // catching *ALL* exceptions and
            // swallowing them
      {
            //...
      }
      finally
      {
            // Execute cleanup in the exceptional and
            // non-exceptional case
            Cleanup();
      }

Many methods perform some form of cleanup.  All methods that perform cleanup should do so in a finally block.  Here is an example:

         
public void UpdateSet(){
      FileStream stream = null;
      try{
            stream = new FileStream("SomeFile.dat", FileMode.Open);
            //...
      }finally{
            if(stream != null) stream.Close();
      }
}


From: Dan Crevier
Sent: Tuesday, November 30, 2004 11:15 PM
Subject: RE: Design Guidelines Updates: Security Review (Round 1)


Is there a reason you can’t use:

public void UpdateSet(){
      using (FileStream stream = new FileStream("SomeFile.dat", FileMode.Open))
      {
            ...
      }
}
 
Dan


From: Brad Abrams
Sent: Wednesday, December 01, 2004 9:35 AM
Subject: RE: Design Guidelines Updates: Security Review (Round 1)


That would work as well, but it doesn’t explicitly show the finally block (although one exists), so I am not sure it is good from an education point of view.

..brad


From: Karl Gunderson
Sent: Wednesday, December 01, 2004 10:56 AM
Subject: RE: Design Guidelines Updates: Security Review (Round 1)

Brad,

The example should be:

public void UpdateSet(){
   FileStream stream = new FileStream("SomeFile.dat", FileMode.Open);
   try{
      ...
   }finally{
      stream.Close();
   }
}


 
Karl.


From: <AnonA>
Sent: Thursday, December 02, 2004 7:05 AM
Subject: RE: Design Guidelines Updates: Security Review (Round 1)


I’d disagree for a couple of reasons:

1)       that doesn’t scale to more than one item in the finally – notice that in this example, a failure in the stream2 ctor doesn’t close stream1:
   FileStream stream1 = new FileStream("SomeFile.dat", FileMode.Open);
   FileStream stream2 = new FileStream("SomeFile2.dat", FileMode.Open);
   try{
      ...
   }finally{
      stream1.Close();
      stream2.Close();
   }

2)       in the general case, the stream could be set to null by something in the try block – the finally protecting against null helps avoid a useless null reference exception

IOW, it’s a pattern that would be fine for this specific case, but I think our examples should always promote patterns that are more generally useful.  The by-hand expansion of using() seems perfect to me :)



From: Karl Gunderson
Sent: Thursday, December 02, 2004 11:40 AM
Subject: RE: Design Guidelines Updates: Security Review (Round 1)

<AnonA>,

1)  The correct expression of your example is:

FileStream stream1 = new FileStream("SomeFile.dat", FileMode.Open);
try{
    FileStream stream2 = new FileStream("SomeFile2.dat", FileMode.Open);
    try{
        ...
    }finally{
        stream2.Close();
    }
}finally{
    stream1.Close();
}


2) All the examples I have seen, including the .NET Framework Developer's Guide:

 http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpguide/html/cpconusingobjectsthatencapsulateresources.asp

and the C# Language Specification:

 http://msdn.microsoft.com/library/default.asp?url=/library/en-us/csspec/html/vclrfcsharpspec_8_13.asp

put creation outside the try block when discussing the using statement's equivalence.  If construction fails, there is no need for the finally block; that's the pattern.

The check for null is another matter.  Personally, I feel that the check for null makes assumptions about the implementation of the constructor (how could it return null and not throw an exception?) or other code inside the try block that I am not prepared to make.  The failure of the constructor, or some code having the side effect of nulling the object, is a separate exception and not within the purview of the finally block.  Additionally, I treat the finally block as the single point of exit from the code covered by the try block; it is a separate exception if the object was nulled unintentionally.  In other words, for me, the check for null implicitly suppresses a null reference exception.  But that is just me.
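
For reference, the equivalence described in the C# spec linked above looks roughly like this for a reference type – acquisition outside the try, and a null check guarding the generated Dispose() call (sketch):

FileStream stream = new FileStream("SomeFile.dat", FileMode.Open);
try{
    ...
}finally{
    // the generated finally null-checks because the using
    // expression itself is allowed to evaluate to null
    if(stream != null) ((IDisposable)stream).Dispose();
}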

Karl.

 


From: <AnonB>
Sent: Thursday, December 02, 2004 2:22 PM
Subject: RE: Design Guidelines Updates: Security Review (Round 1)

Yes, but in this scenario, the new will throw an exception that you'll want to catch. The finally would do something like:

if ( stream1 != null )
{
    stream1.Close();
}

Unfortunately we have a rather inconsistent approach to object creation, and you need to check the docs to know whether new will throw an exception (therefore, I always put them inside a try/catch).

Whidbey provides a static creation pattern for things like XmlReader, where you do the following:

XmlReader reader = XmlReader.Create( blah... );

Aside from the fact that this approach isn't as discoverable via Intellisense (unless Intellisense is changed to list both instance and static methods), it's a preferable pattern IMHO.
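
For what it’s worth, the static-factory pattern composes naturally with using; a minimal sketch (file name hypothetical):

using (XmlReader reader = XmlReader.Create("data.xml"))
{
    while (reader.Read())
    {
        // ... process nodes ...
    }
} // reader is disposed deterministically here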



From: Karl Gunderson
Sent: Thursday, December 02, 2004 1:24 PM
Subject: RE: Design Guidelines Updates: Security Review (Round 1)

<AnonB>,

The fully expressed code might look more like:

try{
    FileStream stream = new FileStream("SomeFile.dat", FileMode.Open);
    try{
        ...
    }finally{
        stream.Close();
    }
}catch(IOException e){
    // failed to process SomeFile.dat
    ...
}

or perhaps:

FileStream stream;
try{
    stream = new FileStream("SomeFile.dat", FileMode.Open);
}catch(IOException e){
    // failed to open SomeFile.dat
    ...
    // return or rethrow
}
try{
    ...
}finally{
    stream.Close();
}

or:

FileStream stream = null;
try{
    stream = new FileStream("SomeFile.dat", FileMode.Open);
}catch(IOException e){
    // failed to open SomeFile.dat
    ...
}
if(stream == null){
    // alternative processing without SomeFile.dat
    ...
}else{
    try{
        ...
    }finally{
        stream.Close();
    }
}

This would be the correct way to handle an exception in the constructor.  Still, no null check is required or desirable in the finally block, in my opinion.

In practical terms, I usually do not catch (constructor/creation/file open) exceptions at this level but at the next higher level, i.e. the calling method.  In that case, my original example stands.

Karl.


From: <AnonC>
Sent: Thursday, December 02, 2004 2:54 PM
Subject: RE: Design Guidelines Updates: Security Review (Round 1)

What’s the guidance on the issue of disposing a disposable object after we’re done with it?

Is it at the discretion of the caller to not call dispose if it doesn’t seem important? That sure seems odd. Taking dispose into account, it seems to me like the guidance should be to always use a using statement whenever constructing a temporary disposable object.



From: Brad Abrams
Sent: Thu 12/2/2004 11:35 PM
Subject: RE: Design Guidelines Updates: Security Review (Round 1)

I agree that you should use “using” for temporary objects whenever possible, but when you build your library you should not assume everyone uses “using”… that means you need to do cleanup in your finalizer, etc.

..brad


From: <AnonD>
Sent: Friday, December 03, 2004 12:56 AM
Subject: RE: Design Guidelines Updates: Security Review (Round 1)


But at the same time a finalizer is a very dangerous place to do many types of cleanup, particularly if COM interop is involved.  Not to mention the subtle (and not so subtle) timing issues that come up with this pattern.  I think you are far better off having and requiring the use of an explicit Dispose/Close/Detach/Shutdown/whatever method than to have a free and loose contract that "usually" works.


From: <AnonE>
Sent: Friday, December 03, 2004 6:19 AM
Subject: RE: Design Guidelines Updates: Security Review (Round 1)

You need the finalizer as a backstop, to catch the stray instances which weren’t disposed, or you will leak, which isn’t acceptable if the component is used on a server. You may assert that the Finalize method never fires, to help users catch the places where they missed a dispose.
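
A minimal sketch of that backstop, following the standard Dispose pattern (type name and resource hypothetical):

public class ResourceHolder : IDisposable
{
    private IntPtr handle; // some native resource

    public void Dispose()
    {
        Cleanup();
        GC.SuppressFinalize(this); // explicit path ran; backstop not needed
    }

    ~ResourceHolder()
    {
        // backstop: fires only when a caller missed a Dispose()
        System.Diagnostics.Debug.Fail("ResourceHolder was not disposed");
        Cleanup();
    }

    private void Cleanup()
    {
        /* release handle */
    }
}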


From: <AnonF>
Sent: Friday, December 03, 2004 7:24 AM
Subject: RE: Design Guidelines Updates: Security Review (Round 1)

Requiring the use of Dispose misses the whole point of garbage collection.  Objects often have complex lifetimes. If the client (or you for that matter) could reliably know when an object could be disposed (deleted, etc.), then there would be no need for garbage collection.  In COM land, there would be no need for AddRef/Release.  



From: <AnonG>
Sent: Friday, December 03, 2004 9:04 AM
Subject: RE: Design Guidelines Updates: Security Review (Round 1)

This is a common misconception about GC.

GC is about automatic management of the object heap.  It doesn't manage other resources, it doesn't manage locks, it doesn't manage caches etc.  (I don't know if our runtime has this capability but often there's a concept of a "weak ref" to an object so that you can have a cache of GC managed objects where the cache is automatically cleaned up when the collector decides to collect the object; I'll assume we do have this since all the other GCs in the last 20+ years have had it.  But that ignores that you may want to age things out of a cache even though the instances are live via some other transient references.)
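
For what it’s worth, the runtime does have this: System.WeakReference. A minimal sketch of a weakly held cache entry, names hypothetical:

WeakReference entry = new WeakReference(expensiveObject);
// ... later ...
object cached = entry.Target; // null once the GC has collected the target
if (cached == null)
{
    cached = RebuildExpensiveObject(); // hypothetical factory
    entry.Target = cached;
}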

Close() is about end of lifetime of the held resource(s).  Not of just memory.  Close() doesn't even imply that the object is collected any time soon.

The thing that's missing from C# here is destructors.  You're left with forcing use of using(){} when you could have just declared a variable and had it go out of scope.  Even in C++ with destructors you often do want a Close() member function anyways so that you can release resources before the variable/instance goes out of scope.

It's seductive to start thinking that GC is about not managing lifetimes, but it's not.  It's only about removing free().  (It's not even about removing delete since the contract for IDisposable::Dispose on GC isn't what you would expect for a destructor since you cannot predict its execution environment.)



From: <AnonF>
Sent: Friday, December 03, 2004 9:21 PM
Subject: RE: Design Guidelines Updates: Security Review (Round 1)

I’m sorry – what part did I misconceive?  De facto, objects manage other resources.  In the .Net framework, Dispose/Finalizer is there to help deal with them.  If an object/class holds unmanaged resources – memory, file handles, registry handles, etc. – then GC is dealing with them.

My point is that there are objects/class that have non-trivial lifetimes – the client may hold references, your app may hold references.  It’s difficult (perhaps impossible) to know when all the references are gone and it’s safe to release the resources associated with the object – be those resources memory, file handles, etc.  If there were not objects of this type, there would be no point to GC (or reference counting).  You cannot require the client to dispose of those objects because

  1. The client may not be aware that there are unmanaged resources associated with an object (i.e. it’s an implementation detail), and
  2. you may still have references to the object that the client does not know about.


The design guidelines like to assume that all objects that have any unmanaged resources associated with them have simple lifetimes.  I maintain this assumption is invalid.


 


From: <AnonH>
Sent: Friday, December 03, 2004 10:09 PM
Subject: RE: Design Guidelines Updates: Security Review (Round 1)

The hope one day is for the GC to “know” about all machine resources.  When that happens, we won’t need IDisposable.  For the most part, you can get away without ever calling Dispose() today, but you get two penalties: 1: GC gets less efficient because it can’t predict how much is to be gained by cleaning up an object (an object may have one field which is a handle to a meg of unmanaged memory – the GC thinks it’s only a 4-byte object so doesn’t feel there’s an urgency to collect it *) and 2: running finalizers is more expensive than calling Dispose().
 
If for whatever reason you need predictable cleanup code to run, GC will never help you.  In that case, you should put the cleanup code in a finally.  The guideline isn’t about how do you manage the lifetime of your objects.  It just says, have whatever cleanup code you have be aware of asynchronous exceptions.
 
IMHO the guideline example shouldn’t use Close() or Dispose() since it implies all these confusions.  Use GCHandle.Free() or something.
 
*  There’s a more nasty effect that can happen with particular relationships among managed objects.  Say you’ve implemented a cache of big objects, such as buffers, and you serve them out in little wrapper objects.  If a buffer is requested and none are available in the cache, a new one gets allocated.  The wrapper has a finalizer which returns the buffer to the cache.  GC has a hard time with this situation, because from its perspective, cleaning up the wrappers never has much effect on memory, since all the buffers stay allocated.  In reality, the buffers become “free”, so the app is able to stop allocating new ones.  The GC can’t figure that out.  You can get into OOM pretty easily this way – at least with 1.1.  Not sure if this was addressed in the Whidbey GC itself, but there are new APIs that let an app give hints to the GC about this kind of situation, e.g. GC.AddMemoryPressure().
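
A minimal sketch of that hint API, assuming a wrapper around a hypothetical one-megabyte native buffer:

class BigBufferWrapper : IDisposable
{
    private IntPtr buffer;
    private const long Size = 1024 * 1024;

    public BigBufferWrapper()
    {
        buffer = System.Runtime.InteropServices.Marshal.AllocHGlobal((int)Size);
        // tell the GC this 4-byte-looking object really costs a megabyte
        GC.AddMemoryPressure(Size);
    }

    public void Dispose()
    {
        System.Runtime.InteropServices.Marshal.FreeHGlobal(buffer);
        GC.RemoveMemoryPressure(Size);
        GC.SuppressFinalize(this);
    }
}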


From: <AnonG>
Sent: Saturday, December 04, 2004 10:35 AM
Subject: RE: Design Guidelines Updates: Security Review (Round 1)

Sorry but that's ridiculous.

GC is going to know about locking protocols so that I can close files/release locks in the right order?



From: Cyrus Najmabadi 
Sent: Saturday, December 04, 2004 8:44 PM
Subject: Re: Design Guidelines Updates: Security Review (Round 1)

Why not?  It shouldn’t be too difficult to express (especially if you have an extensible GC model).  



From: Greg Schechter
Sent: Saturday, December 04, 2004 9:00 PM
Subject: RE: Design Guidelines Updates: Security Review (Round 1)


IMO, the issue with GC of resources other than memory doesn’t have to do with the ability of the collector to collect non-memory resources, but with the asynchronous, non-deterministic nature of when GCs occur.  Stuff like GC.AddMemoryPressure() for unmanaged memory, and HandleCollectors for things like file handles, hwnd’s, etc., move GC to the point where it can deal with other resources.  However, it still deals with them asynchronously and non-deterministically.  
 
An example where that isn’t appropriate is active database connections on a server that has a limited license of a very small number, say, 4 open database connections at any given time.  Even if the GC knows how to clean these up, an application would want to use deterministic lifetime management (probably through Dispose()) if at all possible to maximize the availability of such finite, non-shareable resources.
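
In code, the deterministic route is just the using/Dispose pattern from earlier in the thread; a sketch with a hypothetical connection factory:

using (IDbConnection conn = CreateLicensedConnection()) // hypothetical factory
{
    conn.Open();
    // ... queries ...
} // one of the four licensed connections is released here, not at the next GC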
 
Greg
 


From: Cyrus Najmabadi
Sent: Saturday, December 04, 2004 10:58 PM
Subject: Re: Design Guidelines Updates: Security Review (Round 1)

Say the DB has 4 open connections and someone tries to open a 5th.  The GC could see that a limit had been hit (similarly to how it must now see that some memory threshold has been reached) and it will kick in and say “let me see if these open connections are actually in use”.  If not, it can close one (or more) and use it for the new connection.

I don’t see why the GC couldn’t be used in a deterministic manner as long as you program it to do so for a specific resource.  For example, I assume the OS has a limited number of file handles it can give out.  It seems like it should be simple to program the file-handle GC to just collect when that limit has been reached.  Why anyone should ever have to concern themselves with file handles is beyond me.  Yes, they will run into a problem if they exhaust them, but that is the same problem as with memory, and we’ve alleviated most of the work of memory management with the GC today.



From: Greg Schechter
Sent: Saturday, December 04, 2004 11:10 PM
Subject: RE: Design Guidelines Updates: Security Review (Round 1)

In the DB example of trying to open the 5th, where the GC is convinced to reclaim any orphaned connections… to do so, it must potentially perform a full GC, also collecting all freed gen 2 memory, etc.  It then becomes very expensive to do deterministic collection (which is why the use of “GC.Collect()” is discouraged).  In fact, HandleCollector does exactly as you say, but it does suffer from the problem of forcing collection of all memory resources just in order to guarantee freeing of the non-memory resources.
 
Perhaps a solution to that is to have N different GC systems running around simultaneously, all disconnected from each other, each collecting a certain class of resource.  I have no idea if such approaches have been or are being pursued (and I’d imagine getting the interactions and locking of such a model right would be very, maybe prohibitively, difficult), but without that, we have a single GC system that will be poorly used if we try to force it into doing deterministic collection.


 

From: Cyrus Najmabadi
Sent: Sunday, December 05, 2004 12:18 AM
Subject: Re: Design Guidelines Updates: Security Review (Round 1)

 

Sounds just like an implementation problem :)

A while back the idea of GC was abhorrent and was laughed at as something that could never be used in real software projects.  The same reasons were brought up then, i.e. “too expensive,” “too complicated,” etc., but it’s now been pulled off by many different languages/runtimes.  It’s also clear that for many people (like myself) being “expensive” is irrelevant.  If the feature makes me more productive then I am gladly willing to give up some of the 6 billion operations per second that my CPU can perform in order to take care of this.  I’d gladly take a 25% hit in perf considering that you would now be removing an entire class of issues that I would normally have to be concerned about when developing.

It’s pretty safe to say that I’m an order of magnitude more productive in a managed language like C# over C++ primarily because I just don’t have to worry about all the issues that arise when I’m in C++ land.  Because of that I can focus more on design and so even though I might take a perf hit, I am almost always able to make up for it by designing something better and more natural in C# which would be prohibitively difficult to do in C++.  

We made a choice and said “you don’t need to concern yourself with memory (beyond making sure you don’t consume all of it)”; I don’t see why we shouldn’t extend that further.  I’m all for taking as much work off the developer’s hands and putting it into the compiler/runtime.  It will make developers more productive and will remove the chance for the errors that we’ve been debating this entire thread.  Wouldn’t it be nice if all we had to say to developers was “feel free to just use files naturally, we’ll take care of the rest.”

It might sound silly or crazy now (as GC did years ago), but I honestly believe that in 5 years that’s where we’ll be.  I mean, at that point I’ll probably have a machine with 4 procs in it, each running at 12 GHz.  If we can’t utilize those kinds of resources to solve these issues, then we’re going to lose users to languages and systems which do take care of this for them.



 

From: <AnonJ>
Sent: Sunday, December 05, 2004 9:46 PM
Subject: RE: Design Guidelines Updates: Security Review (Round 1)

 

If that were true, there would be no need for “using” blocks.

 

Greg and others really are making important points. We’re not without data: There’s a fair amount of research and even some products in the area of automatically collecting a few other resources, such as file handles; googling will yield several examples. They are all narrow and limited, and one thing that we’ve learned is that collecting one resource does little for collecting another; each one has its own set of requirements and characteristics, and specifically they have different requirements and characteristics than collecting memory. This is one factor that makes it unrealistic to think that “we know how to do GC for one resource” is anywhere near to implying “we know how to do GC for other/all resources.” (Separately from that, it’s unrealistic to think we’ll be collecting “all” resources anytime soon, because a fundamental fallacy there is that “all” can be known – we might know all the resources our current OS provides, but we can’t possibly know even the names of all the resources a given app will use (e.g., third-party data or comm services), never mind how to collect each one distinctly and automatically.)

 

GC for memory is great and important and has been well understood for 40 years, and modern GC systems are highly tuned to collecting exactly that one resource: memory. They are neither designed for nor appropriate for general resource collection. (Even for memory they’re not perfect or bulletproof, incidentally, but smart folks like Patrick have made them very close to perfect for many classes of apps.)

 

The mistake made by every major GC system I know of is to do GC instead of destructors. As many found out, you really do need destructors; they were forced to add them as an afterthought, in the form of a brittle and error-prone Dispose coding pattern, because otherwise platforms would have been unusable for writing any significant app. It is becoming clear (at least to me) that the ideal is memory GC and destructors.

 

C# does it much better than most by automating it as the “using” idiom, which has the major advantage that it’s easier to write, and to write correctly, than the Dispose pattern because there is some language support. Unfortunately so far it still retains a fundamental weakness in this area which is that it is a coding idiom that is off by default. That is why your users are asking you to emit warnings that fire when a locally created disposable object isn’t actually disposed – because much of the time it is incorrect not to dispose it, even though that is the default.

 

C#’s “using” puts it halfway between the complete “roll your own and hope you did it right” lack of language support, and C++’s strong language support for handling object lifetimes and resources (destructors, auto objects, explicit delete which is optional on the managed heap but still calls the dtor==Dispose deterministically).

 

This is both a correctness and a performance issue. Clearly, failing to release resources can be a correctness issue, but do try to understand why Greg pointed out the perf consequences. Here’s a specific case in point, thanks to Rico for pointing this one out: Last year, a popular website hit a perf issue where they were spending 70% of the app’s time in the GC. After looking at it, the CLR perf team suggested that the developers should dispose of as many resources as possible before making server-to-server calls. With that one change, the percentage of time spent in the GC went down to about 1%. I submit that this would never happen for apps written in idiomatic Whidbey C++ (not available at the time of that case, but available now) because C++ is frugal and contains strong language support for deterministic and early disposal, which fosters a default style that reduces finalization pressure and overhead. In Bjarne’s words, what makes C++ a great language for use in garbage collected environments is principally that it creates less garbage.

 

A piece of good news is that C++’s strong language support for resource management is available even for managed types. I’ll respond separately about why that makes the programmer model for resource management significantly simpler in C++ than it is in Java and other languages.