Tech Off Thread

6 posts

Forum Read Only

This forum has been made read only by the site admins. No new threads or comments can be added.

FxCop advice vs Release Mode Optimisations

  • Dr Herbie

    Hello all.

    As an exercise, I decided to take an open source library for .NET 1.1 and convert it to .NET 2.  (I used SharpZipLib as my test project, as I have previously used NZipLib, its predecessor, and wanted to see what had changed).

    After handling all the compiler warnings that a straight upgrade of the SLN file produced, I decided to run FxCop and looked at the performance rules. The main rule that repeated was initialisation of variables to null/0/false. So I modified the code and ran a micro-benchmark (zipping a directory of junk a few times and taking a time lapse).
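A minimal sketch of the kind of micro-benchmark described above, using Stopwatch. ZipDirectory is a hypothetical stand-in for the actual SharpZipLib zipping code being timed; the path and iteration count are illustrative.

```csharp
// Minimal micro-benchmark sketch; ZipDirectory is a hypothetical
// placeholder for the SharpZipLib call under test.
using System;
using System.Diagnostics;

class Benchmark
{
    static void Main()
    {
        const int iterations = 5;

        // Warm up once so JIT compilation doesn't skew the first timing.
        ZipDirectory(@"C:\temp\junk");

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            ZipDirectory(@"C:\temp\junk");
        }
        sw.Stop();

        Console.WriteLine("Average: {0} ms", sw.ElapsedMilliseconds / iterations);
    }

    static void ZipDirectory(string path)
    {
        // Placeholder for the real zipping work.
    }
}
```

For the timings to be meaningful, build in Release configuration and run outside the debugger, since attaching a debugger can suppress JIT optimisations.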
The original, unmodified SharpZipLib code was a few seconds slower than the modified version.  Great.  Or so I thought.  I realised that I had run the benchmark in debug mode, so I repeated it in release mode.

The result?  No discernible difference between the two versions.

So what happened?  I'm guessing that the release-mode optimisations made the unaltered code as good as the hand-altered code.

So what's the point in FxCop telling us to make changes that the release optimiser will do for us anyway?

    Is there anyone here who knows more about optimisation/benchmarking who could shed light on this?


  •

The absolute best place to go read about this is Rico Mariani's blog. He covers .NET performance analysis in depth, and the blog is loaded with tips on how to profile things.

It's quite possible, in this case, that SharpZipLib had already been well performance tuned and so there just wasn't much opportunity for improvement.

  • Dr Herbie

    The SharpZipLib library is well written (there weren't many items in FxCop really).

But the original version ran slower in debug and not in release, which I can't figure out.
It's possible that the manual changes interfered with what the optimiser could do, so that the original code was optimised more efficiently.
I guess I'll have to try this with other code to see if this one was a fluke.

It shows the value of the rule of thumb: profile before and after hand optimisation, to see whether the change made things better or worse.

  •

There are a few rules in FxCop that are now redundant due to optimizations performed by both the C# compiler and the JIT when optimizations are enabled.

Code such as the following, which is flagged by DoNotInitializeUnnecessarily:

    public class Foo
    {
        private int _Value = 0;
        private string _String = null;
    }

is now optimized out by the C# compiler and the JIT (if the language doesn't do it). This could be why you see no discernible difference between the unmodified version and the modified version.
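A small sketch of why those initializers are redundant: the CLR zero-initializes all fields before any constructor body runs, so removing the explicit `= 0` and `= null` does not change behaviour. The class and member names here are illustrative.

```csharp
// Sketch: fields are zero-initialized by the CLR before the constructor
// runs, so the explicit initializers FxCop flags add nothing.
using System;

public class Foo
{
    private int _Value;        // implicitly 0, same as "= 0"
    private string _String;    // implicitly null, same as "= null"

    public void Dump()
    {
        Console.WriteLine(_Value);            // prints 0
        Console.WriteLine(_String == null);   // prints True
    }
}
```

The observable behaviour is identical either way; the only difference is whether redundant initialization instructions appear in the IL before the compiler or JIT removes them.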

    We will probably change this rule so that it no longer fires on .NET 2.0+.

  • Dr Herbie

    Thanks for the info David.

    I think I'll ignore performance rules for .NET 2.0 for the time being and just use profiling after the code is written.

Looking forward to a .NET 2.0 set of rules for FxCop, though. Big Smile


  • Frank Hileman

    Some performance rules may help you. But profiling is always a good thing.
