
Eamon Nerbonne (emn13)

Niner since 2011

  • The zen of async: Best practices for best performance

    Is there still a chance that the unhandled-exception change might be reverted to the .NET 4 behavior?  This new behavior sounds quite scary.  In many (if not most) cases, a Task that is neither observed nor waited upon is interesting for its side effects.  Such side effects may or may not have completed, or worse, may have only partially completed when the task aborts due to an exception.

    It's clearly dev-unfriendly to observe that exception some non-deterministic (but potentially long) time later, but that's hardly an argument for the change: it's much worse to get non-deterministic and potentially unnoticed state corruption or deadlock.

    Furthermore, it's inconsistent with normal behavior and encourages bad habits.  You really don't want people throwing exceptions and ignoring them; that's the road to very hard-to-debug code.  If such code were called synchronously it would occasionally fail, leading to an unintuitive behavior difference - and one which encourages just adding a try with an empty catch block to the synchronous variant to maintain parity; a situation that makes the code much harder to maintain in the long run.

    Swallowing asynchronous exceptions by default strikes me as a small, short-term gain in simple scenarios, bought at the cost of a higher long-term maintenance burden in more realistic ones.
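
    To make the difference concrete, here's a minimal sketch of the unobserved-exception mechanics (illustrative only; it relies on the documented TaskScheduler.UnobservedTaskException event and the ThrowUnobservedTaskExceptions config switch):

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    class UnobservedExceptionDemo {
        static void Main() {
            // Fires for any faulted Task whose exception was never observed, once
            // the Task is finalized.  Under .NET 4 the process is then torn down;
            // under the new behavior the exception is swallowed unless
            // <ThrowUnobservedTaskExceptions enabled="true"/> is set in app.config.
            TaskScheduler.UnobservedTaskException += (sender, e) => {
                Console.WriteLine("Unobserved: " + e.Exception.InnerException.Message);
                e.SetObserved(); // marks the exception handled, suppressing escalation
            };

            // Fire-and-forget: nothing ever calls Wait()/Result or reads .Exception,
            // and the task's side effects may have only partially completed.
            Task.Factory.StartNew(() => { throw new InvalidOperationException("oops"); });

            // Let the task fault, then force finalization so the unobserved
            // exception surfaces - at a non-deterministic point, as noted above.
            Thread.Sleep(100);
            GC.Collect();
            GC.WaitForPendingFinalizers();
        }
    }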

    A final small detail: in your web-caching scenario, you don't add to the cache until the Task completes.  You could instead use ConcurrentDictionary's more concise GetOrAdd method, which has the added advantage of a much smaller window in which the cache may issue multiple identical requests - something like the sketch below.
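
    A minimal sketch of that pattern (the WebClient download is just a stand-in; the talk's actual caching code isn't reproduced here):

    using System;
    using System.Collections.Concurrent;
    using System.Net;
    using System.Threading.Tasks;

    static class PageCache {
        // Cache the Task itself rather than its eventual result: the entry is
        // published as soon as the download starts, so concurrent callers share
        // one in-flight request instead of each issuing their own.
        static readonly ConcurrentDictionary<string, Task<string>> cache =
            new ConcurrentDictionary<string, Task<string>>();

        public static Task<string> GetPageAsync(string url) {
            // GetOrAdd's valueFactory can still race and run more than once,
            // but only one resulting Task is ever stored and handed out.
            return cache.GetOrAdd(url, u => new WebClient().DownloadStringTaskAsync(u));
        }
    }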

  • 50 performance tricks to make your Metro style apps and sites using HTML5 faster

    Great session!  I was curious about the claim that JS performance is just a factor of 5 behind C++, however, so I tried to reproduce the result mentioned in the talk, in which a simple JS loop takes 200 ms compared to C++'s 40 ms.

    The proposed benchmark failed to actually use data from any loop iteration except the last; running that code took essentially no time in C++ because the optimizer removed the loop.  After converting the loop to a summation, C# and MSC took less than 10 ms, and gcc less than a microsecond (!) as opposed to the claimed 40 ms, presumably because gcc was able to factor some of the arithmetic into fewer multiplications.  The following (similar) function proved too complex for gcc's optimizer:

    long long DoMath(int val) {
        long long result = 0;
        for (int i = 0; i < 100000; i++)
            for (int j = 0; j < 10000; j++)
                result += i + j*j + val; // every iteration feeds the result, so the loops can't be elided
        return result;
    }

    and the equivalent JavaScript:

    function DoMath(val) {
        var result = 0;
        for (var i = 0; i < 100000; i++)
            for (var j = 0; j < 10000; j++)
                result += i + j*j + val; // note: the sum exceeds 2^53, so the double loses precision vs. the C++ long long
        return result;
    }

    In this formulation (note that I increased the loop iteration count so the runs are long enough to time accurately) I obtained the following measurements; a sketch of the C# timing harness appears at the end of this comment:

    C# (.NET 4.0): 1.3 s
    Visual Studio 2010 MSC (C++): 1.01 s
    GCC 4.6 (C++): 0.85 s
    Chrome 15.0.874.24 beta-m (JS): 3.0 s
    IE9 (JS): approx. 85 s
    Firefox (JS): approx. 150 s

    The claim that modern JS engines are only 5 times slower than C++ seems fishy, certainly when supported by the example used in the talk.  I had to go through several iterations of the benchmark to confuse the C++ optimizers enough to get even that close; even then, only Chrome manages to come within a factor of 5, whereas IE9 is a full factor of 100 slower and Firefox almost twice that.  Furthermore, this is an optimistic benchmark that uses only very basic types and arithmetic, and thus doesn't exercise any of JavaScript's tricky dynamic name resolution, so it's likely a best-case scenario.

    The claim made in the session is unrealistic (which is a bit of a shame, since JavaScript really is much faster than it used to be, and the rest of the talk has lots of interesting pointers).
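
    For reference, a minimal sketch of a C# port of DoMath with a Stopwatch-based harness (illustrative; the exact harness behind the numbers above isn't reproduced verbatim):

    using System;
    using System.Diagnostics;

    class Benchmark {
        // Direct C# port of the DoMath function above.
        static long DoMath(int val) {
            long result = 0;
            for (int i = 0; i < 100000; i++)
                for (int j = 0; j < 10000; j++)
                    result += i + j*j + val;
            return result;
        }

        static void Main() {
            DoMath(1); // warm-up so the JIT compiles the method before timing
            var sw = Stopwatch.StartNew();
            long r = DoMath(1);
            sw.Stop();
            // Print the result so the computation can't be optimized away.
            Console.WriteLine("DoMath = " + r + " took " + sw.ElapsedMilliseconds + " ms");
        }
    }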

  • C9 Lectures: Stephan T. Lavavej - Standard Template Library (STL), 8 of n

    How does the regex implementation compare to RE2?