The .NET CLR Team Tour, Part II


The Discussion

  • Hey, looks like the new cam doesn't have a built-in stabilizer. How about a new steadicam? I'm sure Scoble won't mind strapping one of these things on.

  • Hate to follow up on my own post, but I've got a couple of video topic requests.

    1) Microsoft Game Studios tour.
    It'd be great to see some of the behind-the-scenes stuff. Are they on campus?

    2) The Managed DirectX team.
    They're toiling hard in the shadow of the DirectX team; I'm sure they wouldn't mind some love.
  • You're gonna kill me! Heh.
  • I love this video. He's very convincing about moving people from COM to .NET when he rattles off all those things you have to do for COM. And when he says "the perf is there, the GC is unbeatable, you can do it!" I totally believe him.
  • Sven Groot
    Performance is a strange beast. He says "go by the numbers" and I absolutely agree. Recently I was reminded of that, and I almost fell into my own trap.

    The thing is, people (including me) tend to worry about the wrong things when it comes to performance. The GC is a good example.

    What I came across recently was this. I had to write a simple, maybe 200-line neural net in (plain/unmanaged) C++ for a university assignment (which I should've completed about two years ago, but that's another story altogether Smiley). There were a lot of array operations going on, so I was wondering whether to go with plain C arrays or with vectors. I'm a C++ guy, and I always use vectors when I can, but this time I was worried about performance. In the end I reminded myself of the old saying "don't optimize too early" and implemented the thing with vectors.

    I ran the test app, and it was horribly slow. So I compiled with Release; it was a little better, but still slow. I almost started retooling my vector code, but decided against it and ran a profiler instead. It turned out that by far the most time was spent computing the sigmoid function for the nodes. Since I couldn't really change that, I looked at something else. The Sigmoid() function was called some 10 million times in my code (once for every node every time the net is evaluated, and then twice for every node in the learning phase, and that for 100,000 iterations... you get the point). I stuck an inline on the function, and performance was up tenfold.

    So I'm glad I did profile, and didn't waste time optimizing code that wasn't really the bottleneck at all!

    As a sidenote, I also tested this app using g++, and the VC2003 optimizer blows the g++ optimizer out of the water! Smiley
  • I couldn't agree more with the moral of the vector/array optimization story (above).

    You should never guess at what your performance bottlenecks are. I can't even count the times the dev team was sure the problem was one particular thing; then we profiled and saw that it was something completely different. Often the problems aren't the major stuff, but little things.

    Don't waste your time, or let others waste theirs, optimizing code without clear evidence of what is slow. (Go buy TrueTime or Quantify!)



  • You could in fact have the same set of APIs in a variety of library formats. The tragedy is that, although it's possible, I think I can say definitively that for Microsoft libraries we've never actually done it before.

    There are always bizarre gotchas.

    The day that video was recorded I had actually just been trying to get some unmanaged DLLs to talk to one another, and I'd ended up having to write code like this:

    extern "C" __declspec(dllexport) void WINAPI Foo(...)

    And I thought to myself... nobody should have to type that crap.

    And it gets worse. If you're doing dynamic binding to DLLs you don't even get compile-time warnings if you get it wrong. Ugh. You should read some of the history on this stuff on Raymond Chen's blog.

    Even when we got it right it wasn't exactly easy. Back in the good old days, when I worked on C/C++ 7.0 and then Visual C++ 1.0, we were delivering a perfectly nice MFC 1.0 and 2.0, but those libraries were like Baskin-Robbins' 31 flavors. Do you want Unicode or not? ODBC? COM? DLL or static linking?

    I'll have statically linked Unicode, hold the mayo.

    There was a lot of secret sauce behind the scenes to make that work.  And if it didn't work quite right there was sauce all over the place.

    Other great libraries, and I mean seriously useful stuff, like say the XML DOM library or the OLE DB libraries: they work great, but try explaining to someone how to initialize it all from scratch. Ooowwwieeee!!! Thank goodness for samples.

    I suspect that the situation is not all that different in the Linux world. Back when I was linking for good old BSD 4.1 there was already library hell... bigger, more robust libraries just magnify the problem.

    So, if you remove calling-convention issues and make it so that libraries have self-describing dynamic linkage, you go a long way toward eliminating friction. That's a good thing. But then people think less about what library they are going to use. Usually that's good too... who wants to live in a world where simple stuff is hard?

    Cheers and thanks for watching,

  • You don't even need a .h file in .NET Smiley
