Ralph Trickey


Niner since 2007



Article about Kinect internals



  • E2E: Erik Meijer and Wes Dyer - Reactive Framework (Rx) Under the Hood 1 of 2

    Can you do me a favor and call it Linq to Events? I can explain that to my boss and co-workers, but Reactive Extensions is going to draw a bunch of blank looks. I know it's not as 'cool' as Rx, but it's a lot more friendly.


    Can we do some more deep dives like this?
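    The "Linq to Events" framing really is the easiest way to explain Rx: an event stream you can query the way LINQ queries a collection. Here's a toy sketch of that idea in Python — the `Observable` class and its `where`/`select` methods are my own invention for illustration, not the real Rx API:

```python
# Toy sketch of the "Linq to Events" idea: an event stream you can
# query with where/select, the way LINQ queries a sequence.
# This Observable is illustrative only, not the real Rx surface.

class Observable:
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, value):
        # Push the event to everyone downstream.
        for cb in self._subscribers:
            cb(value)

    def where(self, predicate):
        # Filter: only matching events flow to the new observable.
        out = Observable()
        self.subscribe(lambda v: out.publish(v) if predicate(v) else None)
        return out

    def select(self, projection):
        # Map: transform each event as it flows through.
        out = Observable()
        self.subscribe(lambda v: out.publish(projection(v)))
        return out

# Roughly: "from click in clicks where click > 10 select click * 2"
clicks = Observable()
results = []
clicks.where(lambda x: x > 10).select(lambda x: x * 2).subscribe(results.append)

for event in [5, 20, 15, 3]:
    clicks.publish(event)

print(results)  # [40, 30]
```

    The point of the analogy: subscribing to a composed query is exactly like enumerating a composed LINQ expression, just push-based instead of pull-based.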



  • Expert to Expert: Erik Meijer and Anders Hejlsberg - The Future of C#

    I'm going to be a heretic and flatly state that Anders is wrong about one thing, and I drank the kool-aid starting back with Turbo Pascal.

    We're going to see a /parallelize switch on the compiler. It may be C# 7+, but we're going to see it. I remember the arguments against garbage collection, against dynamic typing, etc.

    We will probably see 8+ cores on the user's desktop and 16+ cores on the developer's machine, and probably 16 GB of RAM, and that's enough to crunch one heck of a lot of graphs for a compiler. Most programs aren't the worst-case scenarios that people talk about; people still think linearly. So while I wouldn't expect the parallelize switch to work on Word or Excel, or any paint program, I would expect it to make enough use of parallelism to make a standard application twice as fast as a single-threaded program, even if it uses all 8+ cores to get there. It's going to be inefficient as anything, but it's still a 'free' 2x improvement in speed.

    Will the /parallelize switch become anything more than a gimmick to make old programs and casual games run faster? That I don't know; that's where my crystal ball fails me. Seeing what they're doing with the Task Parallel Library, I wouldn't be too surprised if, between that and software transactional memory, doing something like running multiple copies in parallel and simply throwing away the unused portions (something like what the CPU does) makes it possible for the compiler and library to parallelize a lot more than Anders thinks is possible. We may also see more fine-grained parallelism and SIMD-type stuff built into the VMs/CPUs. On the other hand, I've got a classical CS background, understand compiler design, and remember the 1K machines, not those behemoth 64K machines that Anders was talking about. I know how difficult the problem is, but it only has to be solved once.

    My random 2c,
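    The "run multiple copies in parallel and throw away the unused portions" idea can be sketched with nothing but a thread pool. This is purely my illustration of the concept, not anything an actual /parallelize switch does — but it shows how, for side-effect-free branches, a compiler could start both outcomes before the condition is known:

```python
# Sketch of speculative parallelism: evaluate both branches of a decision
# in parallel before the condition is known, then keep the one the
# condition selects and discard the other. Illustrative only; no real
# compiler feature is shown here. Wasteful, but a "free" speedup on
# idle cores, like CPU speculative execution.
from concurrent.futures import ThreadPoolExecutor

def expensive_condition():
    return sum(range(100_000)) % 2 == 0

def branch_a():
    return sum(i * i for i in range(10_000))

def branch_b():
    return sum(i * 3 for i in range(10_000))

with ThreadPoolExecutor() as pool:
    # Start everything at once, including the condition itself.
    cond = pool.submit(expensive_condition)
    a = pool.submit(branch_a)
    b = pool.submit(branch_b)
    # Keep the branch the condition picks; the other result is discarded.
    result = a.result() if cond.result() else b.result()

print(result)
```

    The catch, of course, is that this only works when the branches have no side effects — which is exactly why the transactional-memory work matters.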

  • Visual Studio Debugger Tips & Tricks

      I enjoyed the talk, but I also wished you'd been able to get it in. I'm a native programmer switching to .NET.

      I'm not sure why they'd be any more 'weird' in .NET than they are in native code.

      As a games programmer, I've had several problems that I've had to troubleshoot in release mode because it takes too long to run things in debug mode. I assume that for .NET, I'd have to attach to the process after it starts to avoid running a debug build. I assume that a release build would inline the accessor, so the only options available are hardware breakpoints and print statements. :(

      I've found them fantastically useful in debugging things like buffer overruns in native code. Even though that class of bugs isn't supposed to be there, I suspect that there is a new class of bugs related to multi-threading that are coming at us. Data breakpoints seem like an obvious way (to me) to troubleshoot these types of issues.
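      For the multi-threading class of bugs, a software stand-in for a data breakpoint can be sketched by wrapping the field so every write is observed. This is just my illustration of the break-on-write idea (real data breakpoints are a hardware/debugger feature, and the `Watched` class here is hypothetical):

```python
# Software stand-in for a data breakpoint: wrap the field in a property
# so every write is observed, and trap the moment a corrupting value
# lands. Real data breakpoints do this in hardware; the point is the
# same: break on *writes to the data*, not on a line of code.

class Watched:
    def __init__(self):
        self._health = 100

    @property
    def health(self):
        return self._health

    @health.setter
    def health(self, value):
        if value < 0:  # the condition we want to catch in the act
            raise AssertionError(f"health corrupted: {value}")
        self._health = value

player = Watched()
player.health = 50       # fine
try:
    player.health = -10  # the buggy write is caught as it happens
except AssertionError as e:
    print(e)  # health corrupted: -10
```

      Whichever thread performs the bad write gets stopped right there, which is exactly what you want when the symptom otherwise shows up long after the corruption.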

  • Rahul Patil: Complexities of Testing Concurrency

    Charles, great shows. Keep up the good work. Yes, I did learn about this stuff in college, but that was too many years ago, and everything since then has been single-threaded. I know in theory how to write code that is thread-safe, but... One thing he mentioned that I think you missed: you've been covering the new work that's being done to take tasks and partition them over an unknown number of CPUs/threads. That's fascinating and important stuff, but it's only half the problem.

    The other part is locks. How are they going to work in this brave new world? Are we OK with just using 'lock' for everything, or is that going to be too heavy a performance hit? When you add in multiple CPUs (including GPGPUs), caching, etc., does that make locks too expensive? Are we going to be stuck either taking the performance hit of more comprehensive locks, or having to know exactly which type of lock needs to be held when? Are static analysis tools enough to guarantee that locks are acquired and released in the same order, or do we need new constructs to deal with these issues in the many-core world? I suspect that locks are a much bigger part of the problem/solution, and I haven't really seen them talked about.

    Functional programming uses immutable objects and avoids the problem, but what are the rest of us supposed to do?
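    The lock-ordering invariant I'm worried about — the thing a static analysis tool would have to check — can be sketched in a few lines. This is my own illustration (the `transfer` helper is hypothetical): if every thread agrees on one global acquisition order, the classic A-waits-for-B / B-waits-for-A deadlock cannot form.

```python
# Sketch of consistent lock ordering: both threads want the same pair of
# locks but ask for them in opposite orders. Sorting by a stable key
# before acquiring means they actually take them in the SAME order, so
# the classic two-lock deadlock cannot occur. Names are illustrative.
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def transfer(src_lock, dst_lock, action):
    # Sort by a stable key (object id) so every caller agrees on order.
    first, second = sorted((src_lock, dst_lock), key=id)
    with first:
        with second:
            action()

log = []
t1 = threading.Thread(target=transfer,
                      args=(lock_a, lock_b, lambda: log.append("a->b")))
t2 = threading.Thread(target=transfer,
                      args=(lock_b, lock_a, lambda: log.append("b->a")))
t1.start(); t2.start()
t1.join(); t2.join()

print(sorted(log))  # ['a->b', 'b->a']
```

    Without the sort, this exact pattern is a deadlock waiting to happen — which is why I'd love tools (or language constructs) that enforce the ordering for you instead of leaving it as a convention.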