Comments

joedu
  • Expert to Expert - Joe Duffy: Perspectives on Concurrent Programming and Parallelism

    I agree that there are many useful extensions to Haskell that solve particular problems.  (This is why I was careful to say "vanilla Haskell (98)".)  Concurrent Haskell, Data Parallel Haskell, and STM TVars are all great examples.  It is still unclear to me whether a new language ought to dictate a more constrained model for constructing programs, or whether providing a great collection of independent (but composable) packages is better.  The former is typically needed to reach a broad developer audience, but is at odds with some of the most fundamental language design principles that I strongly believe in (e.g., the C++ model of helping developers to do the "right" thing but not preventing them from doing the "wrong" thing).

    In any case, we probably agree on one thing: in the long-term, a new language is needed.  We're just debating execution strategies to get there.  I firmly believe we need a stepping stone from here to there, but that "there" is still a very important place to end up.

    ---joe
  • Expert to Expert - Joe Duffy: Perspectives on Concurrent Programming and Parallelism

    I'm glad you enjoyed the discussion.

    I'm reading over all of your comments, and I see many great points being made.

    Here's one that I'd like to deposit for your consideration.  Haskell is an ideal language _in certain contexts_.  For some people, those contexts are important enough to justify learning an entirely new language.  Highly parallel programs may be one such context, where the safety that moving wholesale to Haskell brings is worth the cost of switching.  Even if that were true, _most_ .NET developers are not currently salivating for parallelism.  In 5 years?  Maybe.  But not today.  So as a broad blanket statement, it is safe to say that the perceived cost of switching to Haskell is far higher than the perceived benefit for the bulk of the development community.  This is why an incremental, move-select-parts-of-your-program-over-to-the-safe-world-piecemeal strategy is so attractive.

    In addition to that, I mentioned in the video that Haskell is not a panacea.  It has many interesting ideas, but some that I consider to be debatable for the .NET community at large.  Algebraic data types mixed with structural pattern matching -- with type classes for polymorphism -- are useful for a certain class of programming, but telling a whole community of object-oriented developers to switch overnight will not only result in religious clashes, but is probably just plain wrong anyway.  There is a plethora of shared knowledge (e.g., in patterns -- see GoF), collateral (books, articles, training), and frameworks that Windows developers rely on each day, which are strongly tied to the C++ family of languages.  Moreover, I don't believe vanilla Haskell (98) has solved _all_ of the problems associated with composition of separate agents that are performing I/O.  The "one top-level I/O loop" style of programming doesn't scale beyond one coarse-grained agent.  For that, something more like Occam or Erlang is needed, and this is crucial to address in order to enable composition of fine-grain with coarse-grain concurrency.

    Food for thought, I hope.

    Best Regards,
    ---joe

  • Using the Parallel Extensions to the .NET Framework

    Type systems, isolation, immutability, ... ?  I know not of what you speak.  ;)

    ---joe
    http://www.bluebytesoftware.com/blog/
  • Inside Parallel Extensions for .NET 2008 CTP Part 1

    Anders was very instrumental in getting Parallel Extensions off the ground and designed right.  He's still involved regularly on hard design problems, but is a busy guy and works on a lot of things across the company.

    ---joe
  • Joe Duffy, Huseyin Yildiz, Daan Leijen, Stephen Toub - Parallel Extensions: Inside the Task Parallel

    littleguru,

    Your proposed syntax relies on the 1st pass of SEH, but can be written directly in IL or VB (since they support filters).  C# doesn't support them and, to be honest, I'm glad it doesn't.  We did consider this model to make AggregateExceptions more palatable, but for various reasons we don't think it would make a huge difference.  Moreover, the 2-pass model of SEH is problematic, and so we would prefer not to embellish it.

    I should restate a point from the video: we encourage developers, to the best of their ability, to prevent exceptions from leaking across parallel boundaries.  Life simply remains a lot easier that way.  Once the crossing is possible, you need to deal with AggregateExceptions, which is a bit like stepping through a wormhole:  you end up in a completely different part of the universe with little chance of getting back to your origin.

    The real issue is that with one-level deep examples like the one you show, you can certainly figure out how to pick out the exceptions you care about, handle them, etc.  We even offer the Handle API for this:

    try {
        ... parallel code ...
    } catch (AggregateException ae) {
        ae.Handle(delegate(Exception e) {
            if (e is FooException) {
                ... handle it ...
                return true;
            }
            return false;
        });
    }

    If, after running the delegate on all exceptions, there are any for which the delegate returned 'false' (i.e., unhandled), Handle rethrows a new AggregateException.  I admit, this code is a tad ugly, but even with 1st pass support you'd have to do something like this.  (Unless SEH knew to deliver only the exceptions that were chosen in the 1st pass selection criteria, which would require yet more machinery.)  But the issue is, what if you handle some FooExceptions, but leave some BarExceptions in there?  Again, those up the callstack will see AggregateExceptions and will need to have known to write the wacky code I show above.
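
    To make that concrete, here is a rough sketch (FooException and BarException are just stand-in types, and the elided regions are placeholders) of what a frame further up the callstack ends up writing:

    try {
        try {
            ... parallel code ...
        } catch (AggregateException ae) {
            ae.Handle(delegate(Exception e) {
                return e is FooException; // handle only FooExceptions
            });
            // Any remaining BarExceptions cause Handle to rethrow a
            // new AggregateException out of this frame.
        }
    } catch (AggregateException ae) {
        // Callers must know to expect an aggregate, not a bare BarException.
        foreach (Exception e in ae.InnerExceptions) {
            ... handle or log it ...
        }
    }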

    All of this is really to say that AggregateExceptions are fundamentally very different.  Exceptions in current programming languages are, for better or for worse, very single-threaded in nature.  They assume a linear, crawlable callstack, with try/catch blocks that are equipped to handle a single exception at a time.  I can't say I'm terribly happy with where we are, but I can say I think it's the best we can do right now given the current world of SEH.

    ---joe
  • Joe Duffy, Huseyin Yildiz, Daan Leijen, Stephen Toub - Parallel Extensions: Inside the Task Parallel

    evildictaitor,

    I do think Daan overstated the point (perhaps intentionally) about automatic/implicit parallelism.  It is true that many kinds of computations can be automatically run in parallel with little-to-no input from the developer.  When might this be possible?  When a computation is guaranteed to have neither side-effects nor thread-affinity.

    This already commonly applies to specialized frameworks and domain-specific languages.  Big hammer APIs like parsing an XML document or compressing a large stream of data also immediately come to mind.  Functional programming as a broader class of automatically parallelizable computations is an interesting one, but is not a silver bullet.  Mostly-functional languages are more popular than purely-functional ones; F# and LISP, for example, permit "silent" side-effects buried within otherwise pure computations, which means you can't really rely on their absence anywhere.

    Haskell and Miranda are two examples from the very small set of purely functional languages in which implicit parallelism is possible: all "silent" imperative effects are disallowed, but for certain type system accommodations (monads).  This allows you to at least know when parallelism might be dangerous, and it makes such danger the exception rather than the rule.  But even here, many real-world programs are constrained by data and control dependence.  You might be interested in John DeTreville's brief case study on this fact: http://lambda-the-ultimate.org/node/1948.
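
    To illustrate that dependence point with a rough C# sketch (the 'input' array and its length 'n' are assumed): even a perfectly pure computation can resist parallelization when each step needs the previous step's result:

    double[] prefix = new double[n];
    prefix[0] = input[0];
    for (int i = 1; i < n; i++)
    {
        // Loop-carried dependence: iteration i cannot begin until
        // iteration i-1 has produced prefix[i - 1].
        prefix[i] = prefix[i - 1] + input[i];
    }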

    Nevertheless, implicit and automatic parallelism are clearly of interest to researchers in the field.  I think what Daan was trying to say is that we're still a few years away from having a more general solution.  Between now and then, however, I would expect to see some specialized frameworks providing this; heck, just look at MATLAB and SQL for examples where this has already succeeded.
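
    For a rough illustration of the purity distinction in C# (PLINQ's AsParallel stands in for the "specialized framework" here, and System.Linq is assumed to be imported):

    // A side-effect-free body is a candidate for automatic parallelization:
    var squares = Enumerable.Range(0, 1000)
                            .AsParallel()
                            .Select(i => i * i)   // pure: touches no shared state
                            .ToArray();

    // A "silent" side effect buried in an otherwise pure-looking query
    // defeats that guarantee:
    int count = 0;
    var tainted = Enumerable.Range(0, 1000)
                            .AsParallel()
                            .Select(i => { count++; return i * i; })  // data race on 'count'
                            .ToArray();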

    Regards,
    ---joe
  • Programming in the Age of Concurrency - Anders Hejlsberg and Joe Duffy: Concurrent Programming with

    Hi Jaime,
    There is an overload of Parallel.For whose 'body' lambda is passed a ParallelState object.  This object offers a 'Stop' method, which is effectively the same as 'break' in a sequential loop.  So for example, your code would look something like this (the changes are the extra 'state' lambda parameter and the call to state.Stop()):

    System.Threading.Parallel.For(1, maximumIterations_, 1, (dd, state) =>
    {
       s = trapzd(function, lower, upper, dd);

       if (Math.Abs(s - olds) < tolerance_ * Math.Abs(olds))
       {
          exeeded = true;
          state.Stop();
          return;
       }
       olds = s;
    });

    I didn't try to compile this, but it should work and do what you are looking for.  Take care,
    ---joe

  • Programming in the Age of Concurrency - Anders Hejlsberg and Joe Duffy: Concurrent Programming with

    Now, about the more general issue of shared state.  Judah hit the nail right on the head in his response.  We do not (currently) reject programs due to reliance on shared state.  LINQ tends to lead programmers down a more functional programming style so the problem is less pervasive (though still there) in PLINQ.

    Please take a look at http://www.bluebytesoftware.com/blog/2007/09/15/ParallelFXMSDNMagArticles.aspx for some more details on what we call "parallelism blockers."  This includes shared state, thread affinity, and slight changes in exception behavior.  Our story here is not completely ironed out, at least not ironed out enough to describe to everybody right now.  When it is, you can be sure we'll be back here on Channel9 to discuss it.
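
    For a rough sketch of the shared-state blocker (assuming 'data' is an IEnumerable<double> and System.Linq is imported):

    // Shared mutable state: every iteration writes the same accumulator,
    // so these iterations cannot safely run concurrently as written.
    double sum = 0.0;
    foreach (double x in data)
    {
        sum += Math.Sqrt(x);
    }

    // The LINQ style nudges the same computation toward functional
    // composition, which removes the blocker:
    double sum2 = data.Select(x => Math.Sqrt(x)).Sum();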

    Thanks for all of the great feedback.  Keep it coming.

    ---joe

  • Programming in the Age of Concurrency - Anders Hejlsberg and Joe Duffy: Concurrent Programming with

    esoteric,

    I'm glad to hear you agree with our direction.  As to whether TPL is built on top of the existing thread-pool, it is currently not.  But we know that programs will be written that use both in the same process, both moving forward and when considering legacy apps, and thus there must be some resource management cooperation in the final solution.  Nothing is baked enough to discuss, but once it is you can bet we'll be looking for feedback on the approach.

    ---joe
  • Programming in the Age of Concurrency - Anders Hejlsberg and Joe Duffy: Concurrent Programming with

    PerfectPhase,

    Yes, these technologies are written entirely in managed code, and run on top of the stock CLR.  While this is true today, it's of course possible, like any .NET Framework class library, that we'll pursue opportunities for tighter integration with the runtime as the libraries are further developed.

    ---joe