E2E: Erik Meijer and Burton Smith - Concurrency, Parallelism and Programming



The great Burton Smith, Microsoft Technical Fellow and an international leader in high-performance computer architecture and programming languages for parallel computing, joins functional programming purist and language design guru Erik Meijer to discuss several major themes of parallel computing and distributed programming. As always, you will get a lesson in history, present trends, and future possibilities. This is simply an awesome and deeply wonderful conversation. Burton is a treasure.

Erik shows up for the conversation only after Burton begins to talk about a potential definition of functional programming. Right on cue, Erik arrives!

Burton will be presenting his thinking on parallel and concurrent programming at PDC09. He will also be a panelist on the Future of Programming panel (and Erik will be the panel moderator - you won't want to miss the panel if you are attending PDC!).




    The Discussion

    •

      Love these Charles!

    •

      Me too. I love this job!! Smiley

    •

      If you ever need an understudy Charles... Smiley

    •

      Come on, Charles, you've got to give me some time to sleep Smiley. Just when I think I can catch up on C9 videos, you go and release another one I must watch Smiley.

    •

      Sorry, man! Smiley I think you will particularly enjoy this one.


      BTW, AWESOME job on the VS 2010 Learning Course, Jason. Many thanks to you and your team.



    •

      This video was very enjoyable; lots of humor and insight Smiley


      I love the idea of ultra-cheap cross-core/processor communication that facilitates extremely fine-grained parallelism. Meanwhile maybe some form of complexity and strictness analysis will help determine sensible concurrency granularity given pure semantics.


      Quote of the talk: "dysfunctional programming" - a brilliant way to frame every other kind of programming. Not serious, just fun, heh

    •

      Always love the Burton videos. They are right up there with the Beckman videos in the must-watch category ... you wish they would just keep going for a few more hours.

    •

      Good stuff.

      In terms of exceptions as values, at the 500-foot level, it would seem that if Object had a new property to "hold" an exception, the type system would/could just work. You could use normal try/catch or not, as needed. In a message-passing model, I would tend to think all non-void functions need to return a type even if that type is an exception.

      var x = foo(0);        // foo returns an exception inside an Int object.
      Write(x.ToString());   // x.ToString() returns the exception text inside an object.
      if (x.IsException())   // Any object (including value types) can be tested for an exception.
          Write("x holds exceptional value.");
      var y = x + 1;         // Evaluation fails here because x "contains" an exceptional value.
      return y;              // Return normal result.


      On "y = x + 1", does the runtime throw, or just return the "exceptional" x? And how should void functions be handled?

      Here is a small token of my appreciation, Erik:

      [image]
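The "exceptions as values" sketch above can be mocked up in Python (the `Result` wrapper, `foo`, and the propagation rule are hypothetical names for illustration, not an actual runtime feature). It also suggests one answer to the question about `y = x + 1`: the runtime need not throw; arithmetic can simply propagate the exceptional `x`.

```python
class Result:
    """A hypothetical object that "holds" either a normal value or an exception."""

    def __init__(self, value=None, error=None):
        self.value = value
        self.error = error

    def is_exception(self):
        return self.error is not None

    def __add__(self, other):
        # Arithmetic on an exceptional value propagates it rather than throwing.
        if self.is_exception():
            return self
        return Result(value=self.value + other)

    def __str__(self):
        # ToString() analogue: yields the exception text when exceptional.
        return str(self.error) if self.is_exception() else str(self.value)


def foo(n):
    """A non-void function that returns an exception *as* its result."""
    try:
        return Result(value=10 // n)
    except ZeroDivisionError as exc:
        return Result(error=exc)


x = foo(0)
print(x.is_exception())   # True: x holds the captured ZeroDivisionError
y = x + 1                 # no throw: the exceptional x simply propagates
print(y.is_exception())   # True
```

Void functions remain the open question: with nothing to return, they would need either a unit-like `Result` or some out-of-band signal.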

    •

      You know, I don't think the answer to Charles's question "will there be new languages" is that obvious. Aren't Java and C# really just C++, which in turn is really just C? Isn't F# just OCaml, which in turn is ML? Haskell and Smalltalk are really old too.

      I wonder if any completely new languages that aren't based on anything existing will emerge in the foreseeable future. Sure, the ones we have will continue to evolve and fork, but will there be a completely new general-purpose language? The answer is less obvious, at least Smiley

    •

      Actually, it's quite obvious. C#, Java, C++, and C are sugar-coated assembler. Reasoning about assembler, even sugar-coated, is a lost cause. Making those languages into something that can be reasoned about at compile and especially run-time would be practically impossible because of the long, hairy legacy those languages carry around.

      In order to run a program on parallel hardware, the run-time would have to reason about side effects to come up with some strategy for partitioning the computational graph into workloads that have minimal interactions with each other.

      If many-core processors have cores of different capabilities (which seems to be the case), run-time reasoning and JIT will be a necessity.

      It seems like none of the existing imperative languages will survive the transition to the parallel era. Of course, run-times will still be written in something that is sugar-coated assembly, yet for general-purpose programming completely new languages would be required.

      Declarative and richly typed presumably.

      Also, to the point of run-time reasoning and code generation: to provide fault tolerance, the computational graph might need to be re-evaluated if a computation node returns an exceptional value or goes into a non-termination state. That, in theory, would allow automatic remediation of run-away queries in databases and handling of non-responding services in the cloud (as well as mutating hardware: failed or hot-plugged general- and special-purpose CPUs, failed or hot-plugged memory, and so on).

      It probably will take another 10 to 20 years to get it right, but it looks like that's where things are going.
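The remediation idea in the comment above can be sketched in Python (the replica functions and the helper are invented purely for illustration): when one replica of a computation node returns an exceptional value, the runtime re-evaluates the node on another replica instead of failing the whole graph.

```python
def evaluate_with_remediation(node_inputs, replicas):
    """Evaluate one node of a computational graph; if a replica returns an
    exceptional value, re-evaluate on the next replica instead of failing."""
    errors = []
    for replica in replicas:
        try:
            return replica(*node_inputs)
        except Exception as exc:   # exceptional value from this replica
            errors.append(exc)     # a real runtime might blacklist it here
    raise RuntimeError(f"all replicas failed: {errors}")


def flaky(x):
    raise IOError("simulated hardware fault")


def healthy(x):
    return x * x


print(evaluate_with_remediation((7,), [flaky, healthy]))  # 49
```

Handling non-termination would additionally require running each replica under a deadline, so a hung node could be killed and blacklisted rather than waited on forever.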

    •

      I think the language style you are referring to is akin to the Flow-Based Programming languages of component processes. Since FBP dates back to the 1970s, it would seem to back up Al_'s assertion that what we think are new programming languages are just nice facades on older ideas...  http://en.wikipedia.org/wiki/Flow-based_programming

    •

      Right, and old ideas are just facades on even older ideas and so on recursively till the big bang Smiley

      In the end it comes down to beliefs: whether one believes this or that language is "new" or not. Attempts to define "pure novelty" would end up nowhere.

    •

      On the subject of strict or lenient evaluation:

      It seems that an advanced enough run-time can and should use both, based on the accumulated "knowledge" (stats) about the workloads being executed.


      The expectation that something can be strictly evaluated is false in an absolute sense, because each and every CPU instruction and/or memory read/write may fail because of faulty hardware. Yet it can be statistically true. If hardware is somehow known to be 99.something% reliable, such an assumption can be made safely (in a statistical sense); otherwise nothing can ever be computed or done.

      (I believe that proponents of strict evaluation are stuck because they base their reasoning on incorrect assumptions without explicitly stating what those assumptions are, which is a known issue that plagued physics for centuries and most likely still does.)


      The same must apply to algorithms as well. If an algorithm is known to be predictable on a given workload (either statistically or by divine intervention of the human), it's OK to evaluate it strictly. If there is no prior knowledge, lazy evaluation is the way to go, and please gather execution stats upon exit so they can be reused in future evaluations/executions. And if it does not exit in the requested amount of time, abandon (preferably kill first) the execution and blacklist it (till the end of time or the next divine intervention).


      From 10,000 feet it looks like a nice logical schema with a feedback loop, which is statistically a necessity for each and every successful ecosystem (observe nature).
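The stats-driven policy described above can be sketched in Python (the `AdaptiveThunk` class, its 10 ms budget, and the stats table are all hypothetical): a thunk stays lazy until forced, records how long the workload took, and the runtime consults those stats before deciding to evaluate the same workload strictly next time.

```python
import time


class AdaptiveThunk:
    """Delay a computation, but record how long it took so a runtime
    could choose strict evaluation for that workload in the future.
    (Hypothetical policy; the budget is an arbitrary threshold.)"""

    stats = {}  # workload name -> last observed duration in seconds

    def __init__(self, name, fn):
        self.name, self.fn = name, fn
        self.done, self.value = False, None

    @classmethod
    def should_evaluate_strictly(cls, name, budget=0.01):
        # Eager-evaluate only workloads known (statistically) to be cheap.
        return cls.stats.get(name, float("inf")) <= budget

    def force(self):
        if not self.done:
            start = time.perf_counter()
            self.value = self.fn()
            AdaptiveThunk.stats[self.name] = time.perf_counter() - start
            self.done = True
        return self.value


cheap = AdaptiveThunk("cheap", lambda: sum(range(100)))
print(AdaptiveThunk.should_evaluate_strictly("cheap"))  # False: no stats yet
cheap.force()
print(AdaptiveThunk.should_evaluate_strictly("cheap"))  # True: observed to be fast
```

The same table could drive the blacklist: a workload whose recorded duration exceeds the timeout would simply never be scheduled strictly again.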


    •

      "I think the language style that you are referring to is akin to the Flow-Based Programming languages of component processes"


      It seems much of today's concurrent lineup (i.e. CCR, Axum, Erlang, TPL, functional programming, etc.) has discovered or re-discovered the same things (i.e. black boxes with messages). At the base level, it seems this guy nailed it back in the 70s. The right road seems to float around the FBP ideas. Add hw support for efficient message passing (as Burton points out), and maybe even some kind of hw support for sw bounded queues, and things get interesting. Add correct-by-construction language support (i.e. Axum and beyond) and it gets really interesting. The syntax is not the important thing; it is the general model that must lead you down the correct path and make the wrong path hard (i.e. the reverse of today).
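A minimal Python sketch of the FBP-style "black boxes with messages" model this comment describes, with software bounded queues standing in for the hardware support Burton mentions (the component shape and the `None` shutdown sentinel are invented for illustration):

```python
import queue
import threading


def component(inbox, outbox):
    """An FBP-style black box: it only reads messages and emits messages."""
    while True:
        msg = inbox.get()
        if msg is None:        # sentinel: shut down and propagate downstream
            outbox.put(None)
            return
        outbox.put(msg * 2)    # the component's entire "behavior"


# Bounded queues provide backpressure: put() blocks when a queue is full.
a_to_b = queue.Queue(maxsize=4)
b_out = queue.Queue(maxsize=4)

worker = threading.Thread(target=component, args=(a_to_b, b_out))
worker.start()

for i in range(3):
    a_to_b.put(i)
a_to_b.put(None)
worker.join()

results = []
while (msg := b_out.get()) is not None:
    results.append(msg)
print(results)                 # [0, 2, 4]
```

Because the components share no state and interact only through the queues, composing a larger graph is just wiring more queues between more black boxes.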

    •



      I totally agree, but the CCR is special in the list you present, since if you look beyond the CCR, in either MSRS or the DSS/CCR Toolkit, you will find a very nice VPL IDE that really nails down the graphical ideas of FBP.


      Too bad all this goodness from "BigTop" just slipped past most of the .NET world.


    •

      Indeed.  As Burton made clear, the hardest part of the many-core problem is figuring out how to successfully program, to compose, in a manner that makes all the newly gained power useful for users, who experience computing mostly through software abstractions. Let's use those cores, brothers and sisters.



    •

      Could these points be manifest in Google's Go language? It appears to address pretty much every issue raised. Funny timing.


      One final thing: I also find myself writing in a functional style in .NET, especially when doing recursion, though I'm by no means a functional language programmer.


    •

      I don't quite follow the reasoning Erik wrote on the whiteboard.


      He's saying that if you have a function

      [code] F( x ) { return 13; }[/code]

      (so basically, F always returns 13, regardless of what you pass in)


      and then you call F like so:

      F( E )

      would it return 13? One would say, "Sure, because it doesn't matter what you pass in."

      But his point is: what if (for instance) evaluating the parameter E throws an exception? Then F doesn't return 13 (or doesn't even get called).


      That means, you cannot replace an arbitrary instance of "F( E )" with "13".


      But I don't understand. This is supposed to be purely functional. If I say that F returns 13 regardless of the parameter, I would want (and maybe expect) the compiler not to bother evaluating the parameter E to begin with.


      That is, even if I were to call F( 1/0 ), I could make the argument that 13 should be returned, because ultimately it's about evaluating F, not evaluating the parameter to F, whose own evaluation exists solely for the purpose of being passed to F.
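The whiteboard point can be demonstrated in Python (a stand-in for the strict vs. non-strict semantics under discussion, not Erik's actual example): under strict evaluation the argument is evaluated before the call, so `F(E)` can fail even though `F` ignores its argument; passing a thunk simulates the lazy semantics under which `F(E)` really is 13.

```python
def F(x):
    # F ignores its argument entirely and always returns 13.
    return 13


def E():
    raise ZeroDivisionError("the argument blows up")


# Strict (call-by-value): the argument is evaluated before F is entered,
# so F(E()) raises and 13 is never produced.
try:
    F(E())
except ZeroDivisionError:
    print("strict call raised before F ran")

# Lazy (call-by-name), simulated by passing a thunk that is never forced:
def F_lazy(thunk):
    return 13   # the unused argument is never evaluated


print(F_lazy(E))  # 13
```

This is exactly why `F(E)` can be replaced by `13` in a lazy language but not in a strict one: only the lazy semantics makes the substitution valid for every E.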




    •

      That sounds right, but suppose that F really was using x; then it would probably need to evaluate x, and in that case the result would be different if x threw an exception.

    Comments closed

    Comments have been closed since this content was published more than 30 days ago, but if you'd like to send us feedback you can Contact Us.