
Expert to Expert - Joe Duffy: Perspectives on Concurrent Programming and Parallelism


Joe Duffy spends a lot of time thinking about the future of concurrent programming and parallelism. In his role as Lead Developer in the Parallel Computing Platform team, Joe is the creator of PLINQ and a key contributor to many of the managed (.NET) concurrency incubations happening in and around his broader team. He's also an author (check out his latest book, Concurrent Programming on Windows).

You've met Joe many times before on C9, and the concurrency topic should be quite familiar to you by now. There's a lot of very innovative thinking going on in the Parallel Computing Platform team (and it's not just about the managed world, as you know).

We've spent a lot of time discussing library-based approaches to enabling parallelism in a readily understandable, predictable, safe and scalable way for .NET programmers. We've also spent time on language-level approaches to the problem (new constructs in C# that make it easier to compose in a semi-functional way (lambdas, LINQ, etc.) or purely in a hybrid-functional way in F# or with experimental DSLs like Maestro).

Erik Meijer, Expert to Expert host, programming language designer and one of the high priests of the lambda calculus, spends a great deal of time thinking about the problem of software's capability to scale effectively (as efficiently, safely, and composably as possible) in the many-core age. So, we add Joe + Erik and we get many excellent, insightful questions and answers. Of course the notion of side effects plays a big role here, and we even debate the merits of Haskell in the real world. This is a great conversation. It goes deep, but not so far into the rabbit hole that you won't be able to find your way back. Smiley

Enjoy!


Follow the Discussion

  • Christian Liensberger (littleguru)

    Niiiiiiiiiiiiiiiiiiiice! Thanks guys.

    Joe has a point: Haskell only looks complicated. If the language looked more like C/C++, C# or Java, people would probably use it more and not get scared off simply by checking it out. The concepts are often just named in a way that scares people.

    I have already had a similar experience trying to explain lambda expressions in C#. They are not that complicated to grasp, but people are scared when you tell them the term "Lambda Expression(s)". It's as if the name activates a blocking system in the brain.

  • Jonathan Merriweather (Cyonix)
    That discussion was great; this is what C9 is about. It's great having Joe on again, his problem domain is an extremely interesting one. If you get the chance, I'd like to know more about Joe's new unnamed language that he said he'd been working on.

    This video followed the C9 formula for success (in my opinion):

    1. Passionate discussion
    2. Interesting topic
    3. Reference and/or sneak peek at future direction
    4. A conversation not a press release

    Note: Although this video followed the above pattern for success, anything with Erik Meijer/Brian Beckman == win
  • I second Cyonix. I just ordered Joe's book Smiley
  • Allan Lindqvist (aL_)
    oooo awesome :O i've been dying to get an update from those guys Smiley

    --midwatch edit (17:24)--
    now this is why i love channel 9.. right here. cutting edge stuff. stuff you're not even sure you can talk about yet. that's what i love about channel9 Smiley just wanted to call that out.. that should be the motto of channel9 Wink

    --midwatch edit2 (40:40)--
    erik talks about how the dlr is a lot about talking to legacy code. i don't think that's entirely true.. different problems require different things of the language, and one thing the dlr really enables imo is a wider choice of approaches to those problems Smiley i don't think it's only for old code; i may want to have things more static in one part of the system and more dynamic in another Smiley
  • One of the best C9 episodes yet, IMO. Get two people in a room and let them discuss stuff without fear of getting too technical.

    I'm with Erik on this one personally. My main beef with Duffy's side of the debate (which is essentially just saying "Yes, we agree in principle, but we don't want to force everyone to learn something completely new so we're going to try to tack it on to what we already have") is the following:

    1. I think the cost of not having a "robust" solution to this issue is underestimated. IMO the cost of forcing every man, woman and child with a compiler to learn a Haskell-like purely functional language is peanuts compared to the cost of letting them loose in a manycore world without the adequate tools. The perceived cost may be different, but that's what marketing is for.

    2. The complexity of adding pure "bubbles" to an impure language quickly mounts to the point where the overall system is FAR more complicated than Haskell is. Think of all the different "kinds" of effects annotations you would need for a pure subset (transitive const, immutable, locally mutable but not externally visible, etc.). There are many ways of being impure, but only one way of being pure. Pure default makes life easier for both compiler writers and users.

    3. It's tempting to be "nice" to existing .Net developers by not forcing them to learn something new, but I wonder if you're really helping them in the long run.  Think about Hoare's nullable pointers - it was added on a whim because it was easy, and now he refers to it as his Billion Dollar Mistake. Sometimes doing the hard thing is what will end up making life easier for people. Related to 1. above, but the main point is "don't be nice for the sake of being nice".

    4. Very few entities could pull off kick-starting a paradigm shift. If not Microsoft, then who? Nobody is my guess.


    That said, the disagreement is far smaller than it may appear. Pretty much everyone agrees that in a perfect world we'd all just switch to a Haskell-like language, the question is if the real-world cost of doing that outweighs the long term cost of not doing it. Some say no (me included), some say yes.

    As a suggestion for future interviews, I'd recommend heading back to the UK and checking up with SPJ et al, they're working on some really cool parallelism stuff in Haskell (specifically nested data parallelism).

    Oh, and about SPJ's graph that you had on C9 a while back, I don't actually think that he said Haskell was useless, he said Haskell started out being useless (circa 1990) and then progressed to become more and more useful (while remaining safe), whereas other languages started out being useful but unsafe, and are now adding features for becoming more safe. So the idea is that both prongs of attack are nearing the same Nirvana, but with different strategies.
  • Charles
    The great Simon Peyton-Jones will appear on C9 the next time he's in town (Redmond, WA, USA). And, yes, Erik Meijer will be there too! Smiley That should be fun.

    I really enjoyed this particular conversation because it really surfaced the vision behind Expert to Expert!

    Thank you Erik and Joe! We will meet again.
    C
  • Allan Lindqvist (aL_)
    hehe i gotta go with duffy on this Wink
    imo even the most perfect system is useless if no one uses it [because it's too hard or whatever] Smiley

    purity is great, but what i like about duffy and that whole team is that they are still pragmatic. they want people to have a use for their stuff, not just create something that's perfectly pure and beautiful but not usable in a practical sense Smiley

    maybe we won't be able to create completely linearly scaling programs without errors while using side effects, but i think we'll always have errors in our programs, no matter how pure they are Smiley

    however, there is no right answer to this conundrum imo, just different opinions Smiley if nothing else, you could make a whole lot of money with your haskell programs when they kick my .net programs' butt on the million core machines of tomorrow Smiley
  • But by "practical" you're just talking about marketing, not any technical issues. The issue isn't about using the actual language, that's clearly doable and perfectly practical, the issue is convincing people that they should make the investment to learn something new. That might be difficult, and costly. I guess my point is that making something that's only "90% pure" (which is really a nonsensical concept, like "90% virgin", Smiley) may be less costly up front (because you can extend C# into something that's more messy and complicated in an absolute sense, but offers a smaller relative learning curve because people already know what we have now), but those "10% problems" will add up over the coming decades to dwarf the one-time cost of a paradigm shift.

    So I think it's a false dichotomy that we're choosing between "perfect but unusable" and "imperfect but useful".

    A small company wouldn't have an option, they would have to go for an evolutionary approach, but a company like Microsoft does have the option of a wholesale paradigm shift. They'd have seminars, and books, and Visual Studio integration, and maybe even get hardware partners to produce Reduceron-style co-processors for graph reduction. MS can throw their weight and marketing behind it to make it happen, but very few other companies could.
  • Allan Lindqvist (aL_)
    no i'm not talking about marketing.. i'm talking about the average joe programmer who has a problem to solve within a set timeframe. he wants a solution to his problem first and foremost, not something that is completely pure and "beautiful"

    purity adds a bunch of constraints on programs (that's the whole point) to make absolutely double sure that they are "correct", but people are able to create things that work even without these constraints Smiley people are writing all kinds of useful stuff in non-pure languages. so purity is not a requirement, it's an aid imo.

    you mention small companies. well it's in those companies that .net really gives an advantage because it's such a boost in productivity. if microsoft was to make the entire runtime and bcl pure, those companies would have to look elsewhere for that productivity boost

    i don't think c# should become haskell because we already have haskell. if we want to use haskell, use haskell Smiley

    you talk about the "coming decades". yes, it is probably true that in 50 years' time we'll open a vs2008 project and sigh loudly, but i'm absolutely sure that we would do that even if we switched to haskell right now Smiley i just don't think haskell or functional purity is the end-all-perfect solution for everything. new paradigms will always come.. Smiley
  • Bent Rasmussen (exoteric)
    I prefer videos that talk about, not now, not now.next, but now.next.next. It's really the strategic research that's interesting. Short-term stuff can be learned from blogs and books, but knowing what directions the research and development is going is very interesting. This Expert-2-Expert interview fits the bill. We get an insight into the status quo, what's coming, and further out, what's being investigated as potential solutions. And even (although only shortly) the idea of "what would you do if you could start all over (with the CLR)". Another excellent interview.
  • Allan Lindqvist (aL_)

    great, great interview as always.. i don't want to take a break at all Big Smile

    it touches on a lot of things i've been burning cycles in my puny brain on..
    the pull between more static and more dynamic. both are being called "the solution" but they pull in such different directions, and c# is in the middle..

    i think what anders said at pdc and other times is so very true. there is no one correct model. sometimes we need to be dynamic, sometimes we want to be really static.. there just isn't a single solution, a single model. so should we accept this and try making using different models easier, or should we make it more difficult Smiley

    i know you don't agree sylvan, but i think it's better to get 80%-90% there and get high adoption than to get 100% and make 99% of programmers relearn 99% of what they know Smiley we can't just ignore the people.. that is unfortunate Smiley

    bananas must be one of the more parallel fruits btw..

  • I have just ordered Joe's book. He obviously is an industry leader in his field. You're right about .net being used in small companies like the one I work for. We took the business decision to design our financial package in .net due to the customisation offered by a .net environment.
  • Are the MP3 and MP4 versions going to be made available soon?
  • Charles
    Good catch! We will make these available soon!
    C
  • Thanks Charles. Any chance the mp3 could be added to Building-Channel-9-Inside-EvNet-Part-1 as well?
  • Awesome! Joe's Concurrency book was one of the very few comp.sci books I instantly re-read a second time cover to cover last year. Brilliant stuff and I've been waving it under the noses of anyone who codes at work.
  • I still think you're presenting a false dichotomy. It's not about "beautiful" vs "getting the job done". It's about "the best way to get the job done in 5-10 years when we'll have hundreds of cores". The only thing stopping the "average joe programmer" from spending the effort to learn something radically different is marketing.

    You keep implying that purity is somehow at odds with pragmatism and getting the job done, and there's just no evidence of that. It's at odds with legacy code, but that's it. MS successfully got people to use .Net rather than their legacy systems, so it's clearly possible to get over that hump. Yes, it'll be easier to sell an extension to C#, but at what point do you step back and say "hang on, C# is now more complicated than Haskell for a newbie, maybe we should have a language that does this from the ground up rather than tack things on to an existing substrate that doesn't quite fit"?
    C# with purity annotations is definitely better than nothing at all (because as I said, we need this to be pushed by a big entity), but having the option of another language (C# would still exist, clearly!) that's pure from the start would be even better.
  • Allan Lindqvist (aL_)

    it's time, not marketing, that's stopping joe programmer from learning something radically different.. marketing may help convince people to spend that time, but in a lot of cases it's just not possible.. if the boss man tells you that he wants the server app back up, you cannot tell him/her that you gotta go relearn a large portion of your programming skills first Smiley yes i know that's an extreme example, but time is always an issue in industrial programming

    Adding things to c# will make it more complex, this is true. but when it's "too" complex is a subjective thing. when c# does become "too" complex for someone, that person will move to another language (hopefully still .net) Smiley if c# becomes more complicated than haskell for newbies, aren't the newbies free to use haskell?

    My point is that you already do have a language that is pure from the start, haskell, so why not use that if that's what you prefer.. perhaps the best thing would be to have a haskell compiler for the clr (with its own pure bcl). also, there's f#, it's a lot more pure than c# and it's being added as a first class language, isn't that sort of what you're calling out for?

    the lines of programming are pretty blurry. sometimes you need some purity and sometimes you really need to be dynamic. c#4, as a true general purpose language (i don't agree with erik that haskell is GP, not in practice anyway), tries to find a middle road. it's not optimal for dynamic programming and it's not optimal for pure functional programming; if you want more dynamic/more pure, use a different language Smiley
    i don't think there can be one language that pleases everyone, and trying to force people to do this or that just doesn't work. they have to come willingly.. i think exposing people to functional constructs through c#4 will entice them to try purer languages like f# or haskell where it's applicable, and that can't be bad right?

  • sylvan: I agree with your original post, that the cost of the messes people will produce with C# outweighs the cost of learning something more suitable for many-core development.... The problem is not that Microsoft are promoting parallel extensions over miracle language 'x', the problem is that 'x' does not exist *and* no one really knows what 'x' is. People have some ideas what 'x' may look like but they are just that: ideas. It doesn't exist because no one in the field really knows what it looks like and how to solve the (many) problems. There are some proposed solutions, most of which, while having some positive points, look just like what they are: guesses. I defy anyone to say here even what colour the solution will be (yes that's a joke).

  • can we have the video download for iPhone as well?
  • Grant Boyle (GrantB)
    Very nice work gents.

    I'm still bullish on message passing in the long term for a number of reasons. I find the Erlang world view (if not the details of the language) very appealing. And systems like Singularity make me even more optimistic about it. I look at IOCP and see the same sort of thing going on.

    It might be interesting for a future video to "go deep" on how synchronization primitives work (right down to the opcodes). I'd like to know if we'll ever get to the point where hardware designers can no longer maintain cache coherency system wide.

    It might also be interesting to look into how Second Life's new Mono based script engine handles massive parallelism. You could also touch on Erlang style processes and the way the CCR can leverage iterators.

    I need a coffee.
  • > I'd like to know if we'll ever get to the point where hardware designers can no longer maintain cache coherency system wide.

    Don't you think we have a big enough problem without that rearing its head Smiley  I suppose NUMA may mitigate that ... assuming the overlying system can manage it of course.
  • That's still just a marketing issue. If your boss doesn't want you to learn "H#", then clearly more marketing is needed until he sends you off on a Microsoft H# seminar.

    Yes newbies are free to use Haskell, but as I tried to explain earlier switching to purely functional programming is a fairly big shift, and I think the chances of it happening are much greater if a big company like MS pushes it. If the shift doesn't happen, then the costs involved will be very high - for everyone, not just Microsoft.

    I do use Haskell over C#, and I do think that Haskell is a much more practical and pragmatic language for concurrency than anything .Net has to offer, including F# which is still not pure (again, you can't be 90% virgin, you either are or you aren't). I just think widespread adoption (which will benefit us all) of this has to be pushed by a big company, and MS could be it. If not, well I guess I'll have to accept that .Net isn't especially suitable for concurrency and avoid it for those scenarios. Unfortunately, those scenarios will increase in frequency in the future.
  • If you're going to wait until a miracle language appears you'll have a long wait ahead of you. Meanwhile we have languages that are sufficiently more sophisticated (than C# et al) in this area that they almost seem miraculous if you squint your eyes. Why waste that massive benefit because we haven't yet found a magical cure for all problems?

    It's not about there being a silver bullet, it's about recognizing fairly uncontroversial facts, such as "ad hoc side effects have no business in a manycore world", and making sure our languages don't violate that. 
    You don't have to have the perfect answer to every single detail, but it's a good start to get the basic substrate right - and I think it's a pretty safe bet that whatever other technology emerges, side effects will need to be tamed, so let's start there.
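
    As one concrete instance of "taming" side effects, here's a minimal STM sketch in Haskell (Control.Concurrent.STM from the stm package; the account names are made up for illustration) - composable atomic transactions instead of ad hoc locking:

    import Control.Concurrent.STM

    -- a composable, atomic transfer; the STM type keeps arbitrary IO out
    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to amount = do
      balance <- readTVar from
      writeTVar from (balance - amount)
      modifyTVar' to (+ amount)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      atomically (transfer a b 40)
      readTVarIO a >>= print  -- 60
      readTVarIO b >>= print  -- 40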
  • I'm glad you enjoyed the discussion.

    I'm reading over all of your comments, and I see many great points being made.

    One that I'd like to deposit for your consideration.  Haskell is an ideal language _in certain contexts_.  For some people, those contexts are important enough to learn an entirely new language.  Highly parallel programs may be one such context, where the safety moving wholesale to Haskell brings is worth the cost of switching.  Even if that were true, _most_ .NET developers are not currently salivating for parallelism.  In 5 years?  Maybe.  But not today.  So as a broad blanket statement, it is safe to say that the perceived cost of switching to Haskell is far higher than the perceived benefit for the bulk of the development community.  This is why an incremental, move-select-parts-of-your-program-over-to-the-safe-world-piecemeal strategy is so attractive.

    In addition to that, I mentioned in the video that Haskell is not a panacea.  It has many interesting ideas, but some that I consider to be debatable for the .NET community at large.  Algebraic data types mixed with structural pattern matching -- with type classes for polymorphism -- are useful for a certain class of programming, but telling a whole community of object-oriented developers to switch overnight will not only result in religious clashes, but is probably just plain wrong anyway.  There is a plethora of shared knowledge (e.g., in patterns -- see GoF), collateral (books, articles, training), and frameworks that Windows developers rely on each day, which are strongly tied to the C++-family of languages.  Moreover, I don't believe vanilla Haskell (98) has solved _all_ of the problems associated with composition of separate agents that are performing I/O.  The "one top-level I/O loop" style of programming doesn't scale beyond one coarse-grained agent.  For that, something more like Occam or Erlang is needed, and this is crucial to address in order to enable composition of fine-grain with coarse-grain concurrency.

    Food for thought, I hope.

    Best Regards,
    ---joe

  • I agree that Haskell isn't a panacea. I never said it was. The main thing I would want from it is purity with some way of abstracting over effects (and ideally being able to write your own for EDSLs). This is precisely why I think it would be useful for someone with deep pockets to take a stab at creating a new purely functional programming language knowing what we now know. Haskell is about 20 years old, so it certainly has some warts accumulated like any language of that age - I think a benevolent dictator is needed to take the main lessons from Haskell and lift them over onto a clean slate - possibly with provisions to make it less scary for newbies (e.g. C style syntax).

    Haskell 98 is fairly outdated and almost never what anyone means when they say Haskell (the new standard is underway). At minimum you need to include Concurrent Haskell to get what people these days are using, but looking at things like STM and NDP you really see just how far ahead it is.
    So taking that into account, I'm not sure I understand your issues with "one top-level I/O loop"? What's wrong with having N top-level IO-loops (forkIO) communicating with messages? Then each of those could have lots of fine grained task-based concurrency (using Control.Parallel.Strategies) and even nested data parallelism for even more fine grained parallelism. This pure code could even use (provably) local mutable state using the ST monad. I don't see how Erlang or Occam offer anything that you can't do in Haskell (though you may want to provide a constrained monad that only provides certain IO-operations, like forking, but this is trivial to do, to be more Erlang-like).
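
    To make that concrete, here's a minimal sketch of the "N top-level IO loops" style with forkIO and channels (the worker and message names are made up for illustration):

    import Control.Concurrent (forkIO)
    import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
    import Control.Monad (forM_, forever)

    -- one coarse-grained agent: its own top-level IO loop, fed by an inbox
    worker :: Chan String -> Chan String -> IO ()
    worker inbox outbox = forever $ do
      msg <- readChan inbox
      writeChan outbox ("processed: " ++ msg)

    main :: IO ()
    main = do
      inbox  <- newChan
      outbox <- newChan
      _ <- forkIO (worker inbox outbox)  -- fork another top-level loop
      forM_ ["a", "b", "c"] (writeChan inbox)
      forM_ [1 .. 3 :: Int] (\_ -> readChan outbox >>= putStrLn)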

    EDIT: In fact, this is one area where I think Haskell really shines. The composition of fine grained and coarse grained parallelism. You have threads and message passing (or shared state with STM) for coarse grained. Then you have a bunch of pure code executed on top of that (with local mutable state). This pure code can be parallelised in a task-based way using Control.Parallel.Strategies (e.g. PLINQ-style, or Cilk-style, but safe because it's all pure). Then for incredibly fine grained parallelism, there's now work going on with nested data parallelism. This gives massive scalability (GPU-like) while still being flexible enough (Nested! The compiler does the magic of flattening it for you) to offer a decent programming model. And then in the future, when Haskell takes over the world, you would have graph-reduction hardware (see Reduceron) and just run Haskell on that for massive instruction-level parallelism. So Haskell (or H#) could truly offer scalable parallelism at every granularity.
    So I agree that getting parallelism at all levels is crucial, but I don't see how Haskell fails in any way in this area, in fact I think it excels.
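
    And a sketch of the pure, task-parallel layer using Control.Parallel.Strategies (combinator names from newer versions of the parallel package; a toy example only):

    import Control.Parallel.Strategies (parList, rseq, using)

    -- evaluate the list elements in parallel; mechanically safe to
    -- parallelise because fib is pure
    results :: [Integer]
    results = map fib [30 .. 34] `using` parList rseq
      where
        fib n = if n < 2 then n else fib (n - 1) + fib (n - 2)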
  • I agree that there are many useful extensions to Haskell that solve particular problems.  (This is why I was careful to say "vanilla Haskell (98)".)  Concurrent Haskell, Data Parallel Haskell, and STM TVars are all great examples.  It is still unclear to me whether a new language ought to dictate a more constrained model for constructing programs, or whether providing a great collection of independent (but composable) packages is better.  The former is typically needed to reach a broad developer audience, but is at odds with some of the most fundamental language design principles that I strongly believe in (e.g., the C++ model of helping developers to do the "right" thing but not preventing them from doing the "wrong" thing).

    In any case, we probably agree on one thing: in the long-term, a new language is needed.  We're just debating execution strategies to get there.  I firmly believe we need a stepping stone from here to there, but that "there" is still a very important place to end up.

    ---joe
  • Charles
    That would be the MP4 version. It will show up soon
    C
  • Allan Lindqvist (aL_)

    haskell might be great for parallelism but like joe said, it's not great for everything. c# is really good at a very wide amount of stuff, and i think that means more to most general purpose programmers than excellence in a particular area Smiley

    As far as new languages go.. well.. yeah, a better language is always great, but i think it's easy to overstate the importance of languages and understate the importance of philosophies and understanding of what one is doing.. a lot of these problems really transcend computing in my view Smiley the world is massively parallel after all.

    what i mean is that it's perfectly possible to create perfectly pure-no-side-effects applications in any language; the philosophy is the core. no language will always produce perfectly scalable programs. we must not forget that we, the humans, are the ones who want to express something, and the language is just a means (albeit an important one) to do that Smiley

  • Can you give an example of something it's not great at?

    Haskell is a general purpose language too, and you could argue it's more general purpose than C# (especially since C# can't easily be compiled - at least not with currently released commercial products; I'm aware of Bartok). Can you generate x86 assembly on the fly into a buffer and then jump into it and start executing in C#? Well, in Haskell you can and it's a breeze (see the Harpy library). Can you use C# to write hardware? How about reactive animation? How about financial modelling? Or parser combinators? Or automatically differentiating numerical functions? And even if you manage to answer "yes" to any of those (which you can), please do compare the amount of work required to get it running, and how well the abstractions work.

    Don't just say that Haskell isn't general purpose (while C# is) unless you can back it up.

    Also, you're very close to the Turing tarpit of just saying "well all languages are equivalent, so it doesn't matter which one we use", but by that same reasoning we should all use assembly.
  • stevo_
    Why not have something new, not .NET, which is based on what was learnt from .NET: an updated static type system (perhaps one that can better enable a ~dynamic lang), an intermediate language, native interop, GC.. but base it on functional immutability and native / .net interop.

    Wasn't .NET a revolution? it seems to have done well. I don't think you need to expect developers to give up their investment in .net code, just as .NET was harnessed and interoped with their native 'investments' (plus I bet companies would actually WANT to 'port' their code up for the advances it gives).


    oh and get nullability right.. etc

    But really, why not? it won't sell?
  • Hi sylvan

    I wasn't suggesting nothing is done or people do nothing. After all, fully parallel applications have been possible for a long time (I have even written some). The problem is not 'can it be done' but 'how do we make it easier'.  Your original point was that it would be better to teach people something new rather than (say) relying on C# to provide the necessary concurrency. Sure it would be good to teach programmers a new paradigm that really helps solve the concurrency issues -- I am just not sure we have much idea as to what it is.
  • Allan Lindqvist (aL_)
    sure, though i gotta wonder why you don't choose to ask joe or erik that question..
    it's not great for anything where the static type of something you need to talk to isn't known.
    being completely pure also restricts what kind of optimizations you can make. all those guarantees about isolation just do not come for free.
    basically every argument that is used against the static-ness of c# is even more applicable to haskell

    you can use c#/.net to write
    web apps,
    ria apps,
    desktop apps,
    xbox games,
    mobile apps and 
    embedded apps
    all without having to write your own runtime and without using obscure 3rd party libraries. is that true for haskell?

    it's interesting that you ask me to "compare the amount of work required to get it running" because that doesn't seem to be a requirement for you.. just how much work would it be to create a graphically rich, multi-touch application in haskell? i'd imagine you'd have to re-invent a whole bunch of things already present in c#/.net

    "C# can't easily be compiled" not sure what you're talking about.. the c# compiler is included in the framework but surely you must know that
    "Can you generate x86 assembly on the fly into a buffer and then jump into it and start executing in C#?" well, c# is jitted so no. but you can generate IL with AssemblyBuilder/Reflection.Emit. however i dont see that as a very general purpose or command thing to do
    "Can you use C# to write hardware?" yes? pinvoke/CreateFile?
    "How about reactive animation?" from what i understand, that pretty much what depedency properties do

    i'm not saying it doesn't matter what language we use. i'm saying that language is not the most important thing; the understanding of the concept is. you seem to think that if we all used haskell, everything would just work, and that's not true.
    no language will let you escape the need to understand the problem, and if you do understand the problem you truly can implement it in almost any language (even assembler). it's just a matter of preference what language you actually use.

    the debate between static/dynamic has been going on for what? 40-50 years? knowing that, i don't understand how anyone can think there is a simple solution that will just work in any situation.. that's just religion..

    the cold hard fact is that haskell is a minority language. if it was sooo great, sooo easy to learn, soo general purpose, soo free from problems, it would be more widely used. it's not, so it isn't. blaming marketing is pretty lame because haskell has been around for ~20 years and still hasn't become mainstream. that's not just a marketing problem..

    i'm not married to c#. it has problems and can be made better. it's not great for everything.
    the fact that you can't admit that about haskell just makes you look silly, and worse than that, you're hindering the progress of haskell by not admitting its flaws..
  • Charles
    C# is a general purpose programming language (very general purpose). Concurrency and parallelism represent specific domain challenges. I don't think that we should expect developers to compose systems using only one tool. That seems rather strange, no? If you want to write software that calculates non-linear differential equations as part of solving complex problems of fluid dynamics, well, you probably want to use a tool that a) efficiently and effectively enables you to do so without compromising speed and accuracy and b) lets your highly parallelized system scale out. You don't use a general purpose language confined to a single very large runtime + library environment...

    I think the question here (like we discussed in this interview) is:

    Does Microsoft create a new general purpose language and runtime that specifically addresses the concurrency problem while providing engineers with high level abstractions that make programming parallel systems easy, effective, safe and scalable?

    Haskell is not the panacea as Joe and even Erik have said (and so has Simon Peyton Jones).

    In the end, general purpose programmers who have evolved in this imperative, sequential world must learn to think differently, first and foremost. Getting your minds around completely foreign concepts is a great approach to this. I think it's time for a series of conversations and tutorials on Channel 9 based on the theme of Expand Your Mind. First up: Functional Thinking. Stay tuned.

    C
  • I didn't ask Erik or Joe because they didn't make the claims you did.

    Re: the compilation issue, I was talking about the whole compilation not just the compilation to IL - C# is jitted, so you can't compile it off line and statically link it to any random .o file you have. Bartok could do that, but in general that's a no go so far. This makes Haskell a lot easier to use in a lot of scenarios. For example, embedded software, or just general applications really (no need to distribute the .Net framework).

    "Writing hardware" doesn't mean "writing to hardware", it means writing actual hardware - i.e. desigining hardware circuits. See Lava for one Haskell library doing this, or VHDL and Verilog for domain specific languages for it.


    Haskell has a Data.Dynamic, which allows you to deal with dynamic data similar to how C# 4.0 does: it gives you a static type of "Dynamic" for values which have no known static type. It doesn't have any syntactic sugar for it like C# 4.0 does, but it doesn't need it to the same extent - see Parsec for an example of parser combinators in Haskell, where the parser looks like you're just reading dynamic data and spitting out statically typed counterparts. So the ability to write EDSLs really removes a lot of the need to deal with dynamic types at runtime. This doesn't work in every single case, clearly, but the point is that EDSLs can be used to deal with dynamic data in a way where the details of actually looking things up are hidden (sort of like how C# 4.0 has that interface you implement for method lookup).
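
    A tiny sketch of what Data.Dynamic looks like in practice (illustrative values only):

    import Data.Dynamic (Dynamic, fromDynamic, toDyn)

    -- values of different static types, all carried as Dynamic
    bag :: [Dynamic]
    bag = [toDyn (42 :: Int), toDyn "hello", toDyn True]

    -- recover just the Ints; fromDynamic returns Nothing on a type mismatch
    ints :: [Int]
    ints = [n | Just n <- map fromDynamic bag]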

    Strictly speaking Haskell can do anything C can do, as it's natively compiled (and even has a C backend on most compilers). You could definitely get it working on the Xbox 360 for example (I've been meaning to try to do that as I have an Xbox 360 devkit at work). It's not supported (i.e. a ready solution supplied by MS), but that's precisely what I'm arguing for so not really relevant - the point is that none of those things you mention pose any real problems for the language. On the flip side, writing an elegant EDSL in C# is practically impossible (this situation is not the same for F# though, which has Monads now).
    The point isn't to compare libraries (though you could use them as an example of something). I'm arguing precisely that MS needs to have a pure language with similar support, including libraries. The point is to compare how well the language itself does with a specific problem.
    As long as you have a C API to something (e.g. multi-touch API), getting it working in Haskell is probably easier than in C# (the foreign function interface is a lot nicer in Haskell IMO).

    Oh and being pure doesn't restrict what optimizations you can make, because you can choose to have local mutable state, or just bail out and do stuff in the IO monad, if you really need those optimizations. You're just forced to specify up front where this happens.
    Not being pure does restrict optimizations though, since many optimizations fail in the presence of mutable state. For example, merely writing to a pointer can cause a massive hit if the compiler isn't able to statically prove that none of the other pointers in scope alias the memory you just wrote to - if it's not able to do that, which in general it won't be, then it needs to reload any data read from those pointers since the data held in registers may be stale. This is a simple example, there are lots of others. Take a look at the fusion/flattening transformations in DPH for example, they're absolutely crucial to make nested data parallel computations tractable, and they totally rely on the code being pure.
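
    To illustrate the earlier "provably local mutable state" point, a minimal ST-monad sketch (a made-up sum function, just for illustration):

    import Control.Monad.ST (runST)
    import Data.STRef (modifySTRef, newSTRef, readSTRef)

    -- an imperative-looking loop over a mutable accumulator, yet sumST is
    -- a pure function: the type of runST proves the STRef cannot escape
    sumST :: [Int] -> Int
    sumST xs = runST $ do
      acc <- newSTRef 0
      mapM_ (\x -> modifySTRef acc (+ x)) xs
      readSTRef acc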


    I've never said Haskell is perfect, I'm saying it's better than C# in a lot of ways, and one important way is concurrency and parallelism. This discussion is being sidetracked from that, though, because I just feel it's important to refute some incorrect statements made by you and Charles claiming, on no evidence, that Haskell somehow isn't general purpose. It is.
    If I thought Haskell was perfect I wouldn't be advocating someone create a competitor to it now would I? I'm precisely saying that someone with pull needs to take the good bits from Haskell, and all the lessons learnt, and produce a new purely functional language and then sell it like no language has been sold before. I can give you a list of things I feel need to be looked at, if you're really interested (could you?).
    I've never claimed that Haskell (nor any language) solves every problem, please don't put words in my mouth. And drop the ad-hominems too. Accusing me of being "religious" when you're the one who's arguing against a language you don't even know is a bit much.

    Bringing up popularity is not very convincing, IMO. Popularity is a function of a whole bunch of other things, language merit is a tiny part of it. It's mostly historical accidents (people don't use C because they think language research hasn't improved in the last 40 years - there are other factors that dominate).


    Charles, concurrency and parallelism may have been domain specific challenges five years ago. I'd argue that we're already past that point, and it'll only get worse - concurrency and parallelism are general challenges that will need to be addressed in any language claiming to be general purpose. C# is certainly making strides towards that, and I've already said that this is good stuff - certainly better than not doing anything, I just think there should be a "fully backed" MS alternative for those who are ready to accept the challenges of 5-10 years from now today.
  • Charles
    To be clear, I never said that Haskell is not general purpose. I did imply that it is NOT as general purpose as C#. And I think I'm right here. C# is designed to support a huge number of scenarios. Haskell was not designed to. I don't think even the Haskell creators would say that Haskell is as general purpose in nature as something like C#. But this is all moot. The issue is whether or not C# should become a one-size-fits-all GPL or if .NET is used the way it was intended from the very beginning: use the right tool for the job. As long as the tool is supported (CLI compliant), then it will work in the .NET world.

    I mean, would it make sense to write entire applications in a DSL like Maestro? Of course not. You would build the parts of the system that are not parallelizable (they don't need to be.....) in the tool you are comfortable with (C#) and use other tools (like Maestro, for example, or F#) to solve specific problems (like breaking a complex data computation into pieces and running them in parallel).

    I love this type of discourse and I do not claim that my opinions are any more than my opinions (and they can change based on conversational data).

    Keep on posting,
    C
  • Pretty sure you said explicitly that it wasn't general purpose in the video, at which point Erik protested wildly.

    Haskell was designed from the start to be general purpose, it even says so in the original documents from the early meetings. It was never intended to be some sort of domain-specific language. Indeed people are doing everything from designing hardware, to writing operating systems, to file systems, to 3D shooters, and web applications in it.

    One thing that should be pointed out here, is that the need for multiple languages is reduced if your main language is good at supporting EDSLs (Embedded Domain Specific Language). Haskell (and e.g. F#) does this via monads, other languages do it via e.g. macros, and it's very common to deal with half a dozen of them in any given app (in fact, doing IO itself can be seen as an EDSL, but things like STM, Parsers, non-determinism, dealing with XML, etc. are common too). That doesn't mean you will never need another language, just that it's less common. For example, Haskell people still generally use SQL, even though there are Database EDSLs in Haskell (similar to LINQ-to-SQL).
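
    As a small illustration of the EDSL point, a Parsec sketch (parsec 3 module names; a toy parser for comma-separated integers):

    import Text.Parsec (ParseError, char, digit, many1, parse, sepBy)
    import Text.Parsec.String (Parser)

    -- the grammar reads almost like a specification: digits separated by commas
    ints :: Parser [Int]
    ints = (read <$> many1 digit) `sepBy` char ','

    parseInts :: String -> Either ParseError [Int]
    parseInts = parse ints "<input>"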
  • Charles
    I asked Erik "Is Haskell really general purpose?". He then responded as one would expect a high priest of the lambda calculus to respond Smiley

    I didn't say "Haskell is not general purpose." There is a difference between asking a question and stating a position.

    I think you're right about a language that supports EDSLs. That's something we talked about in conversation with Anders at JAOO 2008.
    C
  • Okay, I guess my recollection confused that with the statement later where you said that it "doesn't scale" (49:40). Mea culpa. Anyway, that's unsupported too. At the end of the day you could always choose to write 100% of your code in the IO monad, at which point you're no worse (or better) off than in an "effects-everywhere" language like C#.
  • Charles
    Indeed. What I meant was Haskell (thinking in the Haskell way, more so than the syntax of the Haskell language) won't scale in the sense that gp programmers won't be able, en masse, to make the conceptual switch in a reasonable amount of time. For some gp developers, the shift in thinking is just too high of a bar (and in some cases, unnecessary). How many gp developers out there understand what a monad is? Most developers (by most I mean greater than 50%) do not think monadically, not explicitly, anyway. Most gp developers (at least on our platform) are also not functional thinkers (functional in the functional programming sense).

    This is a hard problem that requires changes in several layers of the software stack, including thinking. Someday, perhaps we'll have a new language, let's call it Nirvana (remember that SPJ brief interview? Note where "Nirvana" lives on the Useful vs Safe graph (where safe-ness is measured in how side-effectual the language is -> unsafe = very side-effectual, safe = not side-effectual...)), that can be all things to all scenarios (probably by virtue of its native support for EDSLs and its highly generic runtime). Who knows.

    C
  • I think we should give people more credit. Most people probably didn't understand basic algebra before they were taught it either. I really don't think Haskell is that big of a deal compared to something like OOP with all the nuances it has. It's just that it doesn't share too much with OOP so it's not as easy to learn as "yet another OOP language", but if you think in terms of the overall effort to go from zero to OOP or zero to Haskell, I'd probably expect Haskell to win.
    I was a TA at university teaching Haskell, and people seemed to have a much harder time picking up Java than Haskell if they didn't already know any programming at all (and initially, people who hadn't programmed at all did better than people who had - they just had less baggage - but after a few weeks the experienced students did start to find ways of reusing existing knowledge).

    It's a challenge yes, but like I said earlier it's mostly marketing. You need to convince people to make the effort, and help them once they're convinced. But I don't really see it as that big of a problem.
  • William Stacey (staceyw)
    I agree with that Charles.  Take this Haskell/GUI sample from wikipedia:
    import Graphics.UI.WX  -- wxHaskell; provides frame, button, staticText, etc.

    gui :: IO ()
    gui = do
      f  <- frame [ text := "Event Handling" ]
      st <- staticText f [ text := "You haven't clicked the button yet." ]
      b  <- button f [ text := "Click me!"
                     , on command := set st [ text := "You have clicked the button!" ]
                     ]
      set f [ layout := column 25 [ widget st, widget b ] ]

    Ooo my...I don't see myself ever being comfortable mentally with that and being able to write (or read) my windows apps that way.  Not saying it will never happen, but I just don't feel it (and I am pretty open minded).  Moreover, would Intellisense be able to work with this?

    I am interested to hear more thoughts on linear types, and more thoughts on types that could have attributes to dictate concurrency behaviors and usage rights (i.e. read only, read/write, reads for these callers, etc.) with compiler enforcement.

    I have to watch this show again.  Good show guys.

    A random related side:  My 4-way 2.66GHz HP is noticeably slower than my 2-way 3GHz HP in the Vista look and feel category.  I know there are many factors.  But more cores is not really helping the user UI experience more than faster cores in my experience (from the clock on the wall).  This is because the "user" things are bound by the UI loop (including the shell).  So we cannot forget about the "fast" path while researching many-core solutions.  Hardware and software vendors still need to push each core and not think adding cores is the ultimate solution to the problem.


  • Uh, I don't see what's so bad about that? This is pretty much what you'd write in any language. Is it just that the syntax is unfamiliar? That's a fairly shallow reason to dismiss something forever.

    I really don't see how that sample is in any way more complicated than the equivalent C# code. You create a bunch of widgets, and set event handlers (specified inline here, but they could obviously be named and declared separately if they were larger), what's the problem, surely this is very familiar if you've used any imperative GUI toolkit? 
    Honestly I thought you were being sarcastic at first, poking fun at Charles by taking something that's very similar to standard imperative programming to illustrate how simple it is, but the rest of the post indicates you're not.

    And yes, Haskell is statically typed (that's sort of the point), so auto-completion works fine. There is a mode for Haskell in visual studio, but sadly it's not maintained anymore...
  • William Stacey (staceyw)
    I was careful not to say "ever".  I just don't find myself "thinking" in it any time soon.  I am not defending c#, nor am I dissing haskell.  However, I have seen many times people say they can "think" in c# and it just rolls off the brain.  I am in that crowd.  I could think in c# after a couple days.  This cannot be downplayed, as this ability (IMO) makes a language usable and popular more than anything else.  If it is not approachable right away, it tends to favor a limited crowd - for right or wrong.  I don't believe in absolutes or language religions as I have used too many of them and seen them come and go (I admit I loved Cobol and RPG back in the day).  They have to be very natural and readable.  I would rather see more lines for clear intent than a terse alternative any day.  The "dot" in .Net is also lost here from what I can see, and that would be a big deal.

    There is also an issue which you pointed out.  All these 3rd party libraries with limited or no support that come and go.  That was the same reason I finally left my beloved Perl.  It turns into a treasure hunt all the time, looking for and downloading libraries and extensions and patches and version mismatches, etc.  That part could be fixed if MS made it a .net language however.  I say, use what you like to write in.  The syntax question however is OT to the larger issue of the right Model for concurrency.  I think the truth lies somewhere between Erlang, CCR, Linq, and Agents (not Agent Smith), as I think Joe was leading to.

    BTW.  Anyone here actually written a non-trivial Windows application in Haskell and have a URL to a sample?  Would be interested in looking at it.  How about Erik?
  • Could you explain specifically what about that snippet makes it hard to "think" about? You're just saying kind of vaguely that you can't find yourself "thinking" in it, but I don't understand what specifically is posing a barrier to you.
    The way I see it, that snippet is, barring syntax, pretty much identical to what you would write in any imperative language to do GUIs. I could accept that you find it harder to use a certain style of programming, e.g. functional vs imperative, but you chose an entirely imperative snippet of Haskell! I don't even see the difference between this and C# other than the extremely superficial (different syntax).

    You create objects imperatively.
    You set properties (e.g. text) and event handlers (e.g. on command) on those objects.

    How is this a different way to think about things than C#? I just don't understand what specifically you have an issue with.
  • The one question:

        " If all .net languages eventually compile down to CIL, then why not just focus on analysing the CIL code for parallel optimisations and then literally restructure the code on that common level? (rather than finding a common model for all high level paradigms) "
  • The simple answer is that when you get down to the CIL a lot of the context has been lost. It is true, of course, that the CIL *is* a representation of whatever you wrote but the translation is one way and the higher level information is lost. For example, this post could be read one letter at a time but without assembling the letters into words the content will be lost.  It may be possible to get some speedup by examining the CIL, CPUs do that right now on the 'real' machine code by doing out-of-order and speculative execution.
  • Charles

    This is why the functional programming model is so appealing from a parallelization point of view: the higher level meaning is not lost, but in plain sight and reliably computable (surprise-free, with controlled (controllable) side effects). Determining how to split up code to run in parallel, for example, is explicitly expressed in the higher level thinking (expressed in code); therefore it can be accurately interpreted and recomposed in the compilation process.

    C

  • Allan Lindqvist (aL_)
    that's exactly the point i'm trying to make (although i might not be so good at it)
    different tools are good for different things. c# is good at a lot of things while, say, haskell is really great for parallelization Smiley

    another point you make that i've also tried to put forth is that functional thinking is the real gold, and that it can be applied across almost all languages, purely functional or not
  • Allan Lindqvist (aL_)

    all i'm saying is that c# and .net are more general purpose given what's available today. and really, that's all that counts. it may be possible to write xbox games in haskell but, as you say, it (along with a lot of other things) is not supported. and if it's not supported at least to some extent, it might as well be impossible in most business scenarios.

    we differ in what we mean by general purpose. the examples you give, compiling to native and writing hardware, are general purpose in a very formal way, but they are very specific applications and not something that you do very often as a regular programmer. what i mean is, given all programmers, very few do what you describe.. you see, most programmers don't care if it's the "language" or the "api". if it's possible with an api, it's possible with the language. i know that's not formally correct, but that's the way people reason..
    in that regard, haskell is less general purpose as it has a smaller api. it does have the potential to be more general purpose, but in practice it isn't.

    you are right though. i don't know much about haskell. i'm sorry if you feel i've been putting words in your mouth, that was not my intent.

    one thing you don't seem to realize however is that just because someone can't (or doesn't have the time to) prove to you that something is difficult, that doesn't mean it isn't difficult. no, i don't care enough about winning a discussion at a forum to create a whole app proving some point about c# or haskell. but that doesn't mean that i or stacey or charles are lying when we say we feel something is unintuitive.
    that's how we feel, there's nothing you can do about that Smiley

    also, where are these awesome 3d shooters and web apps written in haskell? i'd love to see some links Smiley

  • William Stacey (staceyw)

    Specifically, I don't like the " <- ". I don't like losing the "dot". I don't like the lack of an explicit statement terminator.

    But this is all subjective. Some people love the Mona Lisa, other people see a dude with a wig on.

  • Charles
     "It's as if the name activates a blocking system in the brain." 

    If an executing conceptual framework inside the brain is not able to understand certain incoming information, then it shouldn't block. Smiley

    The message is simply ignored, asynchronously or sequentially (it doesn't really matter; not in terms of understanding a foreign concept), as expected; the reaction is involuntary, initially. Therefore, the ability for blocking to occur is there, but not highly probable (this is a human brain, after all).

    As the language is learned, so too is the understanding it realizes, sure. But nothing stops you from investigating a foreign concept directly, bypassing the requirement of mastering any of its associated higher level and formalized expressive abstractions (languages). In fact, this should be the default learning pattern: understand the concepts first (or concurrently, if you can); then language-level expression becomes an exercise in mastering language-specific patterns and syntax. In this sense, a programming language is really just a thin wrapper around a core conceptual framework.
     
    It's rather hard to interpret or express that which you don't understand.

    Hypothesis: You can think functionally and formally express side effects in a composable manner without programming in a functional language.
     
    The expression of parallel intentions, for example, can be achieved in an imperative way, implicitly: the functional-ness is abstracted away by the tooling, e.g., Parallel Extensions for .NET, LINQ, lambdas, TPL, CCR and the runtime (CLR), or in a functional way, explicitly (F#, Haskell, etc). In the latter case there is no indirection overhead, but if you've mastered an imperative toolset, then you should still be able to exploit functional concepts successfully in both a composable and side-effectual way in the language you already "speak" fluently.

    This is exactly what Anders, Joe, et al are thinking with respect to C# evolution and the various parallel .NET libraries in production and incubation. 

    I think Erik and I are going to pay a visit to Anders soon. Indeed, we'll dig into the thinking behind this thinking as part of a future episode of Expert to Expert. Sound good?

    Keep on posting,
    C
  • Allan Lindqvist (aL_)

    sounds great charles Smiley
    as i've said before (though in a far less eloquent manner than you) i truly believe that the concepts, the thinking, is where it's at.

    learning language x is in a sense a byproduct of learning the concepts behind x imo.. in a way the language is a means to express the concepts.
    you can in some cases learn the language without grasping the concepts, but that makes you a lousy programmer Smiley

    looking forward to that anders video and more conceptual goodness Smiley

  • Have you really tried "tasting" the syntax (ie worked with it for a couple of weeks)? If not, how can you know you do not like it? Wink

    Even though there are religious wars about syntax, I guess most people would adapt pretty easily to another syntax if they really were convinced there was a substantial benefit in using another language.
    If our brains didn't adapt easily to weird syntaxes, we wouldn't have seen all those C-inspired syntaxes and Perl would have remained a sick fantasy inside only one man's brain. Smiley
  • RE: “CIL *is* a representation of whatever you wrote but the translation is one way and the higher level information is lost”

    I’m not entirely convinced that the metadata relating to the high level thinking/interpretation is even required to solve this kind of problem? (i.e. mapping serial/sequential code to an equivalent and more parallelized version). In some packages such as Simulink (Mathworks) and BlockBuilder for Simulink (Maplesoft) the model of the algorithm/application is analysed and algebraically simplified into a form that is far more efficient, but equivalent. As, at the low level, the entire application can ultimately be described as a system of equations, why would this abstract principle even require metadata to reform the solution? Only the basic procedural/functional subsets/partitions need be maintained.

    In your example of a string of characters, even this can be considered a single value as is commonly used in databases and cryptography (see gmp mp bignum library applications for references). So on the base, all of this can simply be considered one large algebraic problem? Or do you disagree?

