Comments

sylvan
  • Expert to Expert - Joe Duffy: Perspectives on Concurrent Programming and Parallelism

    Okay, I guess my recollection confused that with the statement later where you said that it "doesn't scale" (49:40). Mea culpa. Anyway, that claim is unsupported too. At the end of the day you could always choose to write 100% of your code in the IO monad, at which point you're no worse (or better) off than in an "effects-everywhere" language like C#.
  • Expert to Expert - Joe Duffy: Perspectives on Concurrent Programming and Parallelism

    Pretty sure you said explicitly in the video that it wasn't general purpose, at which point Eric protested wildly.

    Haskell was designed from the start to be general purpose; it even says so in the original documents from the early meetings. It was never intended to be some sort of domain-specific language. Indeed, people are using it for everything from designing hardware to writing operating systems, file systems, 3D shooters, and web applications.

    One thing that should be pointed out here is that the need for multiple languages is reduced if your main language is good at supporting EDSLs (Embedded Domain-Specific Languages). Haskell (and e.g. F#) does this via monads, other languages do it via e.g. macros, and it's very common to deal with half a dozen of them in any given app (in fact, doing IO itself can be seen as an EDSL, but things like STM, parsers, non-determinism, and dealing with XML are common too). That doesn't mean you will never need another language, just that it's less common. For example, Haskell people still generally use SQL, even though there are database EDSLs in Haskell (similar to LINQ-to-SQL).
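
    To make the EDSL point concrete, here's a minimal sketch using STM (assuming GHC and a reasonably recent stm package; the transfer function and the numbers are mine, purely for illustration) - two balance updates compose into a single atomic transaction:

        import Control.Concurrent.STM

        -- STM as an embedded DSL: two updates compose into one
        -- atomic transaction, with no locks in user code.
        transfer :: TVar Int -> TVar Int -> Int -> STM ()
        transfer from to n = do
          modifyTVar' from (subtract n)
          modifyTVar' to (+ n)

        main :: IO ()
        main = do
          a <- newTVarIO 100
          b <- newTVarIO 0
          atomically (transfer a b 25)
          print =<< mapM readTVarIO [a, b]  -- [75,25]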
  • Expert to Expert - Joe Duffy: Perspectives on Concurrent Programming and Parallelism

    I didn't ask Eric or Joe because they didn't make the claims you did.

    Re: the compilation issue, I was talking about the whole compilation, not just the compilation to IL - C# is JITed, so you can't compile it offline and statically link it against any random .o file you have. Bartok could do that, but in general that's a no-go so far. This makes Haskell a lot easier to use in a lot of scenarios - for example, embedded software, or really just general applications (no need to distribute the .Net framework).

    "Writing hardware" doesn't mean "writing to hardware", it means writing actual hardware - i.e. desigining hardware circuits. See Lava for one Haskell library doing this, or VHDL and Verilog for domain specific languages for it.


    Haskell has Data.Dynamic, which lets you deal with dynamic data much as C# 4.0 does: it gives you a static type of "Dynamic" for values which have no known static type. It doesn't have any syntactic sugar for it like C# 4.0 does, but it doesn't need it to the same extent - see Parsec for an example of parser combinators in Haskell, where the parser looks like you're just reading dynamic data and spitting out statically typed counterparts. So the ability to write EDSLs really removes a lot of the need to deal with dynamic types at runtime. This doesn't work in every single case, clearly, but the point is that EDSLs can be used to deal with dynamic data in a way where the details of actually looking things up are hidden (sort of like the interface C# 4.0 has you implement for method lookup).
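
    For the curious, a minimal sketch of what Data.Dynamic looks like (toDyn injects any Typeable value into the single static type Dynamic; fromDynamic recovers it, or gives Nothing on a type mismatch - the names "values" and "firstInt" are just mine):

        import Data.Dynamic

        -- A heterogeneous list: every element has static type Dynamic.
        values :: [Dynamic]
        values = [toDyn (1 :: Int), toDyn "hello", toDyn True]

        -- Recover a value; Nothing if the runtime type doesn't match.
        firstInt :: Maybe Int
        firstInt = fromDynamic (head values)  -- Just 1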

    Strictly speaking, Haskell can do anything C can do, as it's natively compiled (and most compilers even have a C backend). You could definitely get it working on the Xbox 360, for example (I've been meaning to try, as I have an Xbox 360 devkit at work). It's not supported (i.e. there's no ready solution supplied by MS), but that's precisely what I'm arguing for, so it's not really relevant - the point is that none of the things you mention pose any real problem for the language. On the flip side, writing an elegant EDSL in C# is practically impossible (the situation is different for F#, though, which has monads now).
    The point isn't to compare libraries (though you could use them as an example of something). I'm arguing precisely that MS needs to have a pure language with similar support, including libraries. The point is to compare how well the language itself handles a specific problem.
    As long as you have a C API to something (e.g. a multi-touch API), getting it working in Haskell is probably easier than in C# (the foreign function interface is a lot nicer in Haskell, IMO).
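
    As a rough illustration of how light the FFI is, here's a complete sketch of a binding to cos from the standard C math library - one declaration and you're done:

        {-# LANGUAGE ForeignFunctionInterface #-}

        import Foreign.C.Types

        -- The entire binding: name the C symbol, give it a Haskell type.
        foreign import ccall "math.h cos"
          c_cos :: CDouble -> CDouble

        main :: IO ()
        main = print (c_cos 0)  -- 1.0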

    Oh, and being pure doesn't restrict what optimizations you can make, because you can choose to have local mutable state, or just bail out and do stuff in the IO monad, if you really need those optimizations. You're just forced to specify up front where this happens.
    Not being pure does restrict optimizations, though, since many optimizations fail in the presence of mutable state. For example, merely writing through a pointer can cause a massive hit if the compiler isn't able to statically prove that none of the other pointers in scope alias the memory you just wrote to - if it can't prove that, which in general it can't, it needs to reload any data read from those pointers, since the data held in registers may be stale. This is a simple example; there are lots of others. Take a look at the fusion/flattening transformations in DPH, for example: they're absolutely crucial to make nested data parallel computations tractable, and they totally rely on the code being pure.
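
    Here's what the local-mutable-state escape hatch looks like in practice - a minimal sketch with the ST monad (the function name is mine), mutating a reference freely on the inside while exposing a pure type on the outside:

        import Control.Monad.ST
        import Data.STRef

        -- A mutable accumulator inside, a pure function outside:
        -- runST's type guarantees the mutation can't leak out.
        sumST :: [Int] -> Int
        sumST xs = runST $ do
          ref <- newSTRef 0
          mapM_ (\x -> modifySTRef ref (+ x)) xs
          readSTRef ref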


    I've never said Haskell is perfect; I'm saying it's better than C# in a lot of ways, and one important way is concurrency and parallelism. This discussion is being sidetracked from that, though, because I feel it's important to refute some incorrect statements made by you and Charles claiming, on no evidence, that Haskell somehow isn't general purpose. It is.
    If I thought Haskell was perfect I wouldn't be advocating that someone create a competitor to it, now would I? I'm saying precisely that someone with pull needs to take the good bits from Haskell, and all the lessons learnt, and produce a new purely functional language, and then sell it like no language has been sold before. I can give you a list of things I feel need to be looked at, if you're really interested (could you?).
    I've never claimed that Haskell (or any language) solves every problem, so please don't put words in my mouth. And drop the ad hominems too. Accusing me of being "religious" when you're the one arguing against a language you don't even know is a bit much.

    Bringing up popularity is not very convincing, IMO. Popularity is a function of a whole bunch of other things; language merit is a tiny part of it. It's mostly historical accident (people don't use C because they think language research hasn't improved in the last 40 years - other factors dominate).


    Charles, concurrency and parallelism may have been domain-specific challenges five years ago. I'd argue that we're already past that point, and it'll only get worse - concurrency and parallelism are general challenges that will need to be addressed in any language claiming to be general purpose. C# is certainly making strides here, and I've already said that this is good stuff - certainly better than doing nothing. I just think there should be a "fully backed" MS alternative for those who are ready to accept the challenges of 5-10 years from now, today.
  • Expert to Expert - Joe Duffy: Perspectives on Concurrent Programming and Parallelism

    Can you give an example of something it's not great at?

    Haskell is a general purpose language too, and you could argue it's more general purpose than C# (especially since C# can't easily be compiled to native code - at least not with currently released commercial products; I'm aware of Bartok). Can you generate x86 assembly on the fly into a buffer, then jump into it and start executing, in C#? Well, in Haskell you can, and it's a breeze (see the Harpy library). Can you use C# to write hardware? How about reactive animation? How about financial modelling? Or parser combinators (see the sketch at the end of this comment)? Or automatically differentiating numerical functions? And even if you manage to answer "yes" to any of those (which you can), please do compare the amount of work required to get it running, and how well the abstractions hold up.

    Don't just say that Haskell isn't general purpose (while C# is) unless you can back it up.

    Also, you're very close to the Turing tar-pit of just saying "well, all languages are equivalent, so it doesn't matter which one we use" - but by that same reasoning we should all use assembly.
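
    The parser-combinator sketch promised above: a minimal example assuming the parsec package (the "key=value" grammar and the name "pair" are just mine) - note how it reads like a grammar, yet produces statically typed results:

        import Text.Parsec
        import Text.Parsec.String (Parser)

        -- A parser for "key=value" built from small combinators.
        pair :: Parser (String, String)
        pair = do
          key <- many1 letter
          _   <- char '='
          val <- many1 (noneOf "\n")
          return (key, val)

        -- parse pair "" "colour=red"  ==>  Right ("colour","red")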
  • Expert to Expert - Joe Duffy: Perspectives on Concurrent Programming and Parallelism

    I agree that Haskell isn't a panacea. I never said it was. The main thing I would want to keep from it is purity, with some way of abstracting over effects (and ideally being able to write your own, for EDSLs). This is precisely why I think it would be useful for someone with deep pockets to take a stab at creating a new purely functional programming language, knowing what we now know. Haskell is about 20 years old, so it has certainly accumulated some warts, like any language of that age - I think a benevolent dictator is needed to take the main lessons from Haskell and lift them over onto a clean slate, possibly with provisions to make it less scary for newbies (e.g. C-style syntax).

    Haskell 98 is fairly outdated and almost never what anyone means when they say Haskell (a new standard is underway). At minimum you need to include Concurrent Haskell to get what people these days are actually using, but looking at things like STM and NDP you really see just how far ahead it is.
    So taking that into account, I'm not sure I understand your issue with "one top-level I/O loop". What's wrong with having N top-level IO loops (forkIO) communicating with messages? Each of those could then have lots of fine-grained task-based concurrency (using Control.Parallel.Strategies), and even nested data parallelism for still finer-grained parallelism. This pure code could even use (provably) local mutable state via the ST monad. I don't see how Erlang or Occam offers anything you can't do in Haskell (though to be more Erlang-like you may want to provide a constrained monad that only exposes certain IO operations, like forking - which is trivial to do).
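
    A minimal sketch of the N-loops idea, using nothing beyond Control.Concurrent from base (the worker count and messages are arbitrary) - several forked IO loops sending messages over a Chan:

        import Control.Concurrent
        import Control.Concurrent.Chan
        import Control.Monad (forM_, replicateM)

        main :: IO ()
        main = do
          chan <- newChan
          -- N top-level IO loops, each reporting back over the channel.
          forM_ [1 .. 4 :: Int] $ \i ->
            forkIO (writeChan chan ("hello from worker " ++ show i))
          msgs <- replicateM 4 (readChan chan)
          mapM_ putStrLn msgs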

    EDIT: In fact, this is one area where I think Haskell really shines: the composition of fine-grained and coarse-grained parallelism. You have threads and message passing (or shared state with STM) for the coarse-grained level. Then you have a bunch of pure code executing on top of that (with local mutable state). This pure code can be parallelised in a task-based way using Control.Parallel.Strategies (e.g. PLINQ-style, or Cilk-style, but safe because it's all pure). Then, for incredibly fine-grained parallelism, there's now work going on with nested data parallelism. This gives massive scalability (GPU-like) while still being flexible enough (nested! the compiler does the magic of flattening it for you) to offer a decent programming model. And then in the future, when Haskell takes over the world, you'd have graph-reduction hardware (see the Reduceron) and just run Haskell on that for massive instruction-level parallelism. So Haskell (or H#) could truly offer scalable parallelism at every granularity.
    So I agree that getting parallelism at all levels is crucial, but I don't see how Haskell fails in any way in this area - in fact I think it excels.
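
    And a sketch of the task-based layer, assuming the current Control.Parallel.Strategies API from the parallel package (the function is mine) - safe to parallelise precisely because the map is pure:

        import Control.Parallel.Strategies

        -- Evaluate the elements of a pure map in parallel; no effects,
        -- so the result is deterministic regardless of scheduling.
        parSquares :: [Int] -> [Int]
        parSquares xs = map (^ 2) xs `using` parList rseq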
  • Expert to Expert - Joe Duffy: Perspectives on Concurrent Programming and Parallelism

    If you're going to wait until a miracle language appears, you'll have a long wait ahead of you. Meanwhile, we have languages that are sufficiently more sophisticated (than C# et al) in this area that they almost seem miraculous if you squint. Why waste that massive benefit just because we haven't yet found a magical cure for all problems?

    It's not about there being a silver bullet; it's about recognizing fairly uncontroversial facts, such as "ad hoc side effects have no business in a manycore world", and making sure our languages don't violate them.
    You don't have to have the perfect answer to every single detail, but it's a good start to get the basic substrate right - and I think it's a pretty safe bet that whatever other technology emerges, side effects will need to be tamed, so let's start there.
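
    In a pure language that's all "taming" really amounts to: the effect shows up in the type, so both the reader and the compiler know exactly which code is safe to reorder or parallelise. A minimal sketch (both functions are just illustrations of mine):

        -- No IO in the type: guaranteed effect-free, freely parallelisable.
        double :: Int -> Int
        double x = x * 2

        -- IO in the type: the side effect is visible to caller and compiler.
        loggedDouble :: Int -> IO Int
        loggedDouble x = do
          putStrLn ("doubling " ++ show x)
          return (x * 2)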
  • Expert to Expert - Joe Duffy: Perspectives on Concurrent Programming and Parallelism

    That's still just a marketing issue. If your boss doesn't want you to learn "H#", then clearly more marketing is needed until he sends you off to a Microsoft H# seminar.

    Yes, newbies are free to use Haskell, but as I tried to explain earlier, switching to purely functional programming is a fairly big shift, and I think the chances of it happening are much greater if a big company like MS pushes it. If the shift doesn't happen, the costs involved will be very high - for everyone, not just Microsoft.

    I do use Haskell over C#, and I do think that Haskell is a much more practical and pragmatic language for concurrency than anything .Net has to offer, including F#, which is still not pure (again, you can't be 90% virgin - you either are or you aren't). I just think widespread adoption of this (which will benefit us all) has to be pushed by a big company, and MS could be it. If not, well, I guess I'll have to accept that .Net isn't especially suitable for concurrency and avoid it for those scenarios. Unfortunately, those scenarios will only increase in frequency in the future.
  • Expert to Expert - Joe Duffy: Perspectives on Concurrent Programming and Parallelism

    I still think you're presenting a false dichotomy. It's not "beautiful" vs "getting the job done"; it's about the best way to get the job done in 5-10 years, when we'll have hundreds of cores. The only thing stopping the average Joe programmer from spending the effort to learn something radically different is marketing.

    You keep implying that purity is somehow at odds with pragmatism and getting the job done, and there's just no evidence of that. It's at odds with legacy code, but that's it. MS successfully got people to use .Net rather than their legacy systems, so it's clearly possible to get over that hump. Yes, it'll be easier to sell an extension to C#, but at what point do you step back and say "hang on, C# is now more complicated than Haskell for a newbie - maybe we should have a language that does this from the ground up rather than tack things onto an existing substrate that doesn't quite fit"?
    C# with purity annotations is definitely better than nothing at all (because, as I said, we need this to be pushed by a big entity), but having the option of another language (C# would still exist, clearly!) that's pure from the start would be even better.
  • Expert to Expert - Joe Duffy: Perspectives on Concurrent Programming and Parallelism

    But by "practical" you're just talking about marketing, not any technical issues. The issue isn't about using the actual language, that's clearly doable an perfectly practical, the issue is convincing people that they should make the investment to learn something new. That might be difficult, and costly. I guess my point is that making something that's only "90% pure" (which is really a nonsensical concept, like "90% virgin", Smiley) may be less costly up front (because you can extend C# into something that's more messy and complicated in an absolute sense, but offers a smaller relative learning curve because people already know what we have now), but those "10% problems" will add up over the coming decades to dwarf the one-time cost of a paradigm shift.

    So I think it's a false dichotomy that we're choosing between "perfect but unusable" and "imperfect but useful".

    A small company wouldn't have an option; they would have to go for an evolutionary approach. But a company like Microsoft does have the option of a wholesale paradigm shift. They'd have seminars, and books, and Visual Studio integration, and maybe even hardware partners producing Reduceron-style co-processors for graph reduction. MS can throw their weight and marketing behind it to make it happen; very few other companies could.
  • Expert to Expert - Joe Duffy: Perspectives on Concurrent Programming and Parallelism

    One of the best C9 episodes yet, IMO. Get two people in a room and let them discuss stuff without fear of getting too technical.

    I'm with Eric on this one, personally. My main beef with Duffy's side of the debate (which essentially amounts to "Yes, we agree in principle, but we don't want to force everyone to learn something completely new, so we're going to try to tack it onto what we already have") is the following:

    1. I think the cost of not having a "robust" solution to this issue is underestimated. IMO the cost of forcing every man, woman and child with a compiler to learn a Haskell-like purely functional language is peanuts compared to the cost of letting them loose in a manycore world without adequate tools. The perceived cost may be different, but that's what marketing is for.

    2. The complexity of adding pure "bubbles" to an impure language quickly mounts to the point where the overall system is FAR more complicated than Haskell. Think of all the different "kinds" of effect annotations you would need for a pure subset (transitive const, immutable, locally mutable but not externally visible, etc.). There are many ways of being impure, but only one way of being pure. Pure-by-default makes life easier for both compiler writers and users.

    3. It's tempting to be "nice" to existing .Net developers by not forcing them to learn something new, but I wonder if you're really helping them in the long run. Think of Hoare's null references - they were added on a whim because it was easy, and now he calls them his Billion Dollar Mistake. Sometimes doing the hard thing is what ends up making life easier for people. Related to 1. above, but the main point is: don't be nice for the sake of being nice.

    4. Very few entities could pull off kick-starting a paradigm shift. If not Microsoft, then who? Nobody, is my guess.


    That said, the disagreement is far smaller than it may appear. Pretty much everyone agrees that in a perfect world we'd all just switch to a Haskell-like language; the question is whether the real-world cost of doing that outweighs the long-term cost of not doing it. Some say no (me included), some say yes.

    As a suggestion for future interviews, I'd recommend heading back to the UK and checking in with SPJ et al; they're working on some really cool parallelism stuff in Haskell (specifically nested data parallelism).

    Oh, and about SPJ's graph that you had on C9 a while back: I don't actually think he said Haskell was useless. He said Haskell started out useless (circa 1990) and then progressed to become more and more useful (while remaining safe), whereas other languages started out useful but unsafe and are now adding features to become safer. So the idea is that both prongs of attack are nearing the same Nirvana, but with different strategies.