sylvan

Comments

  • Axum Published! Tutorial: Building your first Axum application

    This is pretty cool, but I think the semantics are overly complicated. I can't say I know of a better way of doing it offhand, but I feel that there *must* be some way of making this simpler. As it stands, writing agents still seems quite painful and clumsy - something you would avoid doing up front and instead add as an afterthought once you realise you need it. I think it's critical that writing agents be as "lightweight" as possible, so that people write *all* their code using agents - not because they necessarily believe they need them, but because they're the most convenient way of getting stuff done, even when running on a single-threaded machine.

    For example, there seem to be two main ways of interacting with an agent: either by just passing messages and reading from the channels, or by using request-reply ports if you want to be able to send off multiple requests and then get the responses back while keeping track of which response belongs to which request. It seems to me that this duplication is unnecessary. If you want to send multiple requests, couldn't you just be required to use multiple agents, one for each "transaction" (associating a result with a given request is then trivial - see the sketch below)? If they need to share state you could use a domain, right? I've only briefly looked at it, but it does seem that the request-reply ports just complicate things and aren't actually necessary.
    Also, I think first-class tuples will be very important for this, as you tend to want to make quick ad-hoc groupings of data all the time when sending and receiving messages.
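
    Just to sketch what I mean by one agent per "transaction", here's the idea in rough Haskell terms (not Axum syntax - request and fetchPrice are names I just made up for illustration):

    import Control.Concurrent
    import Control.Concurrent.MVar

    -- One "agent" per request: each request gets its own reply slot,
    -- so associating a response with its request is automatic.
    request :: (a -> IO b) -> a -> IO (MVar b)
    request handler x = do
        reply <- newEmptyMVar
        _ <- forkIO (handler x >>= putMVar reply)
        return reply

    -- r1 <- request fetchPrice "MSFT"
    -- r2 <- request fetchPrice "GOOG"
    -- p1 <- takeMVar r1   -- no correlation ids needed
    -- p2 <- takeMVar r2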

    The semantics and syntax of this need to be simplified a lot to make it easier to use; it still seems that you spend far too much time and screen real estate dealing with the details of coordination rather than with your algorithm.
  • Erik Meijer and Matthew Podwysocki - Perspectives on Functional Programming

    There is a very important difference, and it's that reference types don't prevent you from calling a method on something that's null. 
    In Haskell, if you try to pass a Maybe String to a function that accepts a String, you'll get a type error. In C# every reference type can be null, which means that a function that takes a string may have to make do with a null instead. There's no way to declare that a method does not accept null references (and get static checking for that), because there is no notion of "non-nullable references" in C#.
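
    To make that concrete (plain Haskell; greet is just a made-up example):

    greet :: String -> String            -- takes a String; "null" isn't a possible value
    greet name = "Hello, " ++ name

    -- Passing a Maybe String to greet directly is a compile-time type error;
    -- you're forced to unpack it into the two cases first.
    greetMaybe :: Maybe String -> String
    greetMaybe (Just name) = greet name
    greetMaybe Nothing     = "Hello, whoever you are"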

    It seems that a lot of people, including Tony Hoare (who invented the null pointer) and, IIRC, members of the C# design team, now agree that "nullable by default" was a mistake, and that regular references should not be allowed to take a null value. Instead you'd have to explicitly annotate a type with a ? (or something) to indicate that this reference may be null, and there would be some syntactic construct for "unpacking" a T? into two branches: one which gets passed a T, and one which has to deal with what to do if the T? was null.
    Too bad that changing this would probably be way too cumbersome considering existing code (e.g. the entire framework!), but there's definitely value in being able to statically eliminate any chance of null pointer exceptions by just splitting "reference" and "nullable" into two concepts rather than conflating them into one.
  • Expert to Expert: Erik Meijer and Anders Hejlsberg - The Future of C#

    So I was going to play with the .NET 4.0 CTP and went through the huge hassle of downloading 14 separate files etc. etc., but when I actually load it up in Virtual PC, the Windows Server installation on it asks me to activate! Is there some time restriction on the CTP, or do I need to manually type in some key (I've looked, couldn't find one)?

  • Expert to Expert: Inside Concurrent Basic (CB)

    Don't worry, it's not a problem on your end. I've had some codec issues here lately affecting all sorts of videos.
  • Expert to Expert: Inside Concurrent Basic (CB)

    I don't understand why the "handler" has to be a separate function. Seems highly redundant. I mean, look at the example, "CasTakeAndPut ... Take, Put". Why do we need a named function here? When else would it be called? Why not just do what Polyphonic C# did?

    Actually I don't mind separating the handler (Polyphonic C# did get pretty long function headers because you specified them all "in line" with the handler statement), but I don't see why it needs to be named. Also, it seems a bit weird how the inputs for the various channels get mapped to the parameter list of the handler - if you have ten of them taking different numbers of arguments (including zero) of different types, then what does the parameter list for your handler look like, and how long does it take you to get that right on average? Why not something like:

    Asynchronous Put( ByVal s As String)
    Synchronous Take() As String

    When Put( ByVal s As String ), Take() As String
       Return s
    End When

    This removes the redundant function name (and Function keyword), as well as making it obvious where each parameter comes from. In fact, we could possibly omit the type in the "When" clause, since it's already declared, and just say "Put(s)". So you'd basically specify all your channels up front first, giving the types and any other modifiers (like access), and then just a short and sweet "When" clause at the end. Seems pretty clean to me:

    Asynchronous Put( ByVal s As String)
    Synchronous Take() As String

    When Take, Put(s)
       Return s
    End When

    I think this idea is very promising, but I do think the syntax here is unnecessarily clumsy... unless someone can explain why we need all that extra stuff? Caveat: the video broke about halfway through for me, so maybe there's a really good motivation for this later on?
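
    (For what it's worth, if I've understood the Put/Take example right, it's just an unbounded buffer - and that behaviour needs almost no machinery in plain Haskell. This isn't join patterns, just a comparison of how little the semantics really requires:)

    import Control.Concurrent.Chan

    -- Put is asynchronous (writeChan never blocks);
    -- Take is synchronous (readChan blocks until a value is available).
    newBuffer :: IO (String -> IO (), IO String)
    newBuffer = do
        ch <- newChan
        return (writeChan ch, readChan ch)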
  • Expert to Expert - Joe Duffy: Perspectives on Concurrent Programming and Parallelism

    Could you explain specifically what about that snippet makes it hard to "think" in? You're just saying, kind of vaguely, that you can't find yourself "thinking" in it, but I don't understand what specifically is posing a barrier to you.
    The way I see it, that snippet is, barring syntax, pretty much identical to what you would write in any imperative language to do GUIs. I could accept that you find it harder to use a certain style of programming, e.g. functional vs imperative, but you chose an entirely imperative snippet of Haskell! I don't even see the difference between this and C# other than the extremely superficial (different syntax).

    You create objects imperatively.
    You set properties (e.g. text) and event handlers (e.g. on command) on those objects.

    How is this a different way to think about things than C#? I just don't understand what specifically you have an issue with.
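
    For reference, the snippet looks to me like the wxHaskell style, which (from memory, so treat the exact names as approximate) goes roughly like this:

    import Graphics.UI.WX

    main :: IO ()
    main = start $ do
        f <- frame [text := "Hello"]                -- create a window, set a property
        b <- button f [ text := "Quit"
                      , on command := close f ]     -- attach an event handler
        return ()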
  • Expert to Expert - Joe Duffy: Perspectives on Concurrent Programming and Parallelism

    Uh, I don't see what's so bad about that? This is pretty much what you'd write in any language. Is it just that the syntax is unfamiliar? That's a fairly shallow reason to dismiss something forever.

    I really don't see how that sample is in any way more complicated than the equivalent C# code. You create a bunch of widgets and set event handlers (specified inline here, but they could obviously be named and declared separately if they were larger). What's the problem? Surely this is very familiar if you've used any imperative GUI toolkit?
    Honestly I thought you were being sarcastic at first, poking fun at Charles by taking something that's very similar to standard imperative programming to illustrate how simple it is, but the rest of the post indicates you're not.

    And yes, Haskell is statically typed (that's sort of the point), so auto-completion works fine. There is a mode for Haskell in Visual Studio, but sadly it's not maintained anymore...
  • Expert to Expert - Joe Duffy: Perspectives on Concurrent Programming and Parallelism

    I think we should give people more credit. Most people probably didn't understand basic algebra before they were taught it either. I really don't think Haskell is that big of a deal compared to something like OOP with all the nuances it has. It's just that it doesn't share too much with OOP so it's not as easy to learn as "yet another OOP language", but if you think in terms of the overall effort to go from zero to OOP or zero to Haskell, I'd probably expect Haskell to win.
    I was a TA at university teaching Haskell, and people seemed to have a much harder time picking up Java than Haskell if they didn't already know any programming at all (and initially, people who hadn't programmed at all did better than people who had - they just had less baggage - but after a few weeks the experienced students did start to find ways of reusing existing knowledge).

    It's a challenge yes, but like I said earlier it's mostly marketing. You need to convince people to make the effort, and help them once they're convinced. But I don't really see it as that big of a problem.
  • Expert to Expert - Joe Duffy: Perspectives on Concurrent Programming and Parallelism

    Okay, I guess my recollection confused that with the statement later where you said that it "doesn't scale" (49:40). Mea culpa. Anyway, that's unsupported too. At the end of the day you could always choose to write 100% of your code in the IO monad, at which point you're no worse (or better) off than in an "effects-everywhere" language like C#.
  • Expert to Expert - Joe Duffy: Perspectives on Concurrent Programming and Parallelism

    Pretty sure you said explicitly that it wasn't general purpose in the video, at which point Erik protested wildly.

    Haskell was designed from the start to be general purpose; it even says so in the original documents from the early meetings. It was never intended to be some sort of domain-specific language. Indeed, people are doing everything from designing hardware to writing operating systems, file systems, 3D shooters, and web applications in it.

    One thing that should be pointed out here is that the need for multiple languages is reduced if your main language is good at supporting EDSLs (Embedded Domain-Specific Languages). Haskell (and e.g. F#) does this via monads, other languages do it via e.g. macros, and it's very common to deal with half a dozen of them in any given app (in fact, doing IO itself can be seen as an EDSL, but things like STM, parsers, non-determinism, dealing with XML, etc. are common too). That doesn't mean you will never need another language, just that it's less common. For example, Haskell people still generally use SQL, even though there are database EDSLs in Haskell (similar to LINQ-to-SQL).
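
    STM is probably the clearest example of what I mean - inside a transaction you're effectively writing in a small embedded language (this uses the standard Control.Concurrent.STM API; the account example is just illustrative):

    import Control.Concurrent.STM

    -- Transfer between two accounts atomically; 'check' retries the whole
    -- transaction until there are sufficient funds.
    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to amount = do
        balance <- readTVar from
        check (balance >= amount)
        writeTVar from (balance - amount)
        toBalance <- readTVar to
        writeTVar to (toBalance + amount)

    -- Run it with: atomically (transfer accountA accountB 100)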
  • Expert to Expert - Joe Duffy: Perspectives on Concurrent Programming and Parallelism

    I didn't ask Erik or Joe, because they didn't make the claims you did.

    Re: the compilation issue, I was talking about the whole compilation, not just the compilation to IL - C# is jitted, so you can't compile it offline and statically link it to any random .o file you have. Bartok could do that, but in general that's a no-go so far. This makes Haskell a lot easier to use in a lot of scenarios - for example, embedded software, or really just general applications (no need to distribute the .NET framework).

    "Writing hardware" doesn't mean "writing to hardware", it means writing actual hardware - i.e. desigining hardware circuits. See Lava for one Haskell library doing this, or VHDL and Verilog for domain specific languages for it.


    Haskell has Data.Dynamic, which allows you to deal with dynamic data similar to how C# 4.0 does: it gives you a static type of "Dynamic" for values which have no known static type. It doesn't have any syntactic sugar for it like C# 4.0 does, but it doesn't need it to the same extent - see Parsec for an example of parser combinators in Haskell, where the parser looks like you're just reading dynamic data and spitting out statically typed counterparts. So the ability to write EDSLs really removes a lot of the need to deal with dynamic types at runtime. This doesn't work in every single case, clearly, but the point is that EDSLs can be used to deal with dynamic data in a way where the details of actually looking things up are hidden (sort of like how C# 4.0 has that interface you implement for method lookup).
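
    A quick sketch of Data.Dynamic, since it keeps coming up (standard Data.Dynamic API):

    import Data.Dynamic

    -- A heterogeneous list: every element has the static type Dynamic.
    values :: [Dynamic]
    values = [toDyn (42 :: Int), toDyn "hello", toDyn True]

    -- fromDynamic returns Nothing on a type mismatch, so recovering the
    -- Ints from the list is completely safe.
    ints :: [Int]
    ints = [x | Just x <- map fromDynamic values]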

    Strictly speaking Haskell can do anything C can do, as it's natively compiled (and even has a C backend on most compilers). You could definitely get it working on the Xbox 360, for example (I've been meaning to try that, as I have an Xbox 360 devkit at work). It's not supported (i.e. there's no ready solution supplied by MS), but that's precisely what I'm arguing for, so it's not really relevant - the point is that none of the things you mention pose any real problems for the language. On the flip side, writing an elegant EDSL in C# is practically impossible (the situation is not the same for F#, though, which has monads now).
    The point isn't to compare libraries (though you could use them as an example of something). I'm arguing precisely that MS needs to have a pure language with similar support, including libraries. The point is to compare how well the language itself does with a specific problem.
    As long as you have a C API to something (e.g. multi-touch API), getting it working in Haskell is probably easier than in C# (the foreign function interface is a lot nicer in Haskell IMO).
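
    To give a taste of why I say the FFI is nicer: binding a C function is a single declaration (standard FFI syntax, using sin from libm as the example):

    {-# LANGUAGE ForeignFunctionInterface #-}
    import Foreign.C.Types

    -- Import C's sin from math.h; it can then be called like any Haskell function.
    foreign import ccall "math.h sin"
        c_sin :: CDouble -> CDouble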

    Oh and being pure doesn't restrict what optimizations you can make, because you can choose to have local mutable state, or just bail out and do stuff in the IO monad, if you really need those optimizations. You're just forced to specify up front where this happens.
    Not being pure does restrict optimizations though, since many optimizations fail in the presence of mutable state. For example, merely writing through a pointer can cause a massive hit if the compiler isn't able to statically prove that none of the other pointers in scope alias the memory you just wrote to - and if it's not able to do that, which in general it won't be, then it needs to reload any data read from those pointers, since the data held in registers may be stale. This is a simple example; there are lots of others. Take a look at the fusion/flattening transformations in DPH, for example: they're absolutely crucial to make nested data parallel computations tractable, and they totally rely on the code being pure.
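
    This is what the "local mutable state" escape hatch looks like in practice (standard ST monad; the sum is just a toy example):

    import Control.Monad.ST
    import Data.STRef

    -- The mutation happens inside runST, but it can't leak out: from the
    -- outside, sumST is an ordinary pure function.
    sumST :: [Int] -> Int
    sumST xs = runST (do
        ref <- newSTRef 0
        mapM_ (\x -> modifySTRef ref (+ x)) xs
        readSTRef ref)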


    I've never said Haskell is perfect, I'm saying it's better than C# in a lot of ways, and one important way is concurrency and parallelism. This discussion is being sidetracked from that, though, because I just feel it's important to refute some incorrect statements made by you and Charles claiming, on no evidence, that Haskell somehow isn't general purpose. It is.
    If I thought Haskell was perfect I wouldn't be advocating that someone create a competitor to it, now would I? I'm precisely saying that someone with pull needs to take the good bits from Haskell, and all the lessons learnt, and produce a new purely functional language, and then sell it like no language has been sold before. I can give you a list of things I feel need to be looked at, if you're really interested (could you?).
    I've never claimed that Haskell (or any language) solves every problem, so please don't put words in my mouth. And drop the ad hominems too. Accusing me of being "religious" when you're the one arguing against a language you don't even know is a bit much.

    Bringing up popularity is not very convincing, IMO. Popularity is a function of a whole bunch of other things, language merit is a tiny part of it. It's mostly historical accidents (people don't use C because they think language research hasn't improved in the last 40 years - there are other factors that dominate).


    Charles, concurrency and parallelism may have been domain-specific challenges five years ago. I'd argue that we're already past that point, and it'll only get worse - concurrency and parallelism are general challenges that will need to be addressed in any language claiming to be general purpose. C# is certainly making strides towards that, and I've already said that this is good stuff - certainly better than not doing anything - I just think there should be a "fully backed" MS alternative for those who are ready to take on, today, the challenges of 5-10 years from now.
  • Expert to Expert - Joe Duffy: Perspectives on Concurrent Programming and Parallelism

    Can you give an example of something it's not great at?

    Haskell is a general purpose language too, and you could argue it's more general purpose than C# (especially since C# can't easily be compiled offline to native code - at least not with currently released commercial products; I'm aware of Bartok). Can you generate x86 assembly on the fly into a buffer and then jump into it and start executing in C#? Well, in Haskell you can, and it's a breeze (see the Harpy library). Can you use C# to write hardware? How about reactive animation? How about financial modelling? Or parser combinators? Or automatically differentiating numerical functions? And even if you manage to answer "yes" to any of those (which you can), please do compare the amount of work required to get it running, and how well the abstractions work.
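
    To make "parser combinators" concrete, here's a toy Parsec parser that turns untyped text straight into a statically typed value (standard Text.Parsec API):

    import Text.Parsec
    import Text.Parsec.String (Parser)

    -- Parse an "x,y" pair of integers into a typed Haskell value.
    pair :: Parser (Int, Int)
    pair = do
        x <- many1 digit
        _ <- char ','
        y <- many1 digit
        return (read x, read y)

    -- parse pair "" "12,34"  ==>  Right (12,34)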

    Don't just say that Haskell isn't general purpose (while C# is) unless you can back it up.

    Also, you're very close to the Turing tarpit of just saying "well, all languages are equivalent, so it doesn't matter which one we use" - but by that same reasoning we should all use assembly.