
Ralf Lämmel

Ralf Lämmel user42

Niner since 2010

  • C9 Lectures: Dr. Ralf Lämmel - Going Bananas

    Yes, I think this lecture is packed. I will take your advice on interim breathing space :) for the next lecture, which is underway. For the time being, I can only think of the pause button as a workaround.

  • C9 Lectures: Dr. Ralf Lämmel - The Quick Essence of Functional Programming

    @Todd: thank you! It's a pleasure to target the c9 audience with this lecture series.

    Monads are not at all complicated. In my experience, two things can go wrong when learning about monads: a) the explanation is too difficult, e.g., due to unnecessary references to category theory; b) the explanation is too low-level.

    Wadler's paper, which I sort of executed (in part), motivates monads from a program-development point of view (in the sense of "suppose you have this sort of program, but now you need to accommodate this additional requirement; OK, and now try to generalize the solution through a parameterization technique").

    I am a big fan, generally, of using transformations as a tool for understanding programming techniques. There is the monadification transformation; it's not just good for explanation; it's actually useful in functional programming as a kind of refactoring.
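
    To make this concrete, here is a minimal sketch of monadification as a refactoring (my example, not code from the lectures): a direct-style evaluator is turned into a monadic one that behaves identically at the Identity monad but is now open to effects.

    import Control.Monad.Identity

    data Term = Lit Int | Add Term Term

    -- Direct style.
    eval :: Term -> Int
    eval (Lit n)   = n
    eval (Add l r) = eval l + eval r

    -- After monadification: the same program, now parameterized in a monad.
    -- At Identity it behaves exactly like eval; other monads add effects
    -- (Maybe for partiality, State for counting steps, and so on).
    evalM :: Monad m => Term -> m Int
    evalM (Lit n)   = return n
    evalM (Add l r) = do
      x <- evalM l
      y <- evalM r
      return (x + y)

    main :: IO ()
    main = print (eval t, runIdentity (evalM t))  -- (3,3)
      where t = Add (Lit 1) (Lit 2)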

    M. Erwig has written a wonderful paper about "monadification":

    http://web.engr.oregonstate.edu/~erwig/papers/abstracts.html#SCP04

    I also did a short paper a long time ago with a bit about monadification:

    http://homepages.cwi.nl/~ralf/sfp99.pdf

    Wadler also alluded to different forms of reuse that are present in the scenarios for the different monads. I gave explicit names to those forms: data extension and selective code replacement. This also shows that even the monad topic has a strong link to the expression problem.

    Well, while we are at it: it is intriguing to notice that monad (transformer) composition actually gives you another dimension of program extension---in addition to data and operation extensions. One may think of this additional extension form as having to do with aspects.

    De Meuter has written a thought-provoking (and potentially very confusing) paper some time ago on the relationship between monads and AOP:

    http://academic.research.microsoft.com/Paper/150465.aspx

    I intended to clarify this relationship, but, in fact, the following paper is potentially even more confusing:

    http://homepages.cwi.nl/~ralf/foal03/

    Ralf

  • C9 Lectures: Dr. Ralf Lämmel - The Quick Essence of Functional Programming

    @exoteric: the coverage of monad transformers was pretty minimal. Perhaps I should do a session soon where I solve some of the previous exercises/riddles. The one with the compound monad would be a nice exercise, indeed. One could look into the internals of the leveraged monad transformers and the result of composition, and thereby see that the composition gives the same monad as the monolithic one that one would define from scratch.
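
    As a sketch of that exercise (my code, not the lecture's, assuming the mtl-style Control.Monad.State): unfolding the newtypes shows that the compound monad StateT s Maybe has the representation s -> Maybe (a, s), i.e., exactly the shape of a monolithic state-with-failure monad defined from scratch.

    import Control.Monad.State

    -- The compound monad: StateT s Maybe a  ~  s -> Maybe (a, s).
    type Compound s a = StateT s Maybe a

    -- A monolithic monad with the same representation, built from scratch.
    newtype Mono s a = Mono { runMono :: s -> Maybe (a, s) }

    instance Functor (Mono s) where
      fmap f (Mono g) = Mono $ \s -> fmap (\(a, s') -> (f a, s')) (g s)

    instance Applicative (Mono s) where
      pure a = Mono $ \s -> Just (a, s)
      Mono f <*> Mono g = Mono $ \s -> do
        (h, s')  <- f s
        (a, s'') <- g s'
        Just (h a, s'')

    instance Monad (Mono s) where
      Mono g >>= k = Mono $ \s -> do
        (a, s') <- g s
        runMono (k a) s'

    main :: IO ()
    main = do
      -- Both runners have the type s -> Maybe (a, s).
      print (runStateT (modify (+ 1) >> get :: Compound Int Int) 0)
      print (runMono   (Mono (\s -> Just (s + 1, s + 1)))         0)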

     

  • C9 Lectures: Dr. Ralf Lämmel - The Quick Essence of Functional Programming

    c9 is still in a bit of migration pain, I guess :) There are known issues with tagging and hit counts, and yes, the tweeting and Facebook stuff doesn't work for me either.

  • C9 Lectures: Dr. Ralf Lämmel - The Quick Essence of Functional Programming

    Thank you, Charles, for getting this mission accomplished.

    I just did a "monad" search on channel9, and this seems to be the best place for finding monad lectures. :) It is just truly amazing how much people care about monads, and what sort of miracles you can do with them.

    That is the reason why I opted for a kind of history lecture---to show, more or less, how Haskell's monads came about. So this is not really interesting for people who already have a few monads in their arsenal, but it should be interesting for people who would like a simple intro to monads based on classic and seminal material.

    If anyone would like to work towards matching the Haskell code in the repository with F#, Scala code or other functional code, please go ahead, just subscribe to the open-source project, and/or get in touch with me.

  • C9 Lectures: Dr. Ralf Lämmel - Advanced Functional Programming - Evolution of an Interpreter

    One might say that your revised approach effectively does operate on Int (incl. negative numbers), in the sense of using this encoding of Int (even though you switch between alternatives through functions rather than through constructors):

     

    data PosNat = One | Succ PosNat

    data Int = Zero | PosInt PosNat | NegInt PosNat

     

    ... and yes, you could be using the reader (environment) monad for passing around the data, except that all your equations are non-oblivious to the extra argument (save perhaps the interpretPos case for Zero). In such a setting, the reader monad gives you little leverage.
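
    To illustrate with a hypothetical example (not the code under discussion): when every equation inspects the extra argument, the reader monad merely trades the explicit parameter for ask/local in each clause, and the code does not get any shorter.

    import Control.Monad.Reader

    data Tree = Leaf Int | Fork Tree Tree

    -- Explicit passing: every equation mentions the depth bound d.
    depthOk :: Tree -> Int -> Bool
    depthOk (Leaf _)   d = d >= 0
    depthOk (Fork l r) d = d >= 0 && depthOk l (d - 1) && depthOk r (d - 1)

    -- Reader-monad version: the parameter is implicit, but since every
    -- equation is non-oblivious to it, ask/local show up everywhere.
    depthOkR :: Tree -> Reader Int Bool
    depthOkR (Leaf _)   = (>= 0) <$> ask
    depthOkR (Fork l r) = do
      d  <- ask
      bl <- local (subtract 1) (depthOkR l)
      br <- local (subtract 1) (depthOkR r)
      return (d >= 0 && bl && br)

    main :: IO ()
    main = print ( depthOk (Fork (Leaf 1) (Leaf 2)) 1
                 , runReader (depthOkR (Fork (Leaf 1) (Leaf 2))) 1 )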

     

  • C9 Lectures: Dr. Ralf Lämmel - Advanced Functional Programming - Type Classes

    Yes, Common Lisp is way more powerful (and a tiny bit less typed) :)

    But seriously, I always argue that type classes are nothing extremely impressive for a dynamically typed programmer, certainly not for one who knows about and loves CLOS, for example.

    Type classes in Haskell just bring some bits of similar expressiveness to a strongly typed setting.
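
    For readers coming from Lisp, a minimal sketch (my example) of the kind of expressiveness meant here: a type class plays a role similar to a CLOS generic function, with instances as the per-type methods, but resolved under static typing.

    -- A type class declares an overloaded operation; instances provide
    -- per-type implementations, selected statically by the type checker.
    class Pretty a where
      pretty :: a -> String

    instance Pretty Bool where
      pretty True  = "yes"
      pretty False = "no"

    instance Pretty a => Pretty [a] where
      pretty = unwords . map pretty

    main :: IO ()
    main = putStrLn (pretty [True, False])  -- "yes no"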

  • C9 Lectures: Dr. Ralf Lämmel - Advanced Functional Programming - Evolution of an Interpreter

    Wonderful proposal!

     

    There is nothing essentially monadic about this code.

    In fact, I don't know how to usefully classify this approach, but it is very systematic, of course.

    It smells a bit like CPS because of the way you pass around the result of the constructors interpreted so far.
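
    To make the CPS comparison concrete, here is a hypothetical sketch (not your code) of an interpreter in continuation-passing style, where the result computed so far is handed to a continuation rather than returned:

    data Term = Zero | Succ Term | Pred Term

    -- Each equation passes the intermediate result to the continuation k.
    interpretK :: Term -> (Int -> r) -> r
    interpretK Zero     k = k 0
    interpretK (Succ x) k = interpretK x (\n -> k (n + 1))
    interpretK (Pred x) k = interpretK x (\n -> k (n - 1))

    main :: IO ()
    main = interpretK (Succ (Succ Zero)) print  -- 2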

     

    At a general level, I think what you are proposing is to try to remain defined in cases where the partiality of a surface domain (such as Nat) can be accommodated by a more liberal internal domain (such as Int). I can see that this overall idea of being more restricted at the interface level than in the internal design of a system with intermediate results of computation makes a lot of sense.

     

    This is an unusual technique in interpretation :)

     

    Would you say that your code is sufficiently equivalent (in intention and behavior) to the following, perhaps more direct, approach?

     

    data Term = Zero | Succ Term | Pred Term

    type Nat = Int -- We won't use negative numbers.

    interpret :: Term -> Maybe Nat
    interpret x = case interpret' x of
      n | n >= 0    -> Just n
        | otherwise -> Nothing
      where
        interpret' :: Term -> Int
        interpret' Zero     = 0
        interpret' (Succ x) = interpret' x + 1
        interpret' (Pred x) = interpret' x - 1

    main = print $ interpret $ Succ $ Succ $ Succ $ Pred $ Pred $ Zero

     

    This version makes interpretation operate on Int but it projects to Nat at the very end.

     

    Cool, this got me thinking!

     

  • C9 Lectures: Dr. Ralf Lämmel - Advanced Functional Programming - Evolution of an Interpreter

    re: "semantic domain change from Value = Nat to Value = Maybe Nat"

     

    We had first:

     

    interpret :: Term -> Value where Value = Nat

     

    Then we had (option 1):

     

    interpret :: Term -> Maybe Value where Value = Nat

     

    As you mention, we could also have done instead (option 2):

     

    interpret :: Term -> Value where Value = Maybe Nat

     

    The reason for favoring option 1 on my side is that I view partiality (i.e., the application of Maybe) really as an "effect" involved in computing values. Here, I use the terms "values vs. computations" in the sense of monadic style already. (Interestingly, Wadler, in his essence paper, makes partiality part of his Value domain, see his constructor "Wrong", but it is only needed as long as he does not replace it by the use of an error monad.) Another reason for my preference is that this style keeps Value accessible to extension (as needed for Boolean values and functions later).
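
    As a hypothetical sketch of that extension argument (my code, not the lecture's): keeping Maybe out of Value leaves the Value domain open for the later additions of Booleans and functions, while partiality stays in the result type of interpret.

    data Term = Zero  -- a stand-in term language, just for the sketch

    -- Value stays a plain sum, open to the later extensions; partiality
    -- remains an effect in the result type of interpret (option 1).
    data Value
      = Num Int                      -- the original numeric values
      | Truth Bool                   -- later extension: Booleans
      | Fun (Value -> Maybe Value)   -- later extension: functions

    interpret :: Term -> Maybe Value
    interpret Zero = Just (Num 0)

    main :: IO ()
    main = case interpret Zero of
      Just (Num n) -> print n
      _            -> putStrLn "something else"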

     

    Now, when I said that no semantic domain changed, I should really have emphasized that no *named* domain changed, but the result type of the meaning-giving function, i.e., the denotation type for terms, has changed. Precision would be gained by assigning an explicit name to the result type as in:

     

    interpret :: Term -> Meaning where Meaning = Maybe Value; Value = Nat

     

    Thanks for catching this.

     

    re: "Admitting Succ(Pred Zero)"

    re: "interpret (Succ (Pred x))  \equiv interpret (Pred (Succ x))"

     

    Not sure. If you can suggest some patched code, perhaps I can say whether I like it or not :)

     

    Do you agree with the following views?

     

    If someone really consumes the result of (Pred Zero), as in Console.WriteLine, then we have to throw.

     

    If someone's code is not strict in the result of (Pred Zero), then we can succeed (in a lazy language).
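
    These two views can be sketched in Haskell (hypothetical code, with error standing in for "throw"):

    import Control.Exception (SomeException, evaluate, try)

    -- Hypothetical stand-in for the interpreter's partial predecessor.
    predNat :: Int -> Int
    predNat 0 = error "Pred Zero"
    predNat n = n - 1

    main :: IO ()
    main = do
      -- A consumer that is not strict in the result: succeeds lazily.
      putStrLn (const "fine" (predNat 0))
      -- A strict consumer (like Console.WriteLine): forcing it throws.
      r <- try (evaluate (predNat 0)) :: IO (Either SomeException Int)
      putStrLn (either (const "throws") show r)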

     

    Thanks for the comments. 

     

    Regards,

    Ralf

     

  • C9 Lectures: Dr. Ralf Lämmel - Advanced Functional Programming - Evolution of an Interpreter

    Thanks! Bart's post is amazing.

     

    I would like to add a few bits.

     

    Bart mentions this Y combinator:

     

    static Func<T, R> Fix<T, R>(Func<Func<T, R>, Func<T, R>> f)
    {
        FuncRec<T, R> fRec = r => t => f(r(r))(t);
        return fRec(fRec);
    }

    delegate Func<T, R> FuncRec<T, R>(FuncRec<T, R> f);

     

    It is not so easy to see why it works. (It works because Bart leverages recursive types, cf. FuncRec, and this is one way of getting a fixed-point combinator; see here.) Anyway, let's try something else. Our Haskell-based Y combinator, as mentioned in my talk, would be rendered in C# as follows:

     

    static Func<T, R> Fix<T, R>(Func<Func<T, R>, Func<T, R>> f)
    {
        return f(Fix(f));
    }

     

    You have to admit that this definition looks much simpler. We do not rely on recursive types; instead, Fix is recursively defined. It is, in fact, the definitional property of a fixed point: the fixed point of a function must be such that if we apply the function to the fixed point, then we get back the fixed point. The reason that it works in Haskell (or a lazy language in general) is that it essentially also captures the computational notion of recursive unfolding. That is, to compute the fixed point of a function, we apply the function, and as we hit its recursive reference we just compute the fixed point, which means that we apply the function, and as we hit its recursive reference, but only then, we compute the fixed point, which means that we apply the function, etc. :)
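
    For reference, here is the Haskell definition alluded to above, with the factorial example redone in Haskell; laziness performs exactly the unfolding just described:

    -- The fixed point of f is f applied to that very fixed point; the
    -- recursive occurrence is only demanded when actually needed.
    fix :: (a -> a) -> a
    fix f = f (fix f)

    factorial :: Int -> Int
    factorial = fix (\fac n -> if n == 0 then 1 else n * fac (n - 1))

    main :: IO ()
    main = print (factorial 5)  -- 120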

     

    Now, the bad news is that C# is not lazy, and hence the above transcription of our really easy-to-understand Y combinator does not work. We get a stack overflow (not just for larger arguments but for any argument). However, there is a simple trick that gives us a computationally useful Y combinator that is still easy to understand: we need to defer recursive evaluation until it is really needed, i.e., we need to make the application of the Y combinator lazy.

     

    Hence, this works:

     

    static Func<T, R> Fix<T, R>(Func<Func<T, R>, Func<T, R>> f)
    {
        return f(t => Fix(f)(t));
    }

    static void Main(string[] args)
    {
        var factorial = Fix<int, int>(fac => n => n == 0 ? 1 : n * fac(n - 1));
        Debug.WriteLine(factorial(5));
    }
