
C9 Lectures: Dr. Ralf Lämmel - Advanced Functional Programming - Evolution of an Interpreter

In part 3 of the Advanced Functional Programming lecture series, Dr. Lämmel focuses on the domain of language interpretation as a method of understanding some important functional programming techniques. As a side effect, some basics of programming language theory are also informally presented.

More specifically, this lecture develops an interpreter for a simple functional programming language that contains Booleans, natural numbers, lambdas, and recursive lets. The interpreter is actually developed in a stepwise manner, which is why the lecture is called "Evolution of an Interpreter."

In each step, another construct is added and the impact of the extension on the interpreter is analyzed. In this manner, several interesting programming techniques are exercised. For instance, the Maybe type constructor is pervasively used for dealing with partiality, and Haskell's fixed point combinator is used to model the semantics (i.e., interpretation) of recursive bindings.
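
To give a flavor of the Maybe technique, here is a minimal sketch of the numeric fragment in Haskell (illustrative code, not the lecture's actual deck):

    -- A tiny term language: natural numbers with a partial predecessor.
    data Term = Zero | Succ Term | Pred Term

    type Nat = Int -- non-negative by convention

    -- Maybe models partiality: Pred Zero has no meaning.
    interpret :: Term -> Maybe Nat
    interpret Zero     = Just 0
    interpret (Succ t) = fmap (+1) (interpret t)
    interpret (Pred t) = case interpret t of
                           Just n | n > 0 -> Just (n - 1)
                           _              -> Nothing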

This lecture also prepares us for some more advanced subjects. For instance, the next lecture in this series will cover the intriguing subject of monads while using interpretation as the application scenario. Soon, generalized folds (or bananas, according to Erik Meijer) will also be discussed (the folds will traverse abstract syntax trees as opposed to lists).

Enjoy. Learn.

Thanks to Ralf for providing another excellent lecture!

Earlier lectures here.

Slides: https://developers.svn.sourceforge.net/svnroot/developers/repository/ralfs-channel9-lectures/decks/interpretation.pdf


Related Blog Post and Code:

http://professor-fish.blogspot.com/2010/08/bunch-of-interpreters-using-cpp-and.html

Follow the Discussion

  • Charles (Welcome Change)

    Love the shirt, Ralf! :)
    C

     

    PS: After this lecture, you should solve the problems/riddles on slides 16, 31, and 38 in the deck. Then, read the "Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire" paper by our very own Erik Meijer and friends.

     

  • I promise a new shirt for each lecture. This provides a natural limit to the number of lectures we could expect.

  • Charles (Welcome Change)

    Excellent!

    C

  • Have you looked at that paper? The notation used makes it almost impenetrable. I appreciate the ideas but I couldn't make sense of it when I was learning Haskell ... Maybe after Erik's category theory lectures it will make sense to me. :)

  • I get what you are saying. I can assure you that my bananas lecture will be very eatable. I consider myself reasonably skilled when it comes to formal notation but have suffered headaches (multiple times) because of Erik's bananas paper. Perhaps it was indeed conceived (I sense) in a very domain-specific geek mode. Also, I should add that the paper did not discuss all the kinds of bananas that we need in practice. For example, large bananas were missing. So I am looking forward to coming clean with bananas. It's two lectures from now. The next one is about monads. I am geeking around with the code right now, and I can't wait to record the lecture.

     

    But hey yes, I also want to encourage Erik to tell us about category theory.

     

  • Bent Rasmussen (exoteric) stuck in a loop, for a while

    Actually, it provides you with great incentive to go shopping! :D

     

    Watching...

  • Charles (Welcome Change)

    Yes. The symbolic expressions are ugly (or beautiful, depending on your perspective...), but the ideas are explained. The mathematics prove points, but the points aren't only mathematical. At any rate, yes, the paper is dense and the title makes it seem less so, perhaps.

     

    An entire lecture dedicated to Monads will be so welcome, Ralf. Thank you!
    C

  • All young grasshoppers, please remember: "wax on, wax off, wax on, wax off, ..."

  • Charles (Welcome Change)

    Thank you, grandfather. :)

    C

  • Parmenio (() => {})

    Another good article in the series. I need to go back and look more closely at the fixed point operator, as I got a bit out of my depth when we got to implementing recursion in the interpreter... but that's the whole point, I guess: if I didn't get out of my depth there would be nothing to learn!

     

    Talking of the fixed point operator/combinator, it reminded me of Bart De Smet's post on implementing the trampoline for safe recursion in C#:

    http://community.bartdesmet.net/blogs/bart/archive/2009/11/08/jumping-the-trampoline-in-c-stack-friendly-recursion.aspx

    where he uses a fixed point combinator. I'm not sure I'd want to implement that in production code as none of my colleagues would understand how it worked! I'd have to bow to simplicity and introduce local mutation and use an explicit stack rather than the function stack. It's a very interesting article though.

  • Thanks! Bart's post is amazing.

     

    I would like to add a few bits.

     

    Bart mentions this Y combinator:

     

    static Func<T, R> Fix<T, R>(Func<Func<T, R>, Func<T, R>> f)
    {
        FuncRec<T, R> fRec = r => t => f(r(r))(t);
        return fRec(fRec);
    }

    delegate Func<T, R> FuncRec<T, R>(FuncRec<T, R> f);

     

    It is not so easy to see why it works. (It works because Bart leverages recursive types, cf. FuncRec, and this is one way of getting a fixed point combinator; see here.) Anyway, let's try something else. Our Haskell-based Y combinator, as mentioned in my talk, would be rendered in C# as follows:

     

    static Func<T, R> Fix<T, R>(Func<Func<T, R>, Func<T, R>> f)
    {
        return f(Fix(f));
    }

     

    You have to admit that this definition looks much simpler. We do not rely on recursive types; instead Fix is recursively defined. It is, in fact, the definitional property of a fixed point. That is, the fixed point of a function must be such that if we apply the function to the fixed point, then we get back the fixed point. The reason that it works in Haskell (or a lazy language in general) is that it essentially also captures the computational notion of recursive unfolding. That is, to compute the fixed point of a function, what we do is, we apply the function, and as we hit its recursive reference we just compute the fixed point, which means that we apply the function, and as we hit its recursive reference, but only then, we compute the fixed point, which means that we apply the function, etc. :)
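
    For reference, the corresponding Haskell is a one-liner (a minimal sketch; the library version ships in Data.Function):

        import Data.Function (fix) -- fix f = f (fix f), as above

        -- Tie the recursive knot: factorial without a recursive definition.
        factorial :: Integer -> Integer
        factorial = fix (\fac n -> if n == 0 then 1 else n * fac (n - 1))

        main :: IO ()
        main = print (factorial 5) -- 120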

     

    Now, the bad news is that C# is not lazy, and hence the above transcription of our really easy to understand Y combinator does not work. We get a stack overflow (not just for large arguments but for any argument). However, there is a simple trick that gives us a computationally useful Y combinator that is still easy to understand: we need to defer recursive evaluation until it is really needed, i.e., we need to make the application of the Y combinator lazy.

     

    Hence, this works:

     

    static Func<T, R> Fix<T, R>(Func<Func<T, R>, Func<T, R>> f)
    {
        return f(t => Fix(f)(t));
    }

    static void Main(string[] args)
    {
        var factorial = Fix<int, int>(fac => n => n == 0 ? 1 : n * fac(n - 1));
        Debug.WriteLine(factorial(5));
    }

  • Hi,

     

    while watching this new lecture, at 8:00, after adding Pred to the simple language interpreter, you say there's no change in its semantic domain. I'd expect there is a change, from type Value = Nat to type Value = Maybe Nat, with the definition of Nat staying the same, as type Nat = Int. Does this make sense?

     

    And, thanks a lot for the lectures! Very interesting stuff, and a clear presentation. Can't wait for the next ones, monads especially. Interpreters are of course the essence of Monads (?) (and vice versa :) ). In that light, when you have { interpret (Succ x) = (Just.(+1)) $$ interpret x }, it could've been redefined as an optimizing monad to push the (+1) inside, hoping to catch the rogue Pred early on, making { interpret (Succ (Pred x)) } always equivalent to { interpret (Pred (Succ x)) }, transforming it on the fly into just { interpret x }. Making { Succ(Pred Zero) } an invalid expression is too operational-minded IMO. I mean it in general, not here in this lecture of course where you have to keep things simple. And partial application would still be needed of course.

  • re: "semantic domain change from Value = Nat to Value = Maybe Nat"

     

    We had first:

     

    interpret :: Term -> Value where Value = Nat

     

    Then we had (option 1):

     

    interpret :: Term -> Maybe Value where Value = Nat

     

    As you mention, we could also have done instead (option 2):

     

    interpret :: Term -> Value where Value = Maybe Nat

     

    The reason for favoring option 1 on my side is that I view partiality (i.e., the application of Maybe) really as an "effect" involved in computing values. Here, I use the terms "values vs. computations" in the sense of monadic style already. (Interestingly, Wadler, in his essence paper, makes partiality part of his Value domain, see his constructor "Wrong", but it is only needed as long as he doesn't replace it by the use of an error monad.) Another reason for my preference is that this style also keeps Value accessible to extension (as needed for Boolean values and functions later).
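
    A sketch of what that extension could look like later in the series (the constructor names here are illustrative assumptions, not the lecture's actual code):

        type Nat = Int

        -- Option 1 keeps Value free of partiality, so Value can grow new
        -- alternatives while Maybe stays in the result type of interpret.
        data Value
          = NatVal  Nat                     -- natural numbers (this lecture)
          | BoolVal Bool                    -- Booleans, added later
          | FunVal  (Value -> Maybe Value)  -- functions for lambdas, added later

        -- interpret :: Term -> Maybe Value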

     

    Now, when I said that no semantic domain changed, I should really have emphasized that no *named* domain changed, but the result type of the meaning-giving function, i.e., the denotation type for terms, has changed. Precision would be gained by assigning an explicit name to the result type as in:

     

    interpret :: Term -> Meaning
      where Meaning = Maybe Value
            Value   = Nat

     

    Thanks for catching this.

     

    re: "Admitting Succ(Pred Zero)"

    re: "interpret (Succ (Pred x))  \equiv interpret (Pred (Succ x))"

     

    Not sure. If you can suggest some patched code, perhaps I can say whether I like it or not :)

     

    Do you agree with the following views?

     

    If someone really consumes the result of (Pred Zero), as in Console.WriteLine, then we have to throw.

     

    If someone's code is not strict in the result of (Pred Zero), then we can succeed (in a lazy language).
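
    To make these two views concrete, a hypothetical sketch, assuming the Term and Nat definitions above and a Maybe-free, genuinely partial variant of the interpreter:

        interpret' :: Term -> Nat
        interpret' Zero     = 0
        interpret' (Succ t) = interpret' t + 1
        interpret' (Pred t) = case interpret' t of
                                0 -> error "Pred Zero has no meaning"
                                n -> n - 1

        lazyUse   = length [interpret' (Pred Zero)] -- 1: the bad value is never forced
        strictUse = print (interpret' (Pred Zero))  -- throws: print is strict in its argument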

     

    Thanks for the comments. 

     

    Regards,

    Ralf

     

  • Thanks for clarifying the (Maybe Value) thing for me. :) I really did have Meaning and Value conflated.

     

    Your "views" at the end of the post make total sense to me. About the code, I was thinking along the lines of

     

      interpret Zero = Just 0
      interpret (Succ x) = interpret' 1 x
      interpret (Pred x) = interpret' (-1) x

      interpret' n (Succ x) = interpret' (n+1) x
      interpret' n (Pred x) = interpret' (n-1) x
      interpret' n Zero = if n >= 0 then Just n else Nothing

     

    This really goes to the separation of timelines (combination time vs. execution/run time), which I see as the essential feature of monads. To do something (here, (1+) ) after the processing is done, or while processing; after combining all the monadic actions, or while combining them.

     

    But it is an embellishment.

  • Wonderful proposal!

     

    There is nothing essentially monadic about this code.

    In fact, I don't know how to usefully classify this approach, but it is very systematic, of course.

    It smells a bit like CPS because of the way you pass around the current result of the constructors interpreted so far.

     

    At a general level, I think what you are proposing is to try to remain defined in cases where the partiality of a surface domain (such as Nat) can be accommodated by a more liberal, internal domain (such as Int). I can see that this overall idea of being more restricted at the interface level than in the internal design of a system with intermediate results of computation makes very good sense.

     

    This is an unusual technique in interpretation :)

     

    Would you say that your code is sufficiently equivalent (in intention and behavior) to the following, perhaps more direct, approach?

     

    data Term = Zero | Succ Term | Pred Term

    type Nat = Int -- We won't use negative numbers.

    interpret :: Term -> Maybe Nat
    interpret x = case interpret' x of
                    n | n >= 0    -> Just n
                      | otherwise -> Nothing
      where
        interpret' :: Term -> Int
        interpret' Zero     = 0
        interpret' (Succ x) = interpret' x + 1
        interpret' (Pred x) = interpret' x - 1

    main = do print $ interpret $ Succ $ Succ $ Succ $ Pred $ Pred $ Zero

     

    This version makes interpretation operate on Int but it projects to Nat at the very end.

     

    Cool, this got me thinking!

     

  • Yes, but what if we didn't have full Int to use at the intermediate level? My approach would still work, as in

     

    interpret Zero = Just 0
    interpret (Succ x) = interpretPos 1 x
    interpret (Pred x) = interpretNeg 1 x

    interpretPos n Zero = Just n
    interpretPos n (Succ x) = interpretPos (n+1) x
    interpretPos n (Pred x) = if n > 0 then interpretPos (n-1) x else interpretNeg 1 x

    interpretNeg n Zero = if n > 0 then Nothing else Just n
    interpretNeg n (Pred x) = interpretNeg (n+1) x
    interpretNeg n (Succ x) = if n > 0 then interpretNeg (n-1) x else interpretPos 1 x

     

    This way we don't demand the existence of an extended domain prior to our defining it (we didn't define negatives above, but we could now, just by using Either instead of Maybe).

     

    What I mean by my reference to monads is that I see the separation of the monad composition timeline and the monad execution timeline as essential to monads, which makes optimization (pre-processing) while composing possible, and that's what my version is doing. Also "monadic" is the way my interpretPos/interpretNeg pair encodes and carries along the additional data. Or something like that. :)

  • One might say that your revised approach effectively does operate on Int (incl. negative numbers) in the sense of using this encoding of Int (even though you switch between alternatives through functions rather than constructors):

     

    data PosNat = One | Succ PosNat

    data Int = Zero | PosInt PosNat | NegInt PosNat

     

    ... and yes, you could be using the reader (environment) monad for passing around the data, except that all your equations are non-oblivious to the extra argument (except perhaps the interpretPos case for Zero). In such a setting, the reader monad gives you little leverage.

     

  • Kashyap

    Hi Ralf,
    Could you please share the solution to the last riddle in the video?
    Regards,
    Kashyap
