Brian Beckman: Don't fear the Monad

Download

Right click “Save as…”

  • MP4 (iPhone, Android)
  • High Quality MP4 (iPad, PC, Xbox)
  • Mid Quality MP4 (Windows Phone, HTML5, iPhone)
  • Mid Quality WMV (Lo-band, Mobile)
  • MP3 (Audio only)
  • WMV (WMV Video)
Functional programming is increasing in popularity these days, given the inherent problems with shared mutable state that are rife in the imperative world. As we march on to a world of multi- and many-core chipsets, software engineering must evolve to better equip software engineers with the tools to exploit the power of multiple cores: that power won't come for free, as it did in the recent past when performance gains followed predictably from Moore's law.

Of course, learning new ways to think about programming semantics and code patterns is not always straightforward. For example, most imperative programmers (which includes most of us who build software for a living...) are somewhat perplexed by the notion of functions as first-class data structures that can be combined to create powerful and composable systems. Languages like Haskell are pure functional languages and require programmers to think in a different way, often in a precise mathematical fashion where composing and chaining functions is "the Way".

Dr. Brian Beckman, a Channel 9 celebrity, astrophysicist, and senior software engineer, thought it would be a very good idea to address the complexity of monads in an easy-to-understand way: a technical conversation at the whiteboard with yours truly for Channel 9.

This video interview is the result of Brian's idea that he can in fact remove the fear of monads from anybody who pays attention to his explanation. Of course, you can't cover monads in a vacuum (category theory is not really addressed here), so the context is functional programming: Brian covers functions, composable functional structures (function chains), monoids, and then monads.

Tune in. There's a lot to learn here and only Brian can make monads easy to understand for the rest of us!

Happy Thanksgiving to all the US Niners out there.

Enjoy.


Follow the Discussion

  • Hot damn! Can't wait to dig into this video. Brian is always a pleasure to watch and learn from.
  • Chadk:
    Brian Beckman = Awesome
    Hat = Awesome^2

    Brian Beckman * Hat = Awesome^3
    Thus this video = Awesome^3

    Cheers Charles. This looks REALLY good.
  • This is pretty cool. It's a bit abstract though. I can imagine that people who don't already know what monads are will get confused due to the lack of real examples.

    So let me try to comply, and just to be really clear I'll do an example in C#, even though it will look ugly. I'll add the equivalent Haskell at the end and show you the cool Haskell syntactic sugar which is where, IMO, monads really start getting useful.

    Okay, so one of the easiest monads is called the "Maybe monad" in Haskell. In C# the Maybe type is called Nullable<T>. It's basically a tiny wrapper type that just encapsulates the concept of a value that is either valid and has a value, or is "null" and has no value.

    A useful thing to stick inside a monad for combining values of this type is the notion of failure. I.e. we want to be able to look at multiple nullable values and return "null" as soon as any one of them is null. This could be useful if you, for example, look up lots of keys in a dictionary or something, and at the end you want to process all of the results and combine them somehow, but if any of the keys are not in the dictionary, you want to return "null" for the whole thing. It would be tedious to manually have to check each lookup for "null" and return, so we can hide this checking inside the bind operator (which is sort of the point of monads, we hide book-keeping in the bind operator which makes the code easier to use since we can forget about the details).


    Here's the program that motivates the whole thing (I'll define Bind later; this is just to show you why it's nice).

    class Program
    {
        static Nullable<int> f() { return 4; }
        static Nullable<int> g() { return 7; }
        static Nullable<int> h() { return 9; }

        static void Main(string[] args)
        {
            Nullable<int> z =
                f().Bind( fval =>
                g().Bind( gval =>
                h().Bind( hval =>
                new Nullable<int>( fval + gval + hval ))));

            Console.WriteLine(
                "z = {0}", z.HasValue ? z.Value.ToString() : "null" );
            Console.WriteLine("Press any key to continue...");
            Console.ReadKey();
        }
    }

    Now, ignore for a moment that there already is support for doing this for Nullable in C# (you can add nullable ints together and you get null if either is null). Let's pretend that there is no such feature, and that it's just a user-defined class with no special magic. The point is that we can use the Bind function to bind a variable to the contents of our Nullable value and then pretend that there's nothing strange going on, using them like normal ints and just adding them together. We wrap the result in a Nullable at the end, and that Nullable will either be null (if any of f, g or h returns null) or it will be the result of summing f, g, and h together. (This is analogous to how we can bind a row in a database to a variable in LINQ, and do stuff with it, safe in the knowledge that the Bind operator will make sure that the variable will only ever be passed valid row values.)

    You can play with this and change any of f, g, and h to return null and you will see that the whole thing will return null.

    So clearly the bind operator has to do this checking for us, and bail out returning null if it encounters a null value, and otherwise pass along the value inside the Nullable structure into the lambda.

    Here's the Bind operator:

            public static Nullable<B> Bind<A,B>( this Nullable<A> a, Func<A,Nullable<B>> f )
                where B : struct
                where A : struct
            {
                return a.HasValue ? f(a.Value) : null;
            }

    The types here are just like in the video. It takes an "M a" (Nullable<A> in C# syntax for this case), and a function from "a" to "M b" (Func<A, Nullable<B>> in C# syntax), and it returns an "M b" (Nullable<B>).
    The code simply checks if the nullable contains a value and, if so, extracts it and passes it on to the function; otherwise it just returns null. This means that the Bind operator will handle all the null-checking logic for us. If and only if the value that we call Bind on is non-null will that value be "passed along" to the lambda function; otherwise we bail out early and the whole expression is null.
    This allows the code that we write using the monad to be entirely free of this null-checking behaviour: we just use Bind and get a variable bound to the value inside the monadic value (fval, gval and hval in the example code), and we can use them safe in the knowledge that Bind will take care of checking them for null before passing them along.

    There are other examples of things you can do with a monad. For example, you can make the Bind operator take care of an input stream of characters, and use it to write parser combinators. Each parser combinator can then be completely oblivious to things like back-tracking, parser failures etc., and just combine smaller parsers together as if things would never go wrong, safe in the knowledge that a clever implementation of Bind sorts out all the logic behind the difficult bits. Then later on maybe someone adds logging to the monad, but the code using the monad doesn't change, because all the magic happens in the definition of the Bind operator; the rest of the code is unchanged.
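That parser idea can be sketched concretely. Below is a minimal, hypothetical parser monad (the names `Parser`, `bindP`, `unitP`, and `satisfy` are made up for illustration, not taken from any particular library); its bind threads the remaining input along and short-circuits on failure, just like the Nullable Bind above:

```haskell
import Data.Char (isDigit)

-- A parser wraps a function from input to Maybe (result, leftover input).
newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

-- Bind: run the first parser; on failure bail out, on success feed the
-- leftover input to the next parser. All the bookkeeping lives here.
bindP :: Parser a -> (a -> Parser b) -> Parser b
bindP p f = Parser $ \s ->
  case runParser p s of
    Nothing      -> Nothing
    Just (a, s') -> runParser (f a) s'

-- The unit: succeed without consuming any input.
unitP :: a -> Parser a
unitP a = Parser $ \s -> Just (a, s)

-- A tiny primitive: consume one character matching a predicate.
satisfy :: (Char -> Bool) -> Parser Char
satisfy ok = Parser $ \s ->
  case s of
    (c:cs) | ok c -> Just (c, cs)
    _             -> Nothing

-- Combine two character parsers with no visible error handling.
twoDigits :: Parser (Char, Char)
twoDigits =
  satisfy isDigit `bindP` \a ->
  satisfy isDigit `bindP` \b ->
  unitP (a, b)
```

Note that `twoDigits` never mentions failure or leftover input; all of that is hidden inside `bindP`.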

    Finally, here's the implementation of the same code in Haskell (-- begins a comment line).

    -- Here's the data type, it's either nothing, or "Just" a value
    -- this is in the standard library
    data Maybe a = Nothing | Just a

    -- The bind operator for Nothing
    Nothing >>= f = Nothing
    -- The bind operator for Just x
    Just x >>= f = f x

    -- the "unit", called "return"
    return = Just

    -- The sample code using the lambda syntax
    -- that Brian showed
    z = f >>= ( \fval ->
         g >>= ( \gval -> 
         h >>= ( \hval -> return (fval+gval+hval ) ) ) )

    -- The following is exactly the same as the three lines above
    z2 = do
       fval <- f
       gval <- g
       hval <- h
       return (fval+gval+hval)


    As you can see, the nice "do" notation at the end makes it look like straight imperative code. And indeed this is by design. Monads can be used to encapsulate all the useful stuff in imperative programming (mutable state, IO etc.) behind this nice imperative-like syntax, but behind the curtains, it's all just monads and a clever implementation of the bind operator!
    The cool thing is that you can implement your own monads by implementing >>= and return. And if you do so, those monads will also be able to use the do notation, which means you can basically write your own little languages by just defining two functions!
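As a sketch of that "two functions" claim, here is a hypothetical Writer-style `Logger` monad (a made-up type, not from the standard library). Modern GHC also requires `Functor` and `Applicative` instances, which are supplied mechanically below; once `>>=` and `pure` exist, do notation works for it automatically:

```haskell
import Control.Monad (ap, liftM)

-- A result paired with accumulated log lines.
newtype Logger a = Logger { runLogger :: (a, [String]) }

instance Functor Logger where
  fmap = liftM
instance Applicative Logger where
  pure a = Logger (a, [])
  (<*>)  = ap
instance Monad Logger where
  -- Bind concatenates the logs behind the scenes.
  Logger (a, w) >>= f =
    let Logger (b, w') = f a
    in Logger (b, w ++ w')

-- A primitive action that appends one log line.
record :: String -> Logger ()
record msg = Logger ((), [msg])

-- Code using the monad never touches the log plumbing:
demo :: Logger Int
demo = do
  record "starting"
  let x = 4 + 7
  record "added"
  return x
```

Running `runLogger demo` yields the result together with the collected log, with all the list concatenation hidden inside `>>=`.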

  • This example is perfection!  Thanks, Sylvan.
  • aL_ (Allan Lindqvist):
    cool video Smiley
    but i must say this: saying something is easy and simple and not scary does not make it so Smiley i do understand monads slightly better than i did before, but i can't say that i understand them fully.. it's a great vid, but perhaps (or most likely) i'm just stupid; i'd really need some more concrete examples. the part where brian says "this is your select and this is your..", more stuff like that Smiley

    also, i still have a problem seeing this as generally useful.. i do see it as useful, and very much so, for stuff like linq and math and the like, but for general programming? i'm a grassroots programmer (sorta, i do design too, it's a small company) and i just have a hard time seeing how one would write a whole program in something like f#.. how do you do ui (say a textbox) without mutable state? or something like an iterator? i just don't get that.. now i know that you can declare stuff as mutable in f#, but that makes it "less pure", right? but how would you do that in a pure functional language? of course, that does not mean there is no way to do it Smiley but i just don't see it..

    then again, perhaps that is not the point at all, but it seems from the JAOO vids that everyone "in the know" thinks that imperative is crap and everything should be written in functional languages.. i just don't.. agree Smiley for me it seems that the best thing would be to have functional constructs and ideas in an imperative language (aka c#3)
    because as anders said in another video: imperative languages are a superset of functional

    also, on a final note (my post is starting to get long): for me, it's kind of off-putting when someone says that it's impossible to make mistakes.. if there is but one universal truth it must be "there are no silver bullets", so if a claim is made that no mistakes can be made, one of two things must be true:
    1. the statement is false, and errors can be made
    2. the language/technique does not allow anything to be created, as all created things can potentially contain errors.

    now i realize that brian does not mean all errors, but still, more clarity as to what errors he does mean is needed for me

    in all the interviews where functional programming is mentioned, the same thing keeps popping up: "there can be no errors as long as you follow such and such rules".. but how can you be sure that the rules have been followed? is that trivial? is it even possible? is it also trivial to express your problem in such a way that it adheres to these rules? i somehow doubt that..

    the kinds of errors that can be avoided and the ones that can't, that is something that for me at least needs to be elaborated a lot more..

    nonetheless, it's a great and interesting video Smiley functional programming is cool, but it's no silver bullet (nothing is) and i think more focus really should be put on that.. otherwise people will just be misled and have false hopes about functional languages, and that's not good for anyone.
  • aL_ (Allan Lindqvist):
    by the way, i really really really want to understand functional programming even though i'm hardcore imperative by schooling

    i found this blog post
    http://www.atrevido.net/blog/2007/08/12/Practical+Functional+C+Part+I.aspx

    that really tries to explain functional programming in terms of c#3, something that for me at least is much easier to understand

    i haven't read all the parts yet, but everyone who has trouble understanding this functional stuff should check out that post Smiley
  • aL_ wrote:
    how do you do ui(say a textbox) without mutable state?


    You use a monad to model the mutable state!
    Btw, F# isn't purely functional, so you would just use standard .Net functions, but if you do want to understand how the purely functional languages do it, read on!

    Basically, how would you do IO in a purely functional language? Well it's tricky isn't it? You need to ensure that functions only depend on their inputs, so how would you go about reading a line of text from the terminal, say? You could do something like this:

    getLine :: World -> (String,World)

    Basically, a function that takes in the "World", and returns a string and a new World (one in which the input buffer of your computer contains a couple of characters fewer). This is just a semantic model; obviously the compiler doesn't actually pass around the whole world! So it's just a trick that makes "getLine" behave like a real function, even though it reads from the terminal.
    Then you would just pass this new World into the next IO function, which returns the next World, and so on. Of course, there's an obvious problem here. What happens if you accidentally pass in an old World into a function? Oops! We blow up! There's no way of actually storing the whole world, so this only works if the "World" is just some abstract "token" that we pass around to make our semantics hold. If we use an "old" world then we would need to retrieve a state of the world that no longer exists!

    So... Monads to the rescue! What if the monad, called "IO a" (see "M a" in the video) keeps track of the World object for us? So
    getLine :: IO String
    would represent an action which will return a string from the "World" and keep track of the new World returned. The bind operator (>>=) would be in charge of passing the World value from action to action. So as we compose actions together using >>=, we ensure that we always pass along the "new" World to the next monadic value down the line, and thereby guarantee that there is no way to use an old "World" value. Note that "getLine" is a value which represents the action of reading from the terminal; it doesn't actually do anything, it just models it, sort of like a shell script (shell scripts are just completely inert sequences of characters; you need to actually run them before they do anything). In Haskell, only the compiler knows how to run IO actions, so to the programmer everything is perfectly pure.

    Here's how a simple hello world program looks in Haskell without syntactic sugar:

    main = getLine >>= ( \ name ->
                putStr ("Hello, " ++ name ) )

    So first we grab the monadic value "getLine" and compose it with the lambda on the right (see the video). The >>= is implemented so that the "putStr" function gets passed the World that is returned by the getLine behind the scenes, but the user never has to see the World being passed around. All that is hidden by the bind operator!
    Here's the version with syntactic sugar:

    main = do
      name <- getLine
      putStr ( "Hello, " ++ name )

    This looks very similar to any imperative language, right? So voila! We have successfully embedded IO in our purely functional language, by hiding the "World passing" inside a monad! This is pretty much the main reason why monads are such a big deal in purely functional languages, but as we have seen there are many other monads as well. One of the more interesting ones is the ST monad. This monad just models mutable references. The difference between this one and the IO monad is that a value in the ST monad can actually be run by the programmer, which means that you can write functions that use mutable state, but as long as that mutable state doesn't leak out (the compiler checks this) you can wrap them up behind a pure function interface (using a function called "runST").

    Also, regarding the "imperative languages are a superset of functional" statement: that's not necessarily true. Imperative languages sacrifice referential transparency, so because an imperative language doesn't have everything purely functional languages have, it's not a superset.
    Also, as I've alluded to, pretty much all of the useful things about imperative programming (mutable state, IO) can be encapsulated in a purely functional language using monads, so in that sense imperative programming is a subset of functional languages!
  • So monads convert functions from this

    int f(int x)

    to this

    int* f(int x)

    where int* is int plus some encapsulation of relevant things that don't fit into your functional model, e.g., I/O, mutable state, etc.  Lambda functions are then used to transform int* back into int.

    int* => int

    The lambda functions allow the converted functions to seem monoidal like the original functions by isolating their side effects from the original functions.
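In Haskell terms, functions of that `int -> int*` shape (that is, `a -> m b`) compose directly via Kleisli composition, `(>=>)` from `Control.Monad`. A small sketch with two made-up `Maybe`-returning functions (`half` and `decr` are illustrative names, not from any library):

```haskell
import Control.Monad ((>=>))

-- Two functions of the monadic shape a -> Maybe b:
half :: Int -> Maybe Int
half n = if even n then Just (n `div` 2) else Nothing

decr :: Int -> Maybe Int
decr n = if n > 0 then Just (n - 1) else Nothing

-- Kleisli composition glues them like ordinary function composition,
-- with the Maybe bookkeeping handled by bind underneath.
halfThenDecr :: Int -> Maybe Int
halfThenDecr = half >=> decr
```

So the "converted" functions still compose like the originals, which is the sense in which the structure is preserved.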

  • JChung2006 wrote:
    

    So monads convert functions from this

    int f(int x)

    to this

    int* f(int x)

    where int* is int plus some encapsulation of relevant things that don't fit into your functional model, e.g., I/O, mutable state, etc.  Lambda functions are then used to transform int* back into int.

    int* => int

    The lambda functions allow the converted functions to seem monoidal like the original functions by isolating their side effects from the original functions.



    Yes, I think you're correct (if I understand what you mean). But I think it's worth pointing out that monads are not just for encapsulating things which don't look functional. LINQ is eminently functional, for example.
    Monads are like a design pattern that can be used to model certain libraries. Some of these libraries may model imperative features (IO, state, etc.), but some of them do perfectly ordinary things that just happen to fit monads very well (parsers, queries, etc.).
  • Here's another little twist ... I avoided saying this in the video, but Sylvan has given me the segue Smiley

    Eager evaluation is, in fact, a side effect!  Think about it: eager evaluation means that you KNOW function arguments get evaluated BEFORE the function gets called. So, a PURE functional language must be a lazy language (more technically, "non-strict," and sorry for the confusing lingo: STRICT means non-lazy).

    So we know that F# isn't purely functional because it's not lazy. That's ok, the monad style in F# / C# / VB is still very useful for compositional designs, even if the languages are not "pure." Some people call F# and ML and OCaml and Scheme, etc., "almost functional." 80% of the time, there won't be any difference between pure-functional and almost-functional. But you have to be careful at the fringes.

    You can build lazy libraries and such and get all the way to pure functional if you need it. Heck, you can implement Haskell in F#, so there Smiley

    There's lots more to say about this, so stay tuned.
  • Thanks, it is very cool video.

    Keep them coming as it is great to see how functional programming techniques are being adopted and explained in the wider scope Cool
  • aL_ (Allan Lindqvist):
    sylvan wrote:
    
    So... Monads to the rescue! What if the monad, called "IO a" (see "M a" in the video) keeps track of the World object for us? So
    getLine :: IO String
    would represent an action which will return a string from the "World", and keep track of the new World returned.


    but does that not mean that "IO a"/"M a" have state? Expressionless

    sylvan wrote:
    
    In Haskell, only the compiler knows how to run IO actions, so to the programmer everything is perfectly pure).


    ok, but let's say you wanted to get the current time of brian's clock, for example.. that isn't something the compiler can know about (right?) how can that be done in a pure functional language? (i'm not saying it's impossible, i just don't get how that's done)

    or is it considered pure if the state is just completely encapsulated? in that case, can an object that has some private variables and public methods be considered "pure" from a functional standpoint?

    sylvan wrote:
    
    Also, regarding the "imperative languages are a superset of functional" statement. That's not necessarily true. Imperative langauges sacrifice referential transparency, so because it doesn't have everything purely functional languages has, it's not a superset.
    Also, as I've alluded to, pretty much all of the useful things about imperative programming (mutable state, IO) can be encapsualted in a purely functional language using monads, so in that sense imperative programming is a subset of functional languages!


    ok, i'm not sure what you mean by referential transparency, but i'm going to have to take your word for it Smiley i only heard Anders Hejlsberg make that comment in another video and that's where i got it from.. it just seems to me that you can do functional programming in an imperative language just by deciding not to use mutable state, ie never reassign a variable Smiley but again, i'm not an expert on functional programming and there might be (and apparently are) more differences than that. but i just trusted that anders knew what he was talking about Smiley (erik meijer was in the room too and didn't seem to disagree)

    it should at least be easier to discipline yourself to follow rules that the compiler doesn't care about, than to break rules in the language. that is, decide to not use mutable state in a language that allows it (imperative), rather than try to use mutable state in a language that does not allow it (functional), right? (again, i don't know, but it seems that way to me Smiley)
  • "Referential Transparency" just means that every time you "refer" to an expression, you get the same value. It's very nearly the same thing as "no state" or "immutable variables" or even "evaluate lazily." Sin(35) is always Sin(35).

    More carefully, it means that if you define a variable or a function with an equation, then you can always substitute the right-hand-side of the definition any time the variable or function appears anywhere in an expression. If you define

       fact(n) = (n <= 0) ? 1 : n * fact(n - 1)

    then ANY place you see fact(x) you can just substitute (x <= 0) ? 1 : x * fact(x - 1).  Notice how I slyly slipped in the change of formal variable? The names of lambda variables don't matter so long as they don't collide with free variables.

    Also notice how this would get messed up if there were side effects going on against internal mutables, like the stateful machinery inside "random:"

       nonreftran(x) = (x <= 0) ? 1 : random() * x * nonreftran(x - 1)

    You can't replace every reference to "nonreftran" with its definition and be sure to get the same answer every time. Even evaluation isn't always referentially transparent:

       runaway(x) = concatenate(x, runaway (x + 1))

    The side-effect on the last one would be "eager evaluation" of runaway(x + 1). If that's evaluated lazily, then 

       takefirst(10, runaway(0))

    is safe. If eagerly, then it runs on. That explains why you really can only do "almost functional" programming unless you have lazy evaluation. Under eager evaluation, most of the time you're functional <-> referentially transparent. But sometimes you're not and you "fall off the boat."
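Brian's `runaway` example translates directly to Haskell, where evaluation is lazy by default, so taking a prefix of the infinite result is safe (using a list where the pseudocode used `concatenate`):

```haskell
-- An infinite list: under eager evaluation this would never terminate.
runaway :: Int -> [Int]
runaway x = x : runaway (x + 1)

-- take only forces the first ten elements, so this is perfectly safe:
firstTen :: [Int]
firstTen = take 10 (runaway 0)
```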
  • aL_ wrote:
    

    but does that not mean that "IO a"/"M a" have state?


    Well yes, but that's perfectly fine, because it's just a regular value.
    Again, think of it like a shell script. You can write a completely pure function that just pastes shell scripts together, right? I mean, they don't actually do anything; they just model a variety of impure actions. So as long as you're just building values that represent impure actions, you're safe. I'm sure you can see why it's perfectly pure to paste together text strings that represent shell scripts, right? They're just data! Well, IO in Haskell works the same way. The IO actions are just data. They don't do anything; they're just an abstract representation of actions that need to be run in order to do anything.

    There is no function of type "IO a -> a" in Haskell, so there is no way of "running" these actions for the programmer. The compiler takes a value called "main" of type "IO ()" ("()" is like "void"), and then, conceptually, it produces C code (or something) which is allowed to run your IO action. The point is that from Haskell's point of view, your IO actions are just data that represents various things you can do with a computer, and the programmer just glues together little actions like this into larger actions using >>=.

    aL_ wrote:
    

    ok but lets say you wanted to get the current time of brians clock for example.. that isnt something the compiler can know about (right?) how can that be done in a pure functional language? (im not saying its impossible, i just dont get how thats done)


    By modeling an IO action which represents the action of reading the value of the clock from the operating system. Again, think of it as a shell script. You can write a text string which represents the bash script that reads from the clock, right? Same thing here. The IO actions are just abstract values that represent the impure things, they don't actually do impure things (they need to be run in order to do something, and as I've said, the programmer has no way of doing this, only the compiler does).

    So in order to actually do anything with the value of the clock you would build a value of type IO a that represents the action of reading the time from the OS, and passing it to a pure function to do some calculations with it. That pure function is just regular functional programming, but at "the bottom" there's a layer of IO values which represent the impure bits.

    And again, refer back to the original example. This looks just like writing imperative code in any language, so it's not like it makes it any more difficult to use (you can pretend that the IO monad is just "the impure parts of Haskell" and be perfectly fine with that, but once you realise that it's actually just a way of building up values which represent impure computations, you can do much more cool stuff because now you can pass these values around in data structures, pass them around to general functions that work over any monad etc.).


    aL_ wrote:
    
    ok, im not sure what you mean by referential transparency


    It's just that you're guaranteed that a function only depends on its input. C# doesn't guarantee that, which means you can never strictly speaking call, for example, a virtual function (since that's unknown code) while within a context that requires purity (like a LINQ expression, or a transaction). In practice you just ignore it and hope nothing goes wrong, but it's much nicer if you can actually be sure that nothing will go wrong by having a language where side effects would be visible in the types.


    aL_ wrote:
    than try to use mutable state in a language that does not allow it (functional) right? (again i dont know, but it seems that way to me )


    Again, you can model imperative programing within a pure context. There are cases when mutable state is necessary for various algorithms, so in Haskell you would do this by building up a value in the ST monad, and while you're doing this you can use mutable variables, mutable arrays etc. Then at the end when you've composed all of these smaller ST actions ("readSTRef", "writeSTRef", etc.) into a large ST action, you pass it to a function called "runST" which returns a pure function. The compiler checks that the action you pass into runST doesn't "leak" any mutable state, so it's perfectly legal to have local mutable state, as long as it doesn't leak out.
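A minimal sketch of that `runST` pattern: an imperative-style sum written with a mutable `STRef`, wrapped up behind a pure function interface (the name `sumST` is just illustrative):

```haskell
import Control.Monad.ST
import Data.STRef

-- Local mutable state behind a pure interface: the STRef is created,
-- mutated, and read entirely inside the ST action, so runST can
-- legally turn the whole thing into an ordinary pure function.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0                      -- a mutable cell, local to this action
  mapM_ (\x -> modifySTRef acc (+ x)) xs -- imperative-looking loop
  readSTRef acc                          -- the final value escapes; the ref does not
```

Callers see only `sumST :: [Int] -> Int`; the mutation is invisible in the type, precisely because it cannot leak.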

    EDIT:
    I should clarify all of this by saying that, in my opinion, the practical everyday-programmer-Joe implications of using a pure language these days are fairly non-controversial. Purely functional programming today no longer means "no mutable state ever"; it just means "side effects are visible in the types". Haskell is full of side effects! You can go nuts with side effects if you want! But what you can not do is accidentally call a function with side effects, because all side effects are visible in the type. So I think that the perceived difficulty of using purely functional programming is probably a bit exaggerated these days, as all it means is "we're explicit about where side effects occur".

  • aL_ (Allan Lindqvist):
    brianbec wrote:
    "Referential Transparency" just means that every time you "refer" to an expression, you get the same value. It's very nearly the same thing as "no state" or "immutable variables" or even "evaluate lazily." Sin(35) is always Sin(35).


    ok.. but Math.Sin(35) also always returns the same thing.. isn't that referentially transparent then? if so, that means that you can get referential transparency in an imperative language as well, right? then i don't understand what sylvan means when he says functional languages have referential transparency and imperative ones don't O_o

    also, couldn't you have a random function and include it in some other function in, say, haskell? then that would not be referentially transparent, right?

    i don't see how this is bound to the language or language paradigm in question Expressionless

    brianbec wrote:
    
    That explains why you really can only do "almost functional" programming unless you have lazy evaluation. Under eager evaluation, most of the time you're functional <-> referentially transparent. But sometimes you're not and you "fall off the boat."


    ok.. i understand, i think Smiley but is that really bound to the language? linq uses deferred execution, right? is that the same as lazy evaluation?
  • littleguru (Christian Liensberger):
    This video makes me remember again, why I really love Haskell! Smiley Thank you for the interview and your time guys.

    I think it was a good explanation of monads, and it reminded me a little bit of the Math 1 course that I did in my first year at uni.
  • aL_ wrote:
    
    ok.. but Math.Sin(35) also always returns the same thing.. isn't that referentially transparent then? if so, that means that you can get referential transparency in an imperative language as well, right? then i don't understand what sylvan means when he says functional languages have referential transparency and imperative ones don't O_o


    Well, it's a bit like saying that a language is strongly typed. You can adhere to types in C and never do any casts, but as there is nothing stopping you from interpreting a float as a pointer, say, it cannot be said to be strongly typed.

    Math.Sin could, for all you know, on the ten thousandth invocation decide to format your hard drive!

    So if a language has referential transparency that means that functions always return the same value for the same input. Now clearly this isn't true in C#, say, as a function can contain reads and writes to global memory right? So you lose the property that functions are referentially transparent. Right? It is not true that a function in C# always returns the same result for the same inputs. It may be true in certain cases here and there, but it doesn't mean it holds true for the language in general.

    So I think you understand the concept, but are confused by what it means when used with respect to a language. When one says that a purely functional language is referentially transparent, that means that the property of referential transparency holds for the language in general. Every single function is always referentially transparent. This clearly isn't true for imperative languages.
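    To make the per-expression version of this concrete, here is a tiny Haskell sketch (the function names are invented for illustration): because `double` is pure, `double 21` can be substituted by its value anywhere without changing the program's meaning.

```haskell
-- A pure function: same input, same output, every time.
double :: Int -> Int
double x = x * 2

-- Referential transparency lets us swap an expression for its value
-- (and vice versa). These two definitions are guaranteed equal:
viaSharing :: Int
viaSharing = let y = double 21 in y + y

viaSubstitution :: Int
viaSubstitution = double 21 + double 21

main :: IO ()
main = print (viaSharing == viaSubstitution)  -- prints True
```

    In C# the analogous rewrite is not safe in general, because a method call may read or write global state between the two calls.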

    aL_ wrote:
    
    also, couldn't you have a random function and include it in some other function in, say, Haskell? Then that would not be referentially transparent, right?


    Nope, you can't. Random numbers have to be dealt with differently in Haskell (precisely because the language guarantees that every function must be referentially transparent). One way is to generate an infinite lazy stream of random numbers up front in the IO monad, and then pass that list of random numbers into whatever function needs them.
    Remember that random numbers typically aren't random at all. They're a sequence of pseudo-random numbers where the next one can be computed from the previous one (plus, possibly, some extra values). So you can have a function like:

    random :: RandomGenerator -> ( Double, RandomGenerator )

    And as you can see this looks very much like my IO example above (in fact, all so called "state monads" look like this), so you can wrap this "random number supplier" behaviour in a monad too!
    So you should start seeing now why monads, while useful in C# and other languages for things like LINQ, are extra super useful in purely functional languages as they allow us to work around apparent limitations in ways which are very similar (in terms of convenience) to how they're handled in imperative languages, and we don't have to lose referential transparency to do it!
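    As a hedged sketch of that `random :: RandomGenerator -> (Double, RandomGenerator)` shape, here is a toy generator with made-up linear-congruential constants (returning Int rather than Double to keep it short). The point is only that the generator state is threaded explicitly as a value, never mutated:

```haskell
-- Toy pure pseudo-random generator. The constants are illustrative
-- LCG parameters, not a recommendation.
newtype Gen = Gen Int

next :: Gen -> (Int, Gen)
next (Gen seed) =
  let seed' = (seed * 1103515245 + 12345) `mod` 2147483648
  in (seed', Gen seed')

main :: IO ()
main =
  let g0      = Gen 42
      (a, g1) = next g0
      (b, _)  = next g1
      (a', _) = next g0   -- same generator in, same number out
  in print (a == a' && a /= b)  -- prints True
```

    A state monad can hide the explicit threading of `Gen`, which is exactly the "wrap this random number supplier in a monad" idea described above.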
  • Allan Lindqvist (aL_) Kinect ftw
    sylvan wrote:
    
    Well yes, but that's perfectly fine, because it's just a regular value.
    Again, think of it like a shell script. You can write a completely pure function that just pastes shell scripts together, right? I mean they don't actually do anything, they just model a variety of impure actions.

    ok.. but isn't all code just a model of actions? The code itself doesn't do anything, it just pastes together pieces of assembly code that does the actual stuff, right?

    It appears to be a question of where one draws the line of abstraction.. let's say I write a program in C# that contains two assemblies. In one assembly I put all the impure mutable stuff, and in the other I just call the functions exposed by the first assembly.

    If I make sure that I have no mutable state in the second assembly, and that assembly only combines functions from the first assembly, couldn't that second assembly be called functionally pure? (hope that made sense)
    sylvan wrote:
    
    I'm sure you can see why it's perfectly pure to paste text strings that represent shell scripts together right? I mean they're just data! Well IO in Haskell works the same way. The IO actions are just data. They don't do anything, they're just an abstract representation of actions that need to be run in order to do anything.

    There is no function of type "IO a -> a" in Haskell, so there is no way of "running" these actions for the programmer. The compiler takes a value called "main" of type "IO ()" ("()" is like "void"), and then, conceptually, it produces C code (or something) which is allowed to run your IO action. The point is that from Haskell's point of view, your IO actions are just data that represents various things you can do with a computer, and the programmer just glues together little actions like this into larger actions using >>=.

    Instead of a shell script, can you think of these things we glue together as calls to some external thing that is not functionally pure and can contain mutable state?

    So functional programming is dependent on these other non-functional things, or shell scripts as you call them, to do stuff?
    And these impure actions are not part of the language?
    It's a bit confusing, because it seems that monads represent these impure actions, and monads are part of the language, but the actual impure stuff is not.. I'm not sure I understand how that works, but is that a correct assessment?

    What if you want to add something new that is mutable in nature? Like the position of a guy in a game, for example. Would you write the mutable part of the game (that is, the state of the game world) in something impure (or in an impure way) and then "model" various things in the world using pure functions? (like a gravity function or something)
    sylvan wrote:
    
    By modeling an IO action which represents the action of reading the value of the clock from the operating system. Again, think of it as a shell script. You can write a text string which represents the bash script that reads from the clock, right? Same thing here.

    ok, but at some point I actually do need to describe the thing that I want done, right? In other words, I need to write that shell script that reads the current time from the OS, and that can't be done in a pure functional way as I understand it.. am I correct?
    Let's say I want to keep track of how many times some arbitrary thing has happened.. the OS has no idea about that. Would one write a "shell script" (or impure C# method) to track that then?
    sylvan wrote:
    
    That pure function is just regular functional programming, but at "the bottom" there's a layer of IO values which represent the impure bits.

    ah.. so functional programming is dependent on this lower level that does all the dirty work of mutable state, so to speak? Smiley And this bottom layer does not have to be the OS, right? It can be a bunch of C# methods?
    sylvan wrote:
    
    ... it's much nicer if you can actually be sure that nothing will go wrong by having a language where side effects would be visible in the types.

    ok.. when you say "visible in the types" do you mean that some other type is returned if the function has a side effect?
    sylvan wrote:
    
    The compiler checks that the action you pass into runST doesn't "leak" any mutable state, so it's perfectly legal to have local mutable state, as long as it doesn't leak out.

    Purely functional programming today no longer means "no mutable state ever", it just means "side effects are visible in the types". Haskell is full of side effects! You can go nuts with side effects if you want! But what you can not do is accidentally call a function with side effects, because all side effects are visible in the type. So I think that the perceived difficulty in using purely functional programming is probably a bit exaggerated these days, as all it means is "we're explicit about where side effects occur".

    So there is mutable state in functional programming, but it's very tightly encapsulated?

    I must say that the most confusing thing about functional programming and functional languages is this notion that there is no mutable state.. this more and more seems to be a truth with modifications to me.. there is mutable state, but it's just encapsulated, right?


    One last thing (again, sorry for the loooong posts):
    I just want to say that I have no religious affinity to imperative languages Smiley I just want to understand how functional programming works and what it really is.. and the easiest way to do that, for me at least, is to relate it to something I already know; in my case, imperative programming Smiley I have absolutely no desire to prove that one is better than the other, and I'm sorry if I have made it sound like that.
  • Allan Lindqvist (aL_) Kinect ftw
    sylvan wrote:
    
    So if a language has referential transparency that means that functions always return the same value for the same input. Now clearly this isn't true in C#, say, as a function can contain reads and writes to global memory right? So I think you understand the concept, but are confused by what it means when used with respect to a language. When one says that a purely functional language is referentially transparent, that means that the property of referential transparency holds for the language in general. Every single function is always referentially transparent. This clearly isn't true for imperative languages.

    Yeah, I can buy into that Smiley I agree that C# functions are not generally referentially transparent. So the key here is to differentiate between a language that is referentially transparent and a particular function that is referentially transparent, correct? Smiley
    sylvan wrote:
    
    Random numbers have to be dealt with differently in Haskell (precisely because the language guarantees that every function must be referentially transparent).

    ok.. I'm going to go out on a limb here.. so then random numbers have a different type than, say, a regular int? Smiley
    sylvan wrote:
    
    So you should start seeing now why monads, while useful in C# and other languages for things like LINQ, are extra super useful in purely functional languages as they allow us to work around apparent limitations in ways which are very similar (in terms of convenience) to how they're handled in imperative languages, and we don't have to lose referential transparency to do it!

    Yeah.. since mutable state is forbidden in functional languages, I see why monads that apparently can create mutable state are useful, because it's hard to do stuff entirely without mutable state (you gotta agree on this one, right? Smiley)

    I still have a problem understanding how monads can have their cake and eat it too.. how they can have (or appear to have) mutable state and still not violate any rules of purity.. Expressionless But I've taken up enough of your time already, I think Smiley Thanks for trying to explain all this stuff :) Being a stubborn little bugger, I'll research this monad business more till I understand it fully (or at least better).

    btw, I found another blog post trying to explain functional programming in terms of C# 3:

    http://geekswithblogs.net/akraus1/articles/79880.aspx
  • aL_ wrote:
    
    ok.. but isn't all code just a model of actions? The code itself doesn't do anything, it just pastes together pieces of assembly code that does the actual stuff, right?


    Well yes, but I'm talking about actual values, not source code. E.g. in C you might have something like:

    char* readTheClockAction = "time /t"; // use DOS command

    readTheClockAction is now an abstract representation of the action that reads the time. So you can understand how we can use various string processing functions to splice together lots of these and get all sorts of "impure" behaviour into a little "script", right?
    Where this shell script analogy falls apart is when you actually want to interact with the results of these impure values, but monads can deal with this just fine.
    As you know the >>= operator takes a function on the right hand side. The parameter of this function would be a variable that gets bound to the result of the action on the left hand side. For example:

    getClockTime >>= \ t -> putStr ( "The time is " ++ show t)

    So, "getClockTime" is one of these little "scripts" (or "actions" as they are most often called), and so is "putStr". The >>= is just a really intelligent operator for pasting scripts together, in that it can bind a variable name to the output of the first action. This means that the perfectly pure operation of prepending "The time is ", and converting the time to a string ("show t") can be embedded within the script. So "show" and "++" are both completely pure functions.

    And again, just for clarity, here's the "nice" syntax for the above (this time without the layout dependent syntax, just to make it look even more familiar, with curly braces and all!)
    do {
      t <- getClockTime;
      putStr ( "The time is " ++ show t );
    }

    So you see these "actions" can actually use pure functions (but pure functions can't use actions). So a functional program has this "layer" at the bottom that consists of these actions, that every now and then call into the purely functional "core" of the application (where all the algorithms are).
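    A minimal runnable illustration of that layering (the `greet` function is invented for this sketch): the pure core is an ordinary function, and the thin IO layer just binds its result into an action.

```haskell
-- Pure "core": a value-to-value function, trivially testable.
greet :: String -> String
greet name = "Hello, " ++ name ++ "!"

-- Thin IO "layer": pastes actions together with >>= and calls the core.
main :: IO ()
main = getLine >>= \name -> putStrLn (greet name)
```

    Or with the do-sugar from above: do { name <- getLine; putStrLn (greet name); }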

    aL_ wrote:
    
    In one assembly I put all the impure mutable stuff, and in the other I just call the functions exposed by the first assembly. If I make sure that I have no mutable state in the second assembly, and that assembly only combines functions from the first assembly, couldn't that second assembly be called functionally pure?


    No, because the second assembly calls impure functions (in the first assembly), so it too would need to be impure. At least for the IO monad.

    Calling an IO function is impossible from a pure function. The only thing you can do with IO actions is to combine them into larger IO actions. Which means that the only place you can use an IO action like getClockTime, is from within another IO action.

    For the ST monad (which only gives you access to mutable variables, no other IO) you can wrap it up into a pure function (the type system guarantees that this is safe - so it's still referentially transparent), and in that case yes, you can write little snippets of "impure code" (again, it's really just monadic data that looks like impure code, and models it) here and there, and then wrap it behind a pure interface. The following nonsensical example illustrates:

    -- increment the contents of a reference
    inc ref = do {
       val <- readSTRef ref;
       writeSTRef ref (val+1);
    }

    plusOne :: Int -> Int
    plusOne x = runST ( do{
       ref <- newSTRef x;
       inc ref;
       result <- readSTRef ref;
       return result;
    } )

    plusOne is completely pure, even though internally it uses mutable variables (though there is no good reason for it in this case!). runST has a really clever type based on something called "rank 2 polymorphism" (ignore it for now) which ensures that it's impossible for any of the "mutable stuff" to "leak out" and result in referential transparency being violated. plusOne is thus a completely legal pure function.

    aL_ wrote:
    
    It seems that monads represent these impure actions, and monads are part of the language, but the actual impure stuff is not.. I'm not sure I understand how that works, but is that a correct assessment?


    A bit handwavy:

    It's a bit like XML in C#. The language itself has no way of actually dealing with XML natively, but that doesn't mean you can't do XML stuff, right? So you can use various XML libraries to produce XML documents, even though the language has no notion of XML. As far as the language is concerned, all it's doing is dealing with data (strings in this case).

    In Haskell the language has no native notion of IO, but you can use libraries to produce IO "actions". The compiler happens to know about these actions, and produces assembly code to "interpret" them. The end result is that we get the purity (after all, the Haskell code just produces values, just like C# code just produces text strings that happen to conform to XML, and neither knows nor cares that these values happen to represent IO actions, as opposed to cooking recipes or whatever).

    So it is a bit of a trick, but an extremely useful one, since it lets us have our cake and eat it too. We can remain pure as the code we write to do IO is actually just composing data values together (without actually doing anything) that the runtime system (written in C, say) can then "interpret". But at the same time we get a very convenient syntax that looks like imperative code for doing the typically imperative tasks. And again, lots of Haskell programmers just think of IO as "impure Haskell" and don't even care that it's in fact completely pure through a clever semantic trick.

    I should point out that the runtime doesn't actually do any runtime interpreting. The compiler spits out optimized native code just like in any other native language, but it's a useful semantic model.
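    The "actions are just data" idea can be sketched with a toy action type and an interpreter. This is emphatically not how GHC implements IO (as noted above, nothing is interpreted at runtime); it is only a model of the semantics, and all the names here are made up:

```haskell
-- A toy vocabulary of actions, represented as plain data.
data Action
  = PutLine String Action   -- print a line, then continue
  | Done                    -- stop
  deriving (Eq, Show)

-- Building an Action is completely pure: we only construct data.
sayTwice :: String -> Action
sayTwice s = PutLine s (PutLine s Done)

-- The impure part lives in one place: an "interpreter" (playing the
-- role of the runtime system) that walks the data and performs effects.
run :: Action -> IO ()
run (PutLine s rest) = putStrLn s >> run rest
run Done             = return ()

main :: IO ()
main = run (sayTwice "hello")  -- prints "hello" twice
```

    Pure code can build and combine `Action` values all day without doing anything; only `run` (the stand-in for the runtime) ever touches the real world.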

    aL_ wrote:
    
    what if you want to add something new that is mutable in nature? like the position of a guy in a game for example,


    Well, what makes you think it needs to be mutable? Is a position really mutable? Does a position disappear just because you move to a different position?
    In a purely functional way the position wouldn't mutate. You would have some representation of it, and then as the player moves around you would compute a new position from the old position (and user input), and from there on out you'd just use the new position rather than the old one to refer to the player's position (the old position still exists, though it would obviously be garbage collected).

    However, if you really do want to have mutable things (and sometimes you do), then both the IO and ST monads contain the concept of mutable references that you can use. And as you say, you would update this reference cell using pure functions.

    aL_ wrote:
    
    ah.. so functional programming is dependent on this lower level that does all the dirty work of mutable state, so to speak? And this bottom layer does not have to be the OS, right? It can be a bunch of C# methods?


    Yes, exactly. At some point the program actually needs to do something to be useful. This is why I said that purely functional doesn't really mean "no mutable state ever" anymore, it just means "principled use of mutable state, that is visible in the types" (see below).
    The key issue, though, is that there is some way of distinguishing between the "actions" that can model impure concepts (like mutable state) and functions that are guaranteed to be referentially transparent.

    aL_ wrote:
    
    ok.. when you say "visible in the types" do you mean that some other type is returned if the function has a side effect?


    Exactly. Take the following:

    getLine :: IO String
    aLine :: String

    getLine has the "IO" type constructor, so we know for a fact that getLine may do IO, and furthermore there is no way of extracting the value of getLine from within a pure function (you can only do it from within another IO action). aLine is just a string however, and you can be sure that aLine will never ever change, it will always refer to the same string.
    So the types make it absolutely clear where side effects are.
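    To see the type checker enforcing this, consider a small sketch (the names `shout` and `readShouted` are invented for the example):

```haskell
import Data.Char (toUpper)

-- Pure: the type String -> String promises "no side effects here".
shout :: String -> String
shout = map toUpper

-- Effectful: the IO in the type is a compiler-checked warning label.
readShouted :: IO String
readShouted = getLine >>= \line -> return (shout line)

-- This would NOT compile: getLine is an IO String, not a String,
-- so a pure function cannot consume it directly:
--
--   broken :: String
--   broken = shout getLine   -- type error

main :: IO ()
main = readShouted >>= putStrLn
```

    The only way to get at the String inside getLine is with >>= (or do-notation), which keeps you inside IO.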

    aL_ wrote:
    
    so there is mutable state in functional programming but its very tightly encapsulated?


    Precisely. You can use mutable state where you need it, but you can never use it by accident. The languages keep pure things and impure things entirely separate, and the type system tells you exactly what kind of effects various things have.

    This means, for example, that the compiler can be certain that a given function can be executed safely on a different thread (because it is referentially transparent, so it cannot possibly interfere with anything else). In C# the compiler can make no such assumptions. Take the TPL for example (see Channel 9 videos): it absolutely relies on the fact that the tasks you pass it won't interfere with each other, but it has no way of letting you know if this is not the case! So there's ample opportunity to make devastating concurrency errors there (and we all know how fun those are!), and the compiler can't help you, because the language has no way of recognizing which functions have side effects and which don't.

    Another place where this is crucial is for STM (see Channel 9 vids on that). Basically it has to do with transactions for communicating between threads with mutable state (yes, mutable state in Haskell!): if a transaction fails it may have to be retried later, but if the transaction contains a call to "launch_missiles", then running that transaction again could have catastrophic international consequences! In Haskell there is thus a restriction that only functions can be called within a transaction, no IO actions, which completely avoids such problems (with static guarantees).
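    As a hedged sketch of what such a transaction looks like in GHC (using the Control.Concurrent.STM module; the `transfer` example itself is invented): only STM reads/writes and pure code can appear inside, so the runtime is free to roll it back and retry.

```haskell
import Control.Concurrent.STM

-- Move an amount between two transactional variables. The STM type
-- means no IO (no launch_missiles!) can sneak into the transaction,
-- so it can be safely rolled back and retried.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  a <- readTVar from
  b <- readTVar to
  writeTVar from (a - amount)
  writeTVar to   (b + amount)

main :: IO ()
main = do
  alice <- newTVarIO 100
  bob   <- newTVarIO 0
  atomically (transfer alice bob 30)
  balances <- atomically ((,) <$> readTVar alice <*> readTVar bob)
  print balances  -- prints (70,30)
```

    `atomically` is the only way to run an STM action, and its type (`STM a -> IO a`) puts the result back in IO, so transactions can't leak either.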

    Other benefits include things like nested data parallelism, where many of the crucial steps for getting this extremely promising way to write parallel applications fast rely entirely on purity. It cannot work in an impure setting (see Simon Peyton Jones et al. on NDP in Haskell for that; there's a Google video talk about it too).

  • Allan Lindqvist (aL_) Kinect ftw
    sylvan wrote:
    
    So you see these "actions" can actually use pure functions (but pure functions can't use actions). So a functional program has this "layer" at the bottom that consists of these actions, that every now and then call into the purely functional "core" of the application (where all the algorithms are).

    ok, so it's these "actions" (which seem an awful lot like functions, with the exception that they can be impure) that in a sense "drive" the program? Can one see the program as one big action that combines a bunch of actions and sometimes calls pure functions?
    In a way it seems as if this bottom layer is actually at the top Smiley but I might be wrong Smiley
    sylvan wrote:
    
    the second assembly calls impure functions (in the first assembly) so it too would also need to be impure.

    Calling an IO function is impossible from a pure function. The only thing you can do with IO actions is to combine them into larger IO actions. Which means that the only place you can use an IO action like getClockTime, is from within another IO action.

    runST has a really clever type based on something called "rank 2 polymorphism" which ensures that it's impossible for any of the "mutable stuff" to "leak out"
     

    Now that's something I find completely confusing.. how can it be that
    a. calling an impure thing makes the caller impure
    and..
    b. impurity can be encapsulated

    It seems like I can't call an impure thing, because then I become impure (pun not intended :P). Then no one can call me, because then they also become impure.. it seems to me that the impurity would spread to the whole program Expressionless
    I can't call an IO function, because then I (the caller) become an IO function, and so does anyone that calls me, so the whole program becomes an IO function :O

    Is it this "rank 2 polymorphism" that prevents that from happening?
    sylvan wrote:
    
    It's a bit like XML in C#. In Haskell the language has no native notion of IO, but you can use libraries to produce IO "actions". The compiler happens to know about these actions, and produces assembly code to "interpret" them. The end result is that we get the purity (after all, the Haskell code just produces values, just like C# code just produces text strings that happen to conform to XML, and neither knows nor cares that these values happen to represent IO actions, as opposed to cooking recipes or whatever).
     

    ok, I think I get the concept.. but I still don't really understand how these actions can be "executed" by a pure function without the executing function becoming impure Expressionless but that has something to do with that rank 2 polymorphism, correct?
    How you actually do this in practice is also still a bit of a mystery, but I'm going to research that Smiley
    sylvan wrote:
    
    Well, what makes you think it needs to be mutable? Is a position really mutable? Does a position disappear just because you move to a different position?
    In a purely functional way the position wouldn't mutate. You would have some representation of it, and then as the player moves around you would compute a new position from the old position (and user input), and from there on out you'd just use the new position rather than the old one to refer to the player's position (the old position still exists, though it would obviously be garbage collected).
     

    If the player's position is not mutable (that is, can't be changed), how can he move around? No, the position doesn't disappear, but when I ask a player where he is (by calling something like getCurrentPosition()) he will answer differently before and after he moves. Doesn't that break referential transparency? But I do understand what you mean when you say that he has a new position: he is no longer at his old position, but his old position has not changed; he now has a new position, so the position itself is not mutable.. but his, and the world's, reference to where the player is located must have changed, right? You would have to assign his new position to something in order to keep track of it, right? And that something surely must be mutable?
    sylvan wrote:
    
    (yes, mutable state in Haskell!)

    <rant>Then why can't everyone just stop saying that there is no mutable state in Haskell, or functional programming in general? It's so very confusing Smiley there is very controlled mutable state, but there is mutable state (right?)
    </rant>
    Sorry, just had to let off a little steam at the functional programming community in general Smiley nothing personal
    sylvan wrote:
    
    In Haskell there is thus a restriction that only functions can be called within a transaction, no IO actions, which completely avoids such problems (with static guarantees).

    ok, I think I understand.. but isn't that a little limiting at times? Aren't transactions mostly needed when it comes to IO? I.e.: "either something is completely saved or it's not saved at all". But in Haskell a save operation would not be allowed in a transaction, because it's not pure, right?
  • aL_ wrote:
    
    ok, so it's these "actions" (which seem an awful lot like functions, with the exception that they can be impure) that in a sense "drive" the program? Can one see the program as one big action that combines a bunch of actions and sometimes calls pure functions?


    That's correct.

    aL_ wrote:
    
    Now that's something I find completely confusing.. how can it be that
    a. calling an impure thing makes the caller impure
    and..
    b. impurity can be encapsulated

    It seems like I can't call an impure thing, because then I become impure (pun not intended). Then no one can call me, because then they also become impure.. it seems to me that the impurity would spread to the whole program
    I can't call an IO function, because then I (the caller) become an IO function, and so does anyone that calls me, so the whole program becomes an IO function


    Some impurity can be encapsulated. It's true that if you call an IO action you too must be an IO action. It's "contagious", you could say. That's why you should be careful to structure your program so you have a thin "layer" of IO at the bottom, and then this IO layer calls into your pure "core" of the program.
    No well-designed program mixes IO and algorithms anyway, so it's not really a big problem, in my opinion, but here the system will force you to keep them separate.

    Mutable references (the ST monad) can be encapsulated, in that you can "run" the action and turn it into a normal function. But IO has no "run function".

    aL_ wrote:
    
    If the player's position is not mutable (that is, can't be changed), how can he move around? No, the position doesn't disappear, but when I ask a player where he is (by calling something like getCurrentPosition()) he will answer differently before and after he moves. Doesn't that break referential transparency?


    Well, variables never change their meaning within the current scope. If you call a function that does something depending on the player's position, then this position can be different every time you call the function. Basically, imagine something like this:

    movePlayer :: GameState -> Input -> GameState

    I.e. the function takes the "state" of the game, and the input (which is passed in by the IO layer) and computes a new game state. The old game state still exists if you want to use it, this function just computes a new game state from the old one. Then next frame you do this again, and again and again, and each frame you compute an entirely new game state from the previous one. You never need to mutate anything. Now under the covers the compiler will probably reuse most of the gamestate and not actually make a full copy each frame (as that would be slow), but semantically there is no reason to start thinking in terms of mutation. (you'd also have a function like "renderGame :: GameState -> IO ()" that you have to call each frame)
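    Here is a minimal runnable version of that idea (the concrete types are invented for the sketch): each "frame" computes a brand-new GameState, and the old ones are simply no longer referenced.

```haskell
data Input = MoveLeft | MoveRight

newtype GameState = GameState { playerX :: Int }

-- Pure update: computes a NEW state from the old one; nothing mutates.
movePlayer :: GameState -> Input -> GameState
movePlayer s MoveLeft  = GameState (playerX s - 1)
movePlayer s MoveRight = GameState (playerX s + 1)

main :: IO ()
main =
  let s0 = GameState 0
      s1 = movePlayer s0 MoveRight
      s2 = movePlayer s1 MoveRight
      -- s0 and s1 still exist, unchanged; we just refer to s2 now.
  in print (playerX s0, playerX s1, playerX s2)  -- prints (0,1,2)
```

    A real game loop would fold movePlayer over the stream of inputs, handing each new state to something like renderGame :: GameState -> IO () in the IO layer.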

    aL_ wrote:
    
    <rant>Then why can't everyone just stop saying that there is no mutable state in Haskell, or functional programming in general? It's so very confusing. There is very controlled mutable state, but there is mutable state (right?)
    </rant>
    Sorry, just had to let off a little steam at the functional programming community in general. Nothing personal


    Probably because this actually used to be true before some smart people figured out that monads could be used to do all sorts of "impure" things in a pure setting. Also because it's a nice short description of something that's bound to be incomplete when you look at the details.

    aL_ wrote:
    
    ok, I think I understand.. but isn't that a little limiting at times? Aren't transactions mostly needed when it comes to IO? I.e.: "either something is completely saved or it's not saved at all". But in Haskell a save operation would not be allowed in a transaction, because it's not pure, right?


    Well, the whole purpose of the STM monad is to write to mutable variables with transactional semantics for concurrency, so you can do those things because the STM monad guarantees that they can be rolled back without causing problems. But aside from the specific things that the STM monad gives you (reading and writing to transactional variables) you're not allowed to mix in anything impure (like IO). I think there are plans to add hooks for transactional IO though, so you could write DB access etc. too. Not sure about that, but right now it's primarily about avoiding locks for concurrency.
  • Andrew Davey (www.aboutcode.net)

    Will we ever see a Microsoft backed version of Haskell for .NET?

  • I'm a C++ programmer who has been interested in learning what's behind all the recent "fuss" about functional languages.

    I no longer "fear" monads, but I am now quite perplexed about why they're even worth discussing! I am willing to accept that I'm totally missing the boat here, but please do help me understand...

    Let's see if I can paraphrase one example from the video:

    we have two functions:

    f: a -> Ma
    g: a -> Ma

    and we wish to compose them, but alas one cannot pipe output from one straight into the other. So we define a "bind operator" that lets us do this; we use some fancy formatting and rules to describe a lambda function that inside it deals with the different input/output types in order to get the two functions interacting happily.

    Er, do we need functional programming to do this?

    It seems like in any language/paradigm I can very simply create a new "ordinary" function:

    h: Ma -> a

    responsible for stripping off the M, and compose the two quite quickly. Is this approach not equivalent? What makes "bind" so special? For example:

    struct M { int data; foo meta; };

    M f( int ) { ... }
    M g( int ) { ... }
    int h( M ) {... }

    M compose_fg( int x ) { return g( h ( f( x ) ) ); }

    So where have I gone wrong Smiley ?
  • El Guapo wrote:
    
    Er, do we need functional programming to do this?


    Nope. As witnessed by LINQ in C#, for example. This happens to be even more useful in purely functional languages, as it solves the "problem" of impure things like IO and side effects too, but there's nothing stopping you from using it for some of its other benefits in an imperative language.


    El Guapo wrote:
    
    It seems like in any language/paradigm that I can very simply create a new "ordinary" function:

    h: Ma -> a

    responsible for stripping off the M and compose the two quite quickly. Is this approach not equivalent? What makes "bind" so special?


    Bind is special because it can compose a monadic value M a, with a function a -> M b, and produce an M b. We can thus have a totally opaque monadic value M, (i.e. we may not necessarily have a way of actually *directly* accessing its internal structure outside of the bind function -- sometimes we don't want a function of type  M a -> a, for the same reason classes have private members), yet we can still use it in a controlled way by binding the value that is "wrapped" by the monad on the left hand side, to the parameter of the lambda on the right hand side in >>=. So the user of our monad can still directly access the content of the monad by binding a lambda to it, and doing whatever they want to do in the body of the lambda, but the author of the bind function has lots of power to determine precisely when and how this lambda gets called.
    Note that this does not need to be a trivial operation. In the list monad, for example, the bind operator will not just pass along a single value to the lambda on the right, but every single element of the list (it then concatenates the result to end up with a list, rather than list of lists).
    For example, the following Haskell code demonstrates a simple use of the list monad:

    [1,2] >>= (\ x -> [3,4] >>= (\y -> return (x,y)))

    Will give
    [ (1,3), (1,4), (2,3), (2,4) ]

    x is bound to every element of the first list, and then for each of those elements y is bound to every element in the second list, then in the innermost lambda a pair between these two elements is returned.
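    The same computation can also be written with do-notation, which the compiler desugars into exactly that chain of >>= and lambdas:

```haskell
-- Do-notation version of the nested lambdas above;
-- "x <- [1,2]" binds x to each element of the list in turn.
pairs :: [(Int, Int)]
pairs = do
  x <- [1, 2]
  y <- [3, 4]
  return (x, y)

-- pairs == [(1,3),(1,4),(2,3),(2,4)]
```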


    So it's not just a matter of simply unwrapping a value trivially, there can be much more logic behind it. (btw, the list monad starts to sniff around in the area of why LINQ is monadic).
    Another example is a logging monad, where the >>= operator is responsible for passing along a logging output, so that the code you write doesn't need to worry about threading along some logging data through all its functions, that can be hidden by the bind operator.

    I should mention something that has perhaps been neglected. As you sort of point out, there's nothing too special about monads. You can easily envision the list monad example above being independently invented in any number of shapes and forms for various programs. The neat thing is that if you do recognize it as being a monad, and implement the bind and return function, then you get to reuse lots of functions already written that work on any monad (and perhaps even more important from a convenience standpoint, in languages like Haskell and now F# you get to use the nice monadic syntax for your library if you can express it monadically).
    Consider, for example, how you would write a logic truth table of arbitrary size. Well with the list monad we just want to "shove"/"bind" each element of the list [True,False], N times in sequence, which will give us every combination of True and False for N variables. As it happens, there's a standard function called replicateM that will help here (it just replicates a monadic action N times, and returns a list of the results of "shoving"/"binding" the monadic values in sequence). So if you understand the list monad, then constructing a truth table is trivial:

    truthTable n = replicateM n [True,False]

    This produces a list of rows, where each row is a list of Bools of length N.

    So, one of the benefits of bothering with monads, is that we can write all sorts of cool functions that work over any monad, which helps us avoid boilerplate code. Needless to say, replicateM will do something entirely different for IO monads, parser monads, etc. etc. The reason it can be used for vastly different problems is because it operates over a common structure (with the overloaded methods >>= and return), monads!
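    For reference, the truth-table one-liner in compilable form (the same code as above, just with an explicit type and import):

```haskell
import Control.Monad (replicateM)

-- Each row is one way of choosing True or False, n times,
-- generated via the list monad's bind.
truthTable :: Int -> [[Bool]]
truthTable n = replicateM n [True, False]

-- truthTable 2 == [[True,True],[True,False],[False,True],[False,False]]
```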
  • El Guapo wrote:
    I'm a C++ programmer who has been interested in learning what's behind all the recent "fuss" about functional languages.

    I no longer "fear" monads, but I am now quite perplexed about why they're even worth discussing! I am willing to accept that I'm totally missing the boat here, but please do help me understand...

    Let's see if I can paraphrase one example from the video:

    we have two functions:

    f: a -> Ma
    g: a -> Ma

    and we wish to compose them, but alas one cannot pipe the output of one straight into the other. So we define a "bind" operator that lets us do this; we use some fancy formatting and rules to describe a lambda function that internally deals with the different input/output types in order to get the two functions interacting happily.

    Er, do we need functional programming to do this?

    It seems like in any language/paradigm that I can very simply create a new "ordinary" function:

    h: Ma -> a

    responsible for stripping off the M and compose the two quite quickly. Is this approach not equivalent? What makes "bind" so special? For example:

    struct M { int data; foo meta; }

    M f( int ) { ... }
    M g( int ) { ... }
    int h( M ) {... }

    M compose_fg( int x ) { return g( h ( f( x ) ) ); }

    So where have I gone wrong Smiley ?


    Once you strip off the "M," you've lost the data (or side effects or whatever) you're trying to track. That's why "return" and "bind" get you into the Monad, but there's no way to get out of the Monad. Once you're doing clock arithmetic (mod by 12), you can't get back out (was that "5" originally a "17"?).

    In Functional Programming (FP), all data must ultimately be tracked in function arguments. That's why >>= is called "bind," because it "binds" function arguments implicitly inside the Monad.
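    In Haskell this style of composition even has its own operator, Kleisli composition (>=>). A small sketch in the Maybe monad (my own illustrative example; safeRecip and safeSqrt are made-up names):

```haskell
import Control.Monad ((>=>))

-- Two functions of shape a -> M b, here with M = Maybe:
safeRecip :: Double -> Maybe Double
safeRecip 0 = Nothing
safeRecip x = Just (1 / x)

safeSqrt :: Double -> Maybe Double
safeSqrt x
  | x < 0     = Nothing
  | otherwise = Just (sqrt x)

-- Kleisli composition pipes the wrapped result of one
-- into the next, without ever leaving the monad:
recipThenSqrt :: Double -> Maybe Double
recipThenSqrt = safeRecip >=> safeSqrt

-- recipThenSqrt 4 == Just 0.5
-- recipThenSqrt 0 == Nothing
```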

    In C++, we track data in blocks of memory that we alloc, free, poke and peek as needed. This is so ordinary that we do it without thinking (I programmed in asm, C and C++ for 25 years Smiley), and it works so well that it may just seem completely stupid to get rid of it and replace it with another twisted, purist discipline... Why ON EARTH would anyone do anything a different way??? Well, the "C-ish" discipline has the sequential, Von Neumann machine architecture baked into it. We have no choice but to do something else when confronted with:

    1. Distributed, concurrent systems, where we can't access all memory by pointers

    2. Transforming programs themselves on-the-fly, requiring referential transparency -- that is, the guarantee that a program means the same thing no matter where and when you encounter it. This is a scenario of refactoring, dynamism, embedded domain-specific languages, and compiler writing (these things are, of course, possible in C++, but one ends up re-inventing large chunks of functional programming and one would be better off just starting with it good and proper at the outset)

    3. Controlling complexity -- functional programs fit into a small, straightforward mathematical scheme, thus can be completely understood and "reasoned about." It has proven to be much more difficult to reason about programs with pointer aliases. Reasoning about programs means we know all the time exactly what we're doing and expect many fewer surprises during debug time.

    4. Software engineering -- functional programs are just shorter than the equivalent imperative programs. Shorter programs are better programs. Functional programs are also much more modular, therefore much easier to split into reusable parts and also to distribute over multiple machines

    It goes on ... I hope I've helped

  • sylvan wrote:
    
    I should mention something that has perhaps been neglected. As you sort of point out, there's nothing too special about monads. You can easily envision the list monad example above being independently invented in any number of shapes and forms for various programs. The neat thing is that if you do recognize it as being a monad, and implement the bind and return function, then you get to reuse lots of functions already written that work on any monad (and perhaps even more important from a convenience standpoint, in languages like Haskell and now F# you get to use the nice monadic syntax for your library if you can express it monadically).


    In fact, you can create your own LINQ bindings-over-whatever by simply implementing the Standard Query Operators (see, for example, http://www.hookedonlinq.com/Default.aspx?Page=StandardQueryOperators ). These are just static "extension" methods -- methods whose first argument is an instance of a certain type. Then the C# / VB / F# syntax will "just work" over your new types because the languages recognize these kinds of methods.

    You will have implemented your own monad, plus lots and lots of higher-order convenience operators, with LINQ! I would recommend starting with the trivial monad -- the monad that does nothing but adhere to the type discipline. But Sylvan has given you enough meat on the bones that you should be able to implement your own list monad and see the result of your equivalent to

    [1,2] >>= (\x -> [3,4] >>= (\y -> return (x,y)) )
  • Great video! As it turns out this whole thing "theoretically" is not scary.

    In practice though I am confused.
    (The previous video with Monads that BB did on Channel 9 was really interesting. I remember that he talked in that video about the clock analogy, and I remember trying to figure out what he was talking about. I now understand the concept, but to be 100% honest I simply don't get the code stuff.)
    I just admire the concept of treating functions the same way as variables/data.

    I guess that there is a trilogy there that needs to be closed. One more video about Monads, Plz. That's an order Big Smile
  • Brian Beckman, the Gandalf of Programming!
     
    is it possible to mix c# and f#? (I'm new to this, obviously)
  • I understand monads better now. Once I understand them "fully", perhaps it's time to move on to arrows? http://www.haskell.org/arrows/. Also, are monads really "the" way to structure libraries?
  • William Stacey (staceyw) Before C# there was darkness...

    Great.  However, I'm still confused about the real world and how this removes issues with shared state and locking.  So, for example, I have one Customer object (Name and OrderCount) that is shared today in my program.  Say I have a shared static ref to it, and threads in my program take a lock before reading or writing any properties on this object.  So how does this look with functional programming?  I assume it involves making ctor copies, but I don't see the light yet.  A small example would help.  tia

  • Apogeum wrote:
    is it possible to mix c# and f#? (I'm new to this, obviously)


    Yes, since both C# and F# are full "citizens" of .NET, you can call methods in one from the other and back very easily.
  • esoteric wrote:
    Also, are monads really "the" way to structure libraries?


    I would say that "composability" or "compositionality" is THE way to structure libraries. Monads aren't the ONLY way to build-in compositionality from the outset in a design, but they're a very common pattern, broadly applicable, fully understood.

  • Here are a couple of great, motivational papers on functional programming in general and Haskell in particular

    "Why Functional Programming Matters,"

    http://www.md.chalmers.se/~rjmh/Papers/whyfp.html


    "Why Haskell Matters,"

    http://www.haskell.org/haskellwiki/Why_Haskell_Matters

  • That's great!

    I still don't think I understand monads and functional programming; my lack of a math background is complicating things. But I totally get the complexity-built-on-simplicity-to-ensure-composability part, that is kind of my whole bid.

    Simplicity meaning "easy to decide", as opposed to "easy, because I don't have to decide". So I guess functional programming will still require a very particular mindset to be successfully employed.

    I am really looking forward to seeing more great interviews like this one.

  • Charles Welcome Change
    Apogeum wrote:
    That's great!

    I am really looking forward to seeing more great interviews like this one.



    You can bet on it. Next up will be a deep dive into pure functional programming (and why, at least from the interviewee's point of view, it's the only way to go) with another expert in the domain and great personality.

    Stay tuned.
    C
  • A possible supplement to Brian's excellent introduction to Monads.

    The n-Category Café

    featuring YouTube videos.
  • "All nouny, no verby"

    Cool!

  • esoteric wrote:
    A possible supplement to Brian's excellent introduction to Monads.

    The n-Category Café

    featuring YouTube videos.


    Superb!!!
  • Hi guys,

    Both Brian's F# interview and Joe Armstrong's Erlang interview(from JAOO) have got me really interested in functional programming.


    So I'm looking to get started with functional programming, and Erlang and F# have both sparked my interest equally... so I'm kind of stuck on which one to study.


    Has anyone got any decent pro/con lists for me to take into consideration? Unfortunately (or, most probably, fortunately), the web isn't littered with Erlang vs. F# wars the way it is with C# vs. VB.NET, Java vs. C++, etc., for me to browse.

    Cheers Smiley

  • Beta wrote:
    

    Hi guys,

    Both Brian's F# interview and Joe Armstrong's Erlang interview(from JAOO) have got me really interested in functional programming.


    So I'm looking to get started with functional programming, and Erlang and F# have both sparked my interest equally... so I'm kind of stuck on which one to study.


    Has anyone got any decent pro/con lists for me to take into consideration? Unfortunately (or, most probably, fortunately), the web isn't littered with Erlang vs. F# wars the way it is with C# vs. VB.NET, Java vs. C++, etc., for me to browse.

    Cheers



    To get into full, pure functional programming, the only game in town is Haskell (www.haskell.org) Smiley The web site is loaded with tutorials and samples and other beautiful stuff. If you have the time to dig in and learn, it will be intellectually transforming and rejuvenating. But there's no bridging the chasm back even to the partly-functional world of .NET (getting into Haskell is like getting into a Monad -- you can't get out [pun intended]).

    C# 3.0, VB 9, and F# all three now have first-class functions (lambda expressions), so you can experience many of the benefits of functional programming in any of those three, and F# has a syntax and a "look and feel" closer to Haskell, especially for purely sequential programs. If you've done any Scheme programming, then you will breathe a sigh of happiness with any of these three languages, now with LINQ and lambda. I don't have any experience with Erlang, so I can't chime in there.
    Thanks for a great video, it has been the best introduction to monads I've seen so far! Most of the "monad tutorials" really suck and they all forget about the outer lambda in composition Smiley Composability is explained perfectly; however, composability could be achieved just by having proper type signatures! Imagine you have a semigroup: a set of objects and an associative operation. You don't have an identity, but you are still closed under composition. So it is not clear what the purpose of the "monadic rules" is: why are associativity and identity required? What changes if (>>=) is not associative? What can't be achieved when there is no 'return'? I hope it will be a good theme for a next lecture.
  • Hello Guys,

    Great video! It's always great to see Brian.

    (Brianbec fans! Do not miss Brian's interview at
    http://channel8.msdn.com/Posts/Brian-Beckman-I-have-an-idea-then-what
    Smiley

    May I ask a question? How many real-world applications are there that are based on functional programming? I heard somewhere that functional principles appear in Google Maps and Google Earth. I'm not sure whether it's true or not; I just mention it. It'd be nice to learn more about some real examples.
  • A good paper about monads is actually by Simon Peyton Jones from Microsoft:
    http://research.microsoft.com/Users/simonpj/papers/marktoberdorf/
    You have to have some Haskell knowledge though..
  • dottedmag wrote:
    Thanks for a great video, it has been the best introduction to monads I've seen so far! Most of the "monad tutorials" really suck and they all forget about the outer lambda in composition Smiley Composability is explained perfectly; however, composability could be achieved just by having proper type signatures! Imagine you have a semigroup: a set of objects and an associative operation. You don't have an identity, but you are still closed under composition. So it is not clear what the purpose of the "monadic rules" is: why are associativity and identity required? What changes if (>>=) is not associative? What can't be achieved when there is no 'return'? I hope it will be a good theme for a next lecture.


    These are great questions. The identity is in there so we can "lift" values into the computation (that is, the monad) without changing them. No return, no general way to get into the monad in the first place. 

    Associativity makes monads good for modeling ordered collections like lists. It would be terrible if [a] ++ ([z] ++ [c]) weren't guaranteed to be the same as ([a] ++ [z]) ++ [c]. Imagine a ++ operator that eats its left-hand side if it's out of lexicographic order; then

    [a] ++ ([z] ++ [c])  =  [a] ++ [c]  = [a, c]
    ([a] ++ [z]) ++ [c]  =  [a, z] ++ [c]  =  [a, z, c]

    This bogus ++ is not monadic because it's not associative, and it's a good thing to exclude it from our general (but not too general) discipline.
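    For the real list monad, associativity does hold; a quick Haskell check of the law (my own sketch):

```haskell
-- Both groupings of the same binds must agree
-- (the monad associativity law):
lhs, rhs :: [Int]
lhs = ([1, 2] >>= \x -> [x, x + 10]) >>= \y -> [y * 2]
rhs = [1, 2] >>= (\x -> [x, x + 10] >>= \y -> [y * 2])

-- lhs == rhs == [2,22,4,24]
```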

    There is probably a deeper reason (hello, categorists Smiley), but this is what popped to mind.
  • akopacsi wrote:
    Hello Guys,

    Great video! It's always great to see Brian.

    (Brianbec fans! Do not miss Brian's interview at
    http://channel8.msdn.com/Posts/Brian-Beckman-I-have-an-idea-then-what
    Smiley

    May I ask a question? How many real-world applications are there that are based on functional programming? I heard somewhere that functional principles appear in Google Maps and Google Earth. I'm not sure whether it's true or not; I just mention it. It'd be nice to learn more about some real examples.


    Since you mentioned Google, map-reduce, which is the backbone of their massively distributed, parallel infrastructure, is a composition of "reduce", which applies some given aggregate, associative calculation like sum or average or standard-deviation to a recursive data structure; with "map", which applies arbitrary transformations to a recursive data structure. It's about as functional and real-world as it gets.
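    A toy Haskell sketch of that shape (my own illustration, not Google's implementation): map a pure transformation over each chunk, reduce each chunk with an associative operation, then reduce the partial results.

```haskell
-- Toy map-reduce: because `combine` is associative, the
-- per-chunk reductions could run on different machines.
mapReduce :: (a -> b) -> (b -> b -> b) -> b -> [[a]] -> b
mapReduce f combine z chunks =
  foldr combine z (map (foldr combine z . map f) chunks)

-- mapReduce (* 2) (+) 0 [[1, 2], [3, 4]] == 20
```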

    There are tons of other examples. Some in the financial quants industry are using Haskell programs to model derivatives yields. I know the crypto community is going whole-hog for Haskell. Don's blog (http://blogs.msdn.com/dsyme/ ) will be a great place to look at real-world applications of F#. Erlang is all over the communications industry. It's endless Smiley

    EDIT: I wanted to add a side comment: one reason some mission-critical applications are going with Haskell is that if you go "all the way" to pure functional programming, where all side-effects are accounted for in type signatures, then more proofs of correctness become available. In certain cases, this is required.

    But, on the downside, you can't just plunk down a "print" statement any-old-where to help you debug a Haskell program. Any computation that might EVER print MUST be in the IO monad. So if you're ever going to plunk down a print, you must plan ahead for it or rewrite for it. In the "impure" world of .NET, with C#3.0, VB9, and F#, you have many of the benefits of functional style, but you can "cheat" where necessary or useful and do side effects from anywhere.
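    What that looks like in the types, as a tiny sketch (double and doubleAndLog are made-up names):

```haskell
-- A pure function: its type promises no side effects,
-- so no print can appear in its body.
double :: Int -> Int
double x = x * 2

-- Any computation that might ever print must carry IO
-- in its type, all the way up the call chain:
doubleAndLog :: Int -> IO Int
doubleAndLog x = do
  putStrLn ("doubling " ++ show x)  -- legal only inside IO
  return (x * 2)
```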
  • I have written a Monad thing in C#. This Monad has a private field that I have no access to, but I can change it! :O And that's the interesting thing about it Tongue Out. Once I have pushed data into it, there is no way out of it. But I can still modify the data in the Monad. First let's look at the stuff:

    public class Monad<T> : IMonad<T>
    {
       private Monad (T inner) { this.__InnerHolder = inner; }
       private Monad () { }

       private readonly T __InnerHolder;
       private T Inner { get { return __InnerHolder; } }

       public static IMonad<TInner> Unit<TInner> (TInner inner)
       {
          return new Monad<TInner> (inner);
       }

       public static Func<A, IMonad<R>> Lift<A, R> (Func<A, R> fun)
       {
          if (fun.IsNull ())
             throw new ArgumentNullException ("fun", "'fun' can not be null.");
          return (a => Unit<R> (fun (a)));
       }

       public static IMonad<B> Bind<A, B> (IMonad<A> a, Func<A, IMonad<B>> fun)
       {
          if (fun.IsNull ())
             throw new ArgumentNullException ("fun", "'fun' can not be null.");
          return Unit<B> (fun (a.Monad.Inner).Monad.Inner);
       }

       #region IMonad<T> Members
       Monad<T> IMonad<T>.Monad
       {
          get { return this; }
       }
       #endregion
    }

    public static partial class MonadExtra
    {
       public static IMonad<TInner> Unit<TInner> (this TInner inner)
       {
          return Monad<object>.Unit<TInner> (inner);
       }

       public static Func<A, IMonad<R>> Lift<A, R> (this Func<A, R> fun)
       {
          return Monad<object>.Lift<A, R> (fun);
       }

       public static IMonad<B> Bind<A, B> (this IMonad<A> a, Func<A, IMonad<B>> fun)
       {
          return Monad<object>.Bind<A, B> (a, fun);
       }
    }

    public interface IMonad<T>
    {
       Monad<T> Monad { get; }
    }

    Now back to work. First I push a string into my Monad, and at the end I want to have a byte collection in my Monad that represents the characters of my string. How about this:

    var var1 = "ABCD...MONAD!"
       .Unit ()
       .Bind ((string s) => s.ToCharArray ().Unit ())
       .Bind (ca => (from c in ca select Convert.ToByte (c)).Unit ());

    What is the type of var1? It is IMonad<IEnumerable<byte>>. And you see another interesting aspect of the Monad here: composability! We have three different types for the type variable of our Monad here, but as long as we are IN the Monad we can compose any kind of computations.

    Now assume we have two Monads and we want to apply a function to the inner value of both. How can I pull out the values from these Monads to perform my function application? I can not pull out the inner part of a Monad, but I can still use their values in my computation by using closures! For example, I have two Monads that contain numbers and I want the product of them:

    var ma = 10.Unit ();
    var mb = 20.Unit ();

    var mr = mb
       .Bind (x => ma.Bind (y => (x * y).ToString ().Unit ()));


    As you can see (actually as I can see! Because writing this code makes me feel a lot better about Monads, as an imperative programmer, yet there are a lot of possible improvements to my Monad set of utilities), anything that we do in normal imperative code is possible by using this Monad. What is the good part? You can not change the Monad anywhere outside of the Monad. So the state of your whole program that is outside of the Monad is untouched. Another thing is that the Monad is immutable. Every action on a Monad produces a new Monad of that kind.
    Consider a data structure that has Monads as members. You can not modify these members, even if the data structure itself is not a Monad! See this little bit of it:

    var nv = new { a = 10.Unit (), b = 20.Unit () };
    var mnv = nv.Unit ();
    mnv = mnv
       .Bind (j => new {
                 a = j.a.Bind (k => (k * 2).Unit ()),
                 b = j.b.Bind (k => (k * 2).Unit ()) }.Unit ());

    It was very interesting to me that by putting a Monad of this type as a member of the root class of a hierarchy, that hierarchy of classes became immutable! (At least for changing that member!)



     

  • Very nice, indeed, Kaveh!
  • Brian,

    I very much agree with your observation that bind is not a very good choice when you want to explain the nature of the monad laws (i.e., the monoidal structure).  Instead (as you show), it's the Kleisli composition that makes it obvious.

    I wish more people would use your line of explanation since it makes the monad laws look very natural.

    BTW, I think strings are a good example of a monoid that everyone has encountered.

      -- Lennart

  • brianbec wrote:
    Very nice, indeed, Kaveh!

    Thank you! I have been more than happy with your comment!
    I've just discovered a bad thing in my code: using closures! I discovered this when using a LINQ query against a set of objects (a library for handling a directory tree of my html templates for some work, extracting their content and publishing via a HttpHandler). I had changed one of the objects placed inside the where clause somewhere (in another business class; of course that was a bug, but what if it wasn't?), and the result was not what I expected!
    So I wrote another method for combining two monads and getting a third one, named Transform (I do not know if this name is proper; I am a "thin client" of Haskell and Monads!).
    It now seems a better solution for composing two (or more) monads. It seems to me closures are a bad thing in this context, because they cross the boundaries without leaving any footprints.
    (Code is here.)
    Cheers!
  • orcmid Orcmid as Apparition
    brianbec wrote:
    Here's another little twist ...
    Eager evaluation is, in fact, a side effect!  Think about it: eager evaluation means that you KNOW function arguments get evaluated BEFORE the function gets called. So, a PURE functional language must be a lazy language (more technically, "non-strict," and sorry for the confusing lingo: STRICT means non-lazy).


    BZZT.  My non-sequitur detector went off right here, Brian.  I think this is over-reaching, as is the later comment about what laziness provides.  I'm not completely convinced that the distinction is merely a matter of taste, but I'm also not persuaded that a "pure functional" notion that abandons the well-definedness condition of common mathematics is an appropriate alternative.
  • orcmid wrote:
    
    brianbec wrote:
    Here's another little twist ...
    Eager evaluation is, in fact, a side effect!  Think about it: eager evaluation means that you KNOW function arguments get evaluated BEFORE the function gets called. So, a PURE functional language must be a lazy language (more technically, "non-strict," and sorry for the confusing lingo: STRICT means non-lazy).


    BZZT.  My non-sequitur detector went off right here, Brian.  I think this is over-reaching, as is the later comment about what laziness provides.  I'm not completely convinced that the distinction is merely a matter of taste, but I'm also not persuaded that a "pure functional" notion that abandons the well-definedness condition of common mathematics is an appropriate alternative.


    Yup, I was wrong about this. When I tried to prove it, I just ended up with the State Monad. I WAS WRONG !!!! Smiley
  • Brian writes on the board:

      1. g: a -> Mb
      2. f: b -> Mc
      3. \a -> (g a) >>= \b -> (f b)

    but it seems to me that one of two things must be true, either:

    The "->" notation means something different in lines 1 and 2 than it does in line 3.  In line 1 and 2, "g is a thing (a function) with a type such that it takes a thing of type a and returns a thing of type Mb".  In line 3, I don't see how this same interpretation pattern fits over line 3, even with the grouping "\a -> [(g a) >>= \b -> (f b)]".

     OR, and more likely because I doubt he wouldn't have noticed:

    They do mean the same thing but that there is some even more advanced meaning of the "->" operator, probably something to do with meta-types or kinds or something that applies correctly to each of the lines when understood correctly.

    Or, maybe the question is: what is the Bind or Shove operator doing?  Is it shedding some metadata surrounding b or c in kinds Mb or Mc and returning the b or c type?  Is it de-generalizing a generic Mb or Mc kind?

    Sorry, I went to public school but I'm really trying Embarassed.

    Maybe listing out how this functional or F# syntax is to be read would help as well.  E.g., "g: a -> a" is read as "g is a thing, a function, which takes a thing and returns a thing".  Even that may be incorrect.  Stay as general as possible.

  • Hey Brian, great chalk-talk!  When are you going to take it to the practical with F# work-flows?!?
