Erik Meijer: Functional Programming


I caught up with the venerable Erik Meijer recently to discuss functional programming languages, academic versus real-world application of functional programming languages (there are very good reasons why people don't program Windows applications in, say, Haskell...) and, well, Erik. Why does Erik work here? What's his history? 

Of course, as an academic type (he also ships software so he's not purely academic...), Erik is a big fan of strict functional or pure functional, as it were.

This is a deep interview and Erik spends most of the time on the whiteboard.

Enjoy. There's lots to learn in this one...

For those who are bandwidth challenged, here's a low res version for download.

Follow the Discussion

  • Oh man I've been waiting for this since Brian's video on Monads. Awesome! Big Smile
  • This doesn't make any sense at all.

    If being truly "pure" means always returning the same value, why don't we just replace our code with nothing but a bunch of constants?

    We could then go around saying I write constants instead of I write code.

    I don't understand the thinking that in order to be truly pure, it has to return the same value.  Why can't it be pure if it returns the data type specified?  I.e., if I say I'm going to pass you back a string, then you can know for a fact you are going to get a string returned. 

    Why do we care what the actual value is?

    Why would it make sense to have to specify not only the data type returned, but also that it's not a constant value by placing <IO> in front of it?

    Also, what's up with the Let?  Why can't we just say x = "whatever"?

    I think it would be better if you showed a sample application where this type of functional programming language shines.

    When we don't want our "world" to change, why doesn't it make more sense to use the concept of Interfaces when needed?

    I think the problem with the concept of purity and functional programming is that it's trying to force programmers into a box of how mathematicians think.    That logic falls apart because the need for a programming language is not simply to solve mathematical equations.

    To step back and keep it simple, rather than talking about how it works:

    What problems does it solve?

    How does it make programming or the end-result better?

  • Let me answer my own question here a little bit.

    Where I think this would be useful is as a feature in an existing language.

    For example, if you could create a certain type with a certain state.  That way, if you have that special type you also know it has a certain state, or a certain value.  It's (somewhat) the equivalent of what Erik said in his video about knowing you're going to get a certain type and a certain value - though really I mean state.

    This would be beneficial in the case of a string, for example.  Let's say you create a new stringempty type, where whenever you get this value back you know it will never be nothing.

    Why would this matter?

    It would eliminate all of the exceptions for people who don't check if string is nothing before trying to call the Trim() method on it for example.

    Imagine how many exceptions this would eliminate and how many lines of code it would eliminate.

    The bottom line is I don't care if a string is nothing, I just want to get the value, trim it, and see if it's equal or not equal to a certain value.
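
    A minimal sketch of what such a type could look like in C# (the SafeString name and shape are hypothetical, purely to illustrate the idea):

    using System;

    // Hypothetical wrapper: a string-like value that is never "nothing",
    // so callers can Trim() without a null check.
    public struct SafeString
    {
        private readonly string raw;

        public SafeString(string s) { raw = s ?? ""; }

        // A default(SafeString) still has raw == null, so guard here too.
        public string Value { get { return raw ?? ""; } }

        public SafeString Trim() { return new SafeString(Value.Trim()); }
    }

    With something like this, new SafeString(null).Trim().Value is simply "" instead of a NullReferenceException.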

  • Why functional programming? See Joe Duffy's video on ParallelFx here on C9 and Simon Peyton Jones' papers on Software Transactional Memory on the MS Research site for a couple of approaches to improving the concurrency of applications in this multi-core age we live in.
  • I've watched all of the related videos.

    However, I'm assuming that you reference reading other material because you yourself don't truly understand the why.

    If you did, you could have explained it.
  • FP is a toolset that makes some problems easier to tackle and some problems harder to tackle.

    You'll get better at your craft as you take the time to learn your tools, what they're good for and what they aren't.

    I think what is often taken for granted and omitted in these conversations is which real-world scenarios certain techniques shine in.

    Here are some places where functional programming concepts are put to good use:

    ++++++++++++++++++++++++++++++++++++++++++++++++
    A nice use of Monads to model async queries
    http://www.aboutcode.net/2008/01/14/Async+WebRequest+Using+LINQ+Syntax.aspx
    Based on
    http://blogs.msdn.com/wesdyer/archive/2008/01/11/the-marvels-of-monads.aspx

    Throw in a Maybe Monad or an Exception Monad and you could get robust error handling in there too.
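
    (For the curious: a Maybe monad is only a few lines of C#. A minimal sketch - the names are illustrative, not a library API:)

    using System;

    public class Maybe<T>
    {
        public static readonly Maybe<T> Nothing = new Maybe<T>();
        public readonly bool HasValue;
        public readonly T Value;

        private Maybe() { }
        public Maybe(T value) { Value = value; HasValue = true; }

        // SelectMany is the monadic "bind": run the next step only if
        // there is a value, otherwise short-circuit - which is where the
        // robust error handling comes from.
        public Maybe<U> SelectMany<U>(Func<T, Maybe<U>> next)
        {
            return HasValue ? next(Value) : Maybe<U>.Nothing;
        }
    }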

    ++++++++++++++++++++++++++++++++++++++++++++++++
    Benefits of, and a series on, immutability in C#
    http://blogs.msdn.com/ericlippert/archive/2007/11/13/immutability-in-c-part-one-kinds-of-immutability.aspx

    BTW, System.String is immutable, and that alone removes so many headaches compared to dealing with LPCTSTR or even CString

    ++++++++++++++++++++++++++++++++++++++++++++++++
    The ability to clearly describe and implement certain kinds of computations. (The Monads sample counts toward this)
    http://notes-on-haskell.blogspot.com/2007_12_01_archive.html

    Take a look around and experiment a bit with languages. You don't have to use them but you may be surprised at what they can bring to the table.
  • Charles (Welcome Change)

    Excellent points, ifeanyie.

    C

  • Ifeanyie,

    Of course.  I would have said the same thing.

    However, a couple of things:

    1) The samples you gave used object-oriented languages, not functional programming languages exclusively.

    2) I've watched all of the C9 videos on these functional programming languages.  I only remember one referring to a real-world scenario, which was in telecommunications.  I even watched a video on YouTube from when the interviewee was younger, where the demo showed them crashing the system and only one process or thread would die without bringing down the whole system, and how they could make changes live to the system and it would only affect new processes, etc.

    However, that really didn't relate back to the concepts of "purity", etc., or to why someone should use functional programming languages.

    I'm not approaching this with an argument of why not to use functional programming.

    Rather, I'm simply asking someone to explain the reasoning behind FP in a "KISS" way: why is it better than an existing programming language?

    Or maybe it's not meant for real-world use in today's world.  Maybe it's more theory, and bits and pieces are taken to enhance existing languages.

    I'm just looking for clarity on this subject.

  • MetaGunny, it matters because purity is easier to reason about, and it's also easier for the compiler to process (if you want the compiler to help you sort out parallelism and concurrency, it almost always needs your code to be pure -- if the language is pure the compiler can guarantee that this is the case).
    For example, if you have a function that writes to a global variable when you call it, then you cannot safely perform 100 tasks simultaneously that all use this function - the side effects mean that sequencing suddenly matters. And even if sequencing doesn't matter, it's very likely that you'll need to protect the global variable with a lock, which destroys parallelism because of contention.
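
    In C# terms the difference looks something like this (a small sketch; the names are made up):

    static class Counter
    {
        static int state;  // hidden shared mutable state

        // Impure: reads and writes the global, so 100 simultaneous
        // callers race on 'state' (or serialize on a lock around it).
        public static int Next() { state = state + 1; return state; }

        // Pure: the result depends only on the argument, so any number
        // of calls can safely run at the same time.
        public static int NextPure(int current) { return current + 1; }
    }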

    If you instead write your algorithms in a functional style, then not only is it easier to reason about (e.g. if you ever see that a function returns the wrong results given a set of input parameters, then you already have all the information you need to fix the bug, whereas in C++ you may have hours of "quality time" with the debugger before you're even able to figure out exactly what series of events led to an inconsistent state), it's also more explicit, whereas with side effects everywhere you have all these hidden data flows that can mess up your program but are very difficult to find.
    I work at a games company, so I do use C++ every day, and I would estimate that probably 80%+ or so of all bugs I encounter simply could not happen in a functional language because you'd just write things differently.

    Basically the issue I find is that when you write imperative code where you update state, you need to keep a sort of "mental log" of what your code does as you read it. So at any point in the code you have to keep track of what happened before and go "Okay, so if I end up in this branch then X is not null, and Y is greater than zero" etc. etc. It just fills up your mental resources with book-keeping all the state changes you've done, to make sure you're doing the right thing at the right time. In a purely functional world all that disappears. All you have is input and output. You combine the input into the output somehow, but you don't have to keep track of N previous operations to do it correctly; you only have to worry about inputs and outputs. You can feel how many more "mental resources" you have to spend on the actual problem, as opposed to book-keeping.

    Also, in an imperative programming language the "meaning" in a program consists of both "values" and "statements". The values can be statically typechecked, but statements cannot. In other words, if you swap parameters around in a function call, chances are good that the compiler will tell you about it, but if you swap the order of two statements around, chances are good that it will keep compiling happily (and chances are high that the program will not do the right thing anymore). In a purely functional language the meaning is 100% values, so everything gets statically checked. It may not always be enough to ensure no bugs get through, but I think you'd be surprised at how often Haskell programs "just work" when you compile them, even after large changes (which almost never happens in C++!).

    Finally the merits of being able to split effects up into separate monads is notable for example in STM. With STM the "readTVar" etc. operations are all in the "STM" monad (not "IO"), in that monad you can only read and write transactional variables (no other effects). This means that the compiler can statically guarantee that no irrevocable effects happen inside a transaction (which is dangerous because a transaction can be preempted and have to be re-run).
  • Thanks Sylvan.  That clarified it for me. 

    Although I'm sure there are other benefits, I'm presuming functional programming is being talked about more and coming back because of multi-core.

    So, the question is, is this all theory or is there a functional programming language I could start using today and write a massively parallel program in easily?

  • MetaGunny wrote:
    

    Thanks Sylvan.  That clarified it for me. 

    Although I'm sure there are other benefits, I'm presuming functional programming is being talked about more and coming back because of multi-core.

    Yes that's probably true.
    I guess languages always strive to improve productivity and correctness, and I personally think that functional programming provides enough benefits in these areas. I think Ericsson, from that telecom stuff you mentioned, measured something like a 20x productivity boost compared to C++ (they switched to Erlang when their C++ implementation collapsed under its own complexity, IIRC).

    But yes, it seems like the "killer app" that may finally convince the last remaining skeptics is the observation that it's immensely more suited to parallel and concurrent programming.

    MetaGunny wrote:
    

    So, the question is, is this all theory or is there a functional programming language I could start using today and write a massively parallel program in easily?



    Yes, Haskell for sure. Download GHC at www.haskell.org/ghc
    It has parallelism annotations (stick a "par" in the code and it will evaluate it in parallel), STM for the shared state concurrency, lightweight threads (you can easily kick off thousands of them - they're almost free and get mapped down onto actual OS threads by the runtime), and it's in the process of getting nested data parallelism; there's experimental code already in GHC (essentially you specify the parallelism on the data structures rather than on the control flow - this is pretty much the most fine-grained parallelism you can get, e.g. it's what games use to process vertices and pixels, but Haskell's version will support nested data structures and not just flat arrays, and it will work for arbitrary code, not just shaders -- NDP requires purity btw, otherwise all the flattening and fusion transformations that the compiler needs to do wouldn't work).

    They mention it as "useless" but I think Erik got sidetracked from the original graph and meant to show (like Simon PJ did in the interview they referred to) how Haskell over the last 20 years has moved "up" into the "useful" area by starting with the pure and elegant (but useless) and gradually shaping it into something that's actually useful. So now we have the pure and elegant foundation, but it's a fully general-purpose language that you can use to write real applications (with GUIs, networking, database access, C interop and whatever you can imagine).
  • Very interesting.  Thanks Sylvan.
  • Charles (Welcome Change)

    And don't forget about F#. It's a fantastic language that offers a huge amount of usefulness! Check it out.

    C

  • RichardRudek (So what do you expect for nothin'... :P)
    That was an excellent video.

    As an old-school, impure programmer, I especially liked the way you cleared up what is meant by functional - in my old-school (C++) ways, I kept thinking 'volatile results'... Smiley

    One could argue, though, that your starting premise is what is wrong with regards to the 'contract' each function call has, at least in terms of adding 'functional' features to an imperative language. And I suspect Charles may have thought that as well, which is why I suspect he asked about whether it was worthwhile to be able to 'flag' a call/routine as being 'functional' - f(x) called with the same 'x' returning the same result. Always. That is, we know our (default) implementation of an (imperative) function (routine) is not pure, but look here, this function (routine) is meant to be pure, so please beef up the (static?) analysis, and tell me if this gets broken.


    Anyway, thanks Erik and Charles. Excellent... [A]
  • MetaGunny wrote:
    I've watched all of the related videos.

    However, I'm assuming that you reference reading other material because you yourself don't truly understand the why.

    If you did, you could have explained it.

    No, I referenced them, because they explain some of the benefits of pure functional programming where the minimization of shared mutable state can enable opportunities to simplify the construction of highly concurrent software.

    Not sure why I wasted my time though. You're rude.

  • It's interesting, I think, to observe just how much 'push back' / hostility there can be to functional programming from the old-school / career-predicated object-oriented folks. I guess this is exactly what happened when OO was new and folks thought that C++ would never be a viable alternative to C, or, going back further, doubted that HLLs would ever be a better choice than pure assembly. Of course it's ironic that FP has been around since pretty much the dawn of computing.

    When trying to convey quickly to folks at my company why I think FP + the .NET Framework is such an incredibly powerful combination, I usually direct them to read Erik's foreword to Don Syme's new 'Expert F#' book. Since I probably shouldn't quote it verbatim here, I can just encourage folks to seek out Erik's foreword - it's beautifully succinct and forms a very compelling argument.

  • Charles, I really wish you had brought up Spec# and asked Erik how that might play into the honest/dishonest comparison. If you used Spec# contracts, couldn't the compiler guarantee honesty so that you could have a mixture of honest and dishonest functions by examining the contracts?

    The thing that left me a little confused is that for the last two years we have been hearing how Orcas is bringing functional programming concepts into C#. Maybe I misunderstood Erik, but it sounds like that is not true and seemed to contradict some of the past C9 videos that were promoting the products.

    This was a great interview though and was very stimulating....classic Channel 9.
  • Charles (Welcome Change)
    JAlexandrian wrote:
    Charles, I really wish you had brought up Spec# and asked Erik how that might play into the honest/dishonest comparison. If you used Spec# contracts, couldn't the compiler guarantee honesty so that you could have a mixture of honest and dishonest functions by examining the contracts?

    The thing that left me a little confused is that for the last two years we have been hearing how Orcas is bringing functional programming concepts into C#. Maybe I misunderstood Erik, but it sounds like that is not true and seemed to contradict some of the past C9 videos that were promoting the products.

    This was a great interview though and was very stimulating....classic Channel 9.


    LINQ is implemented as a monad (a functional construct). Lambda expressions and closures are functional constructs that are first-class citizens in C#, an imperative language. We are not advocating that you throw away tools. Rather, add new ones to your toolbox. C# will continue to evolve, as will VB.NET and C++. F#, the newest member of the .NET family of languages, is a really powerful functional language that offers all of the benefits of functional plus full-force .NET functionality.
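
    For instance, a closure and a couple of lambdas composed with LINQ (a small illustrative sketch):

    using System;
    using System.Linq;

    int threshold = 3;
    Func<int, bool> bigEnough = n => n > threshold;  // a closure: captures 'threshold'

    var squares = new[] { 1, 2, 3, 4, 5 }
        .Where(bigEnough)     // functional-style composition...
        .Select(n => n * n);  // ...built from lambda expressions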

    The multi-core problem is not going to be solved by requiring developers to become functional programmers. The Parallel Computing Platform, for example, will be equally accessible from any CLI-based language. Parallel Extensions for .NET are implemented as a managed library, and there is a lot more going on in the Developer Division that you haven't heard about yet... Stay tuned to Channel 9 for the details.

    Again, functional programming is one way of writing software and it does not have to replace the ways you are already comfortable with as we head into the highly concurrent future (so, C# will not become a functional language even though it employs mechanisms that are functional in nature...).

    The more tools you have in your toolbox the more you can do.

    Exciting times indeed!

    Keep on learning,

    C
  • From the video, the main point seems to be that there are pure and impure functions. A pure function will give you the same result each time when you pass it the same arguments. An impure function will not.

    Now it seems to me that there cannot be such a thing as a pure function, because every function takes some time to be executed, so every function has as a side effect that time has passed, or ticks increased. So if you use Ticks anywhere in your program, as in the example in the clip, all functions would instantly become impure.


    I liked the distinction between honest and dishonest functions. An honest function gives you an indication of whether or not it changes anything other than the parameters passed, while a dishonest one does not.

    What I liked most though was the explanation of how lambda functions are just a shorthand notation for a kind of class. That made lambda functions a lot easier to understand.

    It's a bit strange though that instead of getting to see a lot of the benefits that you can get from using pure functions, about 40 minutes of the video were actually about the disadvantages and why only having pure functions doesn't work.

    Now I'm not sure if this is the right place to ask questions, but I wonder why functions still give a result, when by convention any result given can also just be an out parameter of the function. It's easy to see that y = f(x) is the same as f(x, out y). However, taking C# for example, there are a lot of things I could do in the second form that aren't possible in the first form. Like having multiple out parameters f(x, out y, out z). Or even overloading the function with a different type for the out parameter: f(int x, out int y) and f(int x, out string y). 
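
    Something like the following, sketched in C# (the names are just for illustration):

    // y = f(x) rewritten in "out" style:
    static void f(int x, out int y) { y = x * 2; }

    // Multiple outputs become possible...
    static void g(int x, out int y, out int z) { y = x + 1; z = x - 1; }

    // ...as does overloading on the type of the out parameter:
    static void f(int x, out string y) { y = x.ToString(); }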

    As a second pondering: switching between functional and object-oriented is just passing the object data as the first parameter to the object function. So all object-oriented programming is also functional programming. It's just grouped together with a bit more intuitive syntax.

    ps: thank you for providing a smaller download, Channel 9 crew. With a download cap of 12 GB a month in this backwards country, it is really appreciated to be able to get those vids in smaller sizes.
  • (this is a response to people who are befuddled by purely functional programming)

    I can answer your questions with a thought experiment:

    If it's ok for a function to return arbitrary values when you call it with the same parameters, how does this affect the behaviour of other functions? Should 1+1 always = 2? Should Console.WriteLine("Foo") print "bar"? If functions don't behave only according to the parameters you pass in, how do you specify how the function should behave? In other words, how do you program? Further, how can you ever hope to reason about or debug code that changes behaviour arbitrarily?

    This is why current languages provide a mix of purely functional behaviour (like integer addition), and "impure" behaviour which is harder to reason about. The more "pure" the behaviour, the more predictable and deterministic the program is, which means it's easier to understand and debug, and you can gain a higher confidence in its correctness.

    I wrote a post about a number of advanced languages features which I think most people misunderstand or downplay the value of:

    http://higherlogics.blogspot.com/2007/10/on-importance-of-purity.html
  • Here are a couple of questions to consider:

    static class A
    {
        public static double f(double x) { return Math.Sin(x); }
    }

    static class B
    {
        static double phi;
        public static double f(double x)
        {
            var r = Math.Sin(x + phi);
            phi = Math.Sin(x);
            return r;
        }
    }

    What are the types of A.f and B.f? In .NET languages, both of them have type double -> double, but one of them is pure and the other one isn't, so their *types* say nothing about their purity.

    In Haskell, B.f couldn't be written that way, and would have to be in a state monad (one way or another Smiley), so its type would be
    double -> StateMonad double
    or some such, distinct from the type of A.f, which remains double -> double. Thus, the compiler can *help* you compose pure functions like A.f with side-effecting functions like B.f by preventing nonsensical combinations. A programming language that can't tell the difference between a side-effecting function and a non-side-effecting function can't help you avoid certain goofups.

    Side effects are available in pure functional programming, it's just that the side-effected stuff must be threaded through function arguments and return values, and Monads provide very solid abstractions for doing that.
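
    Even in C# you can spell that threading out by hand. Here is a pure reshaping of B.f from above, where the effect on phi shows up in the signature (a sketch using modern C# tuple returns, not Haskell's actual State monad):

    using System;

    static class PureB
    {
        // The caller passes the old phi in and gets the new phi back:
        // the "state" flows through arguments and return values only.
        public static (double result, double newPhi) f(double x, double phi)
        {
            return (Math.Sin(x + phi), Math.Sin(x));
        }
    }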

    It's a real pain, however, to have a beautiful, fully pure functional program, and then you want to plunk a Console.WriteLine down at the lower levels. You end up having to rewrite all the higher layers to be in the IO monad (unless you're an old hand and you foresaw the need Smiley). The mixed-mode languages, where the type system "fibs" to you about side-effecting functions having the same types as non-side-effecting functions, are real nice in the plunking department, precisely because they can't TELL that you've just converted your beautiful functional palace into a side-effecting frat house by plunking down a Console.WriteLine.

  • Charles (Welcome Change)

    Brian,

    In my humble opinion, that's one of the best posts I've ever read on Channel 9. Love the frat house analogy Smiley

    Thank you.

    C

  • Charles (Welcome Change)
    naasking,

    Great blog. Thanks for the link.
    C
  • evildictaitor (Devil's advocate)
    Brian is a genius, so you might find my example simpler to understand:

    static int doesSideEffects(string text, int a, int b){
      Console.Write(text);
      return a + b;
    }

    static void Main(){
      int result1, result2;

     // statement 1:
      result1 = doesSideEffects("first one!", 1, 2);
     // statement 2:
      result2 = doesSideEffects("second one!", 3, 4);

      Console.Write(result1 + result2);
    }


    Can we shuffle statements 1 and 2? Can we do them at the same time? The answer is no, because we must guarantee that the first statement's Console.Write() happens before the second one.

    static int noEffects(int a, int b){
      return a + b;
    }

    static void Main(){
      int result1, result2;

      // statement 1:
      result1 = noEffects(1, 2);
      // statement 2:
      result2 = noEffects(3, 4);

      Console.Write(result1 + result2);
    }


    Can we shuffle statements 1 and 2? Can we do them at the same time? The answer is yes, because result2 does not rely on result1, and the function call has no side effects, so it doesn't matter if result2 is computed before result1.
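
    To make the "at the same time" part concrete, here is a sketch using the Task API (which postdates this thread, so treat it as illustrative only):

    using System;
    using System.Threading.Tasks;

    static class Demo
    {
        static int noEffects(int a, int b) { return a + b; }

        static void Main()
        {
            // Safe only because noEffects is side-effect free and the
            // two results are independent of each other.
            Task<int> t1 = Task.Run(() => noEffects(1, 2));
            Task<int> t2 = Task.Run(() => noEffects(3, 4));
            Console.Write(t1.Result + t2.Result);
        }
    }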



    Some things that a pure function (or shall we say one that is constant with respect to its arguments) has, that a side-effecting function doesn't, include:

    Let A and B be one or more statements, which contain no side-effects.
    Let AB denote the instruction or set of instructions A followed by the instructions in B.

    1. AB has no side-effects.
    2. If B does not depend on A, then AB = BA
    3. If A is a function that takes arguments c1 ... cn, and c1... cn are known at the time of compilation, then the result of A can be fully determined at compile-time (i.e. never needs to be computed in the program, which makes your program faster).
    4. If B does not depend on A, then A and B can be executed on different threads at the same time.
    5. If A1 ... An are independent, non-side-effecting blocks, then loops such as

    int[] arr = new int[10000];
    for(int i=0;i<arr.Length;i++){
      arr[i] = noEffectButLongFunction(i);
    }

    Can be split across a number of threads without penalty.

    6. If A is a constructor which is fully disposed before it is ever passed to, or assigned from, a side-effecting function, it can be at least partly optimised away, e.g.

    class ComplexNumber {
      int re, im;
      [NoEffects]
      ComplexNumber(int re, int im){
        this.re = re; this.im = im;
      }
      [NoEffects]
      public static ComplexNumber operator +(ComplexNumber a, ComplexNumber b){
        return new ComplexNumber(a.re + b.re, a.im + b.im);
      }

      [NoEffects]
      public double Magnitude {
        // sqrt has no side-effects.
        get { return Math.Sqrt(re * re + im * im); }
      }
    }

    public static void Main(){
      double mag = (new ComplexNumber(1,0) + new ComplexNumber(4,3)).Magnitude;
    }

    What optimisations can we do to Main?

    Step 1: Let's make everything explicit:

    double mag = Operator+( new ComplexNumber(1,0), new ComplexNumber(4, 3) ).Magnitude;

    but because new ComplexNumber() is constant with respect to its arguments, we know at compile time the values of new ComplexNumber(1,0) and new ComplexNumber(4,3).

    but Operator+ is constant with respect to its arguments, so we know that its value can be fully determined at compile time, so we evaluate it to get

    new ComplexNumber(5, 3).Magnitude.

    But .Magnitude is constant with respect to its arguments, and we know ComplexNumber(5,3) at compile time, so we can evaluate it to be Math.Sqrt(34). But Math.Sqrt is constant w.r.t. its arguments too, so we can even evaluate that at compile time!

    So ultimately we have reduced a potentially costly expression like
    public static void Main(){
      double mag = (new ComplexNumber(1,0) + new ComplexNumber(4,3)).Magnitude;
    }
    to
    public static void Main(){
      double mag = 5.830952;
    }

    entirely at compile time, and thus your program will run faster, and more errors can be caught before you ever ship the program to your customers.

    And fewer errors for customers and more (provably correct) parallelized code means that your programs will be more reliable and faster. So to sum up, pure functions and functional programming techniques lead to more money and prestige for you. Tongue Out
  • Ooh, ooh, I really like the AB = BA observation ... it reminds me of group theory, where we consider commutators

    [AB] =def= AB - BA

    which are members of the group that *measure* the extent to which two other members of the group, namely A and B, do not commute. If you've solved Rubik's cube, you'll remember such commutators as one of the only two building blocks you need to solve it (the other being PQp =def= PQ(P^-1), the conjugator or "change-of-coordinates" gadget).

    So two statements are perhaps just mutually non-side-effecting if their commutator is zero. They may side-effect a whole bunch of other stuff, but they don't hurt each other.

    In solving the cube, the general strategy is to find some brute-force operation, say Q, that does something you want, but may side-effect the bejeebers out of other cubelets. First, get the cubelets you CAN'T afford to hurt out of the way of the bejeebers with another brute-force op, P, then do the Q you want, then put the cubelets you saved back with a p = (P^-1), undoing all the deleterious side effects!

    If we could UNDO side-effects in programs (Rollback, anyone?), then we could do all the mess we wanted (Q) so long as we got stuff out of the way first (P), then undid the mess (cleaned up the frat house, p=P^-1)

    The Time Warp operating system allowed us to undo side-effects by sending antimessages and canceling things that never should have been done. Hmmm, maybe there's a group-theory accounting of Time Warp available.
  • Charles (Welcome Change)
    brianbec wrote:

    So two statements are perhaps just mutually non-side-effecting if their commutator is zero.


    Can I use this as my email signature?Smiley
    C
  • This is really interesting stuff, Charles.
    I hope we can have a video with Anders Hejlsberg on things like this... and, further, any notion of what might be added to our toolbox in C# in the future...

    http://programming.reddit.com/info/64th1/comments/c02u9mb

    This one is for guys who are still trying to understand Monads..

    Thx Charles...
  • Christian Liensberger, littleguru (<3 Seattle)
    As said, and to sum it up: the deal here is to give the compiler hints so it can optimize the code for you... That's why side effects should be visible in the code.

    Nice video. Well done and good questions Smiley I hope you have more of these in the pipeline, Charles...
  • Charles (Welcome Change)
    gaurav.net wrote:
    This is really interesting stuff, Charles.
    I hope we can have a video with Anders Hejlsberg on things like this... and, further, any notion of what might be added to our toolbox in C# in the future...

    http://programming.reddit.com/info/64th1/comments/c02u9mb

    This one is for guys who are still trying to understand Monads..

    Thx Charles...


    Absolutely. It's important to showcase various perspectives on interesting topics in programming. I'd imagine Anders et al (including Erik) are busy at work innovating C# V.Next. When we can talk about what they're working on publicly, you can bet we will. This goes for VB.NET, C++ and F# as well. All of these great languages deserve deep-dive and compelling interviews! Certainly, the people who create them are unusually gifted in the art of language design and implementation and we want to showcase them on C9.

    C
  • evildictaitor (Devil's advocate)
    brianbec wrote:
    
    If we could UNDO side-effects in programs (Rollback, anyone?), then we could do all the mess we wanted (Q) so long as we got stuff out of the way first (P), then undid the mess (cleaned up the frat house, p=P^-1)


    Madness! Sheer madness! Sounds fun though. I suppose you could implement it by buffering all of the side-effects (store backups of mutated memory in objects, and buffer external requests like writes, etc.), although there are some side-effects that you would never be able to roll back:

    (Assume, for argument's sake, that atomic blocks are rolled back when any exception occurs):

    atomic {
      int i = readIntFromConsole();
      Console.Write(i / 0);
    }

    Naturally the (i / 0) would throw an error, but you can't roll back a user-interaction request without some more complicated memory-wiping device (illegal in the States, shucks). But in principle statements such as

    atomic {
      int i, j;
      i = 4; j = 0;
      Console.WriteLine("Writing a line");
      int k = i / j;  // throws
    }

    Could maybe be changed to capture the WriteLine... did anyone say LINQ?

    List<FutureFunc> atomicBlock = new List<FutureFunc>();
    try {
        int i, j;
        i = 4; j = 0;
        // this is buffered because we know Console.WriteLine() is a side-effecting function
        atomicBlock.Add( x => Console.WriteLine("Writing a line") );
        int k = i / j;  // throws before the buffered effects are committed

        // commit changes:
        foreach( FutureFunc f in atomicBlock ) {
           try {
             // this also assumes that f doesn't rely on certain things, or mutate
             // structures that we later rely on in non-side-effecting functions
             f.Invoke();
           } catch(Exception) {
             // it's more difficult to work out what to do with
             // exceptions if they happen here:
             throw;
           }
        }
    } catch(Exception) {
      // rethrow, or do something
      throw;
    }


    Generally a hard problem, methinks.
  • It would be great to see a C9 video on Spec# (I couldn't find one with a search; I apologize if one already exists). A compiler that could verify which functions had side-effects in your C# code and what they are could make C# more "honest." It seems to me that development of this project could solve some of the problems Erik mentioned in terms of unknown side-effects.

    Spec# Microsoft Research Project
    Great explanation of Spec# by Matthew Podwysocki
  • Just to be pedantic: I used physics lingo for the commutator

    [AB] = AB - BA

    but in group theory this must, of course, be written

    ABab,

    where a = A inverse, b = B inverse, and "- BA" = (BA) inverse = ab

    because we don't have the "-" in a group. Just mixing my metaphors all over the place.
  • That is exactly how Time Warp works.

    Every object has a local virtual time (LVT) clock that says what time it is "now," virtually and locally, of course.

    Every method call generates an "activation record" in the operating system, which is realized as a "message" M from a Sender object S, to a Receiver object R, from a virtual SendTime (ST), to a virtual ReceiveTime (RT).

    Messages also have an algebraic sign, plus or minus. Messages are always created and destroyed in + / - pairs. Everything else in the messages of a matching pair is bit-wise identical (optimize via hashes). More on creation and destruction in a bit.

    For each object O, the operating system maintains an InputQueue IQ of incoming messages, ordered by ReceiveTimes RTs; an OutputQueue OQ of sent messages, ordered by SendTimes STs; and a StateQueue SQ of instance-variable blocks, ordered by the time "just-before-which" the state is needed to process an input message, denoted "t minus epsilon"

    The operating system schedules execution by the "lowest-virtual-time-first" discipline. When a method is entered at time T, user code "wakes up" with an input message with RT = T and a state with T-epsilon. When the method returns, the OS saves a state at time T for later method calls. When O makes a method call to receiver object R, OS generates a message pair for the call, saves the negative copy in O's OQ for later potential cancelation, and sends the positive copy to the receiver R.

    When the positive copy of M arrives at R, there are two cases: either R is scheduled to run at a time later than M.RT (that is, R is sitting in the OS's scheduling Queue at a time later than local virtual "now" for the OS), or R is scheduled to run at some time earlier than M.RT. In case 1, OS ROLLS BACK object R to time M.RT (that is, reschedules R to run at the earlier time M.RT), and OS cancels (perhaps lazily) ALL the negative messages in R's output queue by sending them to their receivers, where they potentially annihilate their matching messages and cascade the rollbacks, canceling all deleterious side effects!

    The whole thing works like magic and gets REAL speedup (with lazy cancellation) on big networks and huge distributed apps like discrete-event simulations. It's also used inside game physics libraries to do optimistic collision response. I'd love a good excuse to resurrect Time Warp Smiley

    Search "Time Warp Operating System" and "Theory of Virtual Time."

    There is some subtlety about multiple method calls at the same receive time, but just pick any deterministic sub-ordering and the system will work, I promise.
  • Wow, our papers are kind of hard to find now (but they're still great reads, I promise Smiley)

    Jefferson, D. R. (1985). Virtual Time. ACM TOPLAS 7(3), pp. 404-425.
    Jefferson, D. R., Beckman, B., et al. (1987). The Time Warp Operating System. Operating Systems ...


    still searching, will post more
  • http://www.merl.com/papers/docs/TR2000-17.pdf

    for optimistic collision response
  • Charles (Welcome Change)
    JAlexandrian wrote:
    It would be great to see a C9 video on Spec# (I couldn't find one with a search; I apologize if one already exists). A compiler that could verify which functions had side-effects in your C# code and what they are could make C# more "honest." It seems to me that development of this project could solve some of the problems Erik mentioned in terms of unknown side-effects.

    Spec# Microsoft Research Project
    Great explanation of Spec# by Matthew Podwysocki


    spec# is very interesting. I will make this happen.

    Thanks for the suggestion.
    C
  • For some reason I can't click through any of these links, but this is a good list:

    Time Warp Operating System
    portal.acm.org/citation.cfm?id=37508

    The Status of the Time Warp Operating System
    portal.acm.org/citation.cfm?id=62392&coll=portal&dl=ACM

    Parallel Simulation Using the Time Warp Operating System (Reiher et al.)
    citeseer.ist.psu.edu/344314.html

    Providing Determinism in the Time Warp Operating System
    citeseer.ist.psu.edu/67112.html

    15.3 Time Warp
    www.netlib.org/utk/lsi/pcwLSI/text/node364.html

    Chancellor's Distinguished Fellows: David R. Jefferson (UC Irvine)
    www.lib.uci.edu/online/fellows/jefferson.html

    DBLP: David R. Jefferson
    www.informatik.uni-trier.de/~ley/db/indices/a-tree/j/Jefferson:David_R=.html

    Parallel discrete event simulation on a T9000 transputer network
    ieeexplore.ieee.org/iel3/3070/8690/00383681.pdf
  • evildictaitor (Devil's advocate)
    brianbec wrote:

      ...



    You, sir, are a godsend. On the downside, I will get nothing done over the next three months as I'll be spending all my time writing code to do this in practice.


    I have to admit I've been looking at compiler theory including optimisations, autoconcurrency and proofs of correctness over the past couple of weeks, and it's pretty astounding how deep this rabbit hole goes. It also means that when people say "how bad can a goto / unsafe block be?" I nearly die of a heart attack.

    One of the crazy notions I had was to label functions within the .NET framework that have side effects (such as Console.WriteLine) as side-effecting monads, and to label functions that are constant with respect to their arguments (such as Math.Sin) accordingly, in order to get some of these benefits out, but I had completely forgotten about you folks down in Cambridge. Seems you beat me to it.
  • Head.In.TheBox (Head In The Box)

    The great programming languages singularity

    It is interesting to see the "effects" of this video and the interesting discussions it generates. In particular it is fascinating to see people sharpening their understanding of what functional programming entails.

    Just as I like to eat as wide a variety of food as possible (for instance I drink three different kinds of tea at a time), I like to program in and learn about as many different languages as possible. For a while I was literally collecting every language definition I could lay my hands on. So, I am not taking any sides in this context, and neither am I reflecting the official Microsoft position on this topic. I am making some personal observations, pointing out the fundamental assumptions behind each possible interpretation of what functional programming means, thereby hopefully offering Channel 9 viewers the opportunity to learn and increase their understanding of the field of programming languages and type systems. 

    According to Graham Hutton (http://www.cs.nott.ac.uk/~gmh/book.html), functional programming is defined as: Broadly speaking, functional programming is a style of programming in which the basic method of computation is the application of functions to arguments. There are at least two ways to interpret this broad definition. First of all you can consider functional programming as a particular style of programming that values compositionality. The second way is to concentrate on the mathematical definition of function, which implies purity. In order to bring clarity it is often good to look at both sides of the coin, and just like in coding, to really understand something you need to explore the edges of an idea. This is what Seth Godin (http://sethgodin.typepad.com/) calls looking for the "purple cow", or the "free prize inside". I just love coins, both heads and tails.

    The Haskell interpretation of functional programming is programming with pure mathematical functions. The free prize inside Haskell is the observation that you can deal with effects by making all effects explicit in the type system and distinguishing pure values from side-effecting computations of values via monads. These computations are themselves values that can be combined using pure functions, which allows programmers to define their own "control structures". This is why Simon Peyton Jones calls Haskell the world's best imperative language (http://research.microsoft.com/~simonpj/papers/marktoberdorf/). Another way of looking at this is that Haskell enables Landin's "the next 700 programming languages" http://www.cs.utah.edu/~eeide/compilers/old/papers/p157-landin.pdf where you define a domain specific imperative sublanguage of effects (a monad plus non-proper morphisms) and glue together effectful computations using pure functions.

    In Haskell, the type-system keeps track of effects. However, as anyone that has programmed in Haskell has experienced, this purity comes at a high price. As soon as one sub-expression has an effect, every expression that depends on that expression has a potential effect as well, and the monadness percolates all the way up.

    The underlying glue language in Haskell is based on three principles: (a) abstraction, (b) parameterization, and (c) correspondence (http://people.cis.ksu.edu/~schmidt/text/info.html), which brings us to the functional programming style. In normal words, adhering to the functional programming style means allowing programmers to create closures of everything.

    The purple cow in languages that use the functional programming style without insisting on absolute purity, such as Lisp, Scheme, ML, OCaml, F#, Erlang, Scala, Algol68, Python, Ruby, Smalltalk, C#, Visual Basic, Java, ..., is that the compositional style allows for very concise programs, even in the presence of effects. As a "terminology purist", I like to refer to languages that embrace the functional style as “closure oriented” programming. Note that it is no coincidence that there is really just one major pure functional language and many more pragmatic closure-oriented languages.

    This pragmatism is invaluable in practice. It is much easier to deal with reality if you are pragmatic. For example, if you write a compiler in F#, you can generate fresh identifiers and still use a nice functional style, whereas in Haskell, because generating a fresh name is a side effect, you are forced to switch to a monadic style, forcing the programmer to explicitly order all effects. One of the problems is that Haskell lumps all effects together, so it is not possible to distinguish between "benign" effects that are commutative or idempotent, ... and real "bad" effects.

    While closure-oriented languages make certain effects implicit (such as evaluation, state, exception, threads, events) that leaves many other effects that need to be dealt with explicitly, for example non-determinism (the IEnumerable monad).  So we increasingly see that languages that use the functional programming style also factor out certain effects explicitly and provide a monadic sublanguage for creating effectful computations. Query comprehensions in C# and VB directly correspond to monad comprehensions in Haskell and are applicable way beyond just queries.
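
    Concretely, the C# comprehension below is the sequence (non-determinism) monad at work, and the compiler desugars it straight into SelectMany, the monadic bind (a small illustrative sketch):

    using System.Linq;

    // Every combination of x and y flows through, like a Haskell
    // list comprehension over a non-deterministic choice:
    var sums = from x in new[] { 1, 2, 3 }
               from y in new[] { 10, 20 }
               select x + y;   // 11, 21, 12, 22, 13, 23

    // ...which is rewritten by the compiler as:
    var same = new[] { 1, 2, 3 }
        .SelectMany(x => new[] { 10, 20 }, (x, y) => x + y);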

    F# in particular is a noteworthy language since in F# this trend of making effects explicit is used extremely effectively (pun intended) and elegantly. For example it is advantageous to make events explicit (http://blogs.msdn.com/dsyme/archive/2006/03/24/559582.aspx), or asynchronous workflows (http://blogs.msdn.com/dsyme/archive/2007/10/11/introducing-f-asynchronous-workflows.aspx), or computation expressions (http://blogs.msdn.com/dsyme/archive/2007/09/22/some-details-on-f-computation-expressions-aka-monadic-or-workflow-syntax.aspx), or query expressions (http://research.microsoft.com/fsharp/manual/lexyacc.aspx#Comprehensions). As I have said before, I am especially excited and proud that Microsoft has decided to productize F# (http://www.apress.com/book/view/1590598504).

    What is really great to see is that the pragmatic languages were influenced by Haskell, but created much more powerful comprehensions than Haskell to deal with real-world problems, which in turn has influenced Haskell to incorporate some of these innovations back into Haskell (http://homepages.inf.ed.ac.uk/wadler/topics/links.html#list-comp). This is a truly beautiful programming language singularity (http://flakenstein.net/lib/flake-singularity.pdf).

    These are great times for language designers and programmers alike!

    Erik Meijer aka Head In The Box

  • evildictaitor (Devil's advocate)
    It's pretty amazing how much you can prove about a program in a high-level language such as C# or Haskell, given that fundamentally it compiles to (or runs as) assembly code, which is in principle difficult to reason about.
  • Allan Lindqvist, aL_ (Kinect ftw)

    cool Smiley erik is awesome as usual Smiley

    i had hoped to hear a bit more about the good reasons why one doesn't write windows programs in haskell though.. that's the one thing that i think is making it hard for me to understand fp. in general it seems that fp is like the philosopher's stone, once you master it you can do anything and not only that, it will have free scalability and will be error proof to boot Smiley

    clearly this is not so and i don't think even the most hardcore fp buffs really think that, but i'm just trying to describe how it feels to me as a simple c#/java programmer Smiley

    so what are the reasons, according to the masters, why, say, haskell is not used for general purpose programming? charles, i know you asked the question but i think that could be a topic for a whole interview

    again, i'm not trying to debunk fp or question its usefulness, but i think a more well rounded discussion where both the pros and cons get equal time would be great Smiley to get discouraged about fp just because one tried to apply it in the wrong area would be a great shame, and i think part of the reason why i'm yet to understand monads fully (besides being slightly daft ;P) is because i have been trying to apply it to the wrong real world c# problems..

    i'd really really like to see more interviews that point out the cases where we (as in general purpose programmers) already use monads today without realizing it. from what i understand from blogs/other c9 vids, that happens quite a lot, i've just not connected the dots yet Sad

    also i'd like to see more of the monads-are-sort-of-generic-classes type examples Smiley (are they? if not, what are the differences/conditions?)

    thanks again charles and erik Smiley

  • aL_ wrote:
    

    cool erik is awesome as usual

    i had hoped to hear a bit more about the good reasons why one doesn't write windows programs in haskell though.. that's the one thing that i think is making it hard for me to understand fp. in general it seems that fp is like the philosopher's stone, once you master it you can do anything and not only that, it will have free scalability and will be error proof to boot

    clearly this is not so and i don't think even the most hardcore fp buffs really think that, but i'm just trying to describe how it feels to me as a simple c#/java programmer

    so what are the reasons, according to the masters, why, say, haskell is not used for general purpose programming? charles, i know you asked the question but i think that could be a topic for a whole interview

    again, i'm not trying to debunk fp or question its usefulness, but i think a more well rounded discussion where both the pros and cons get equal time would be great to get discouraged about fp just because one tried to apply it in the wrong area would be a great shame, and i think part of the reason why i'm yet to understand monads fully (besides being slightly daft ;P) is because i have been trying to apply it to the wrong real world c# problems..

    i'd really really like to see more interviews that point out the cases where we (as in general purpose programmers) already use monads today without realizing it. from what i understand from blogs/other c9 vids, that happens quite a lot, i've just not connected the dots yet

    also i'd like to see more of the monads-are-sort-of-generic-classes type examples (are they? if not, what are the differences/conditions?)

    thanks again charles and erik



    FP remains arcane and unpopular for a bunch of reasons: historical and pedagogical, mostly. The grandfather of FP, LISP, got a bad reputation for performance in the early days, and suffered further guilt-by-association with the failures of AI in the 80's. This guilt by association was completely undeserved, as the failures of AI had little to do with choice of programming language (I know of some underperforming AI programs written in FORTRAN).

    FORTRAN and C early-on got the reputation as being THE choice languages for the discriminating programmer who had REAL problems to solve and needed to squeeze performance out of the machines. Many miracles in science and engineering were enabled because of number-crunching FORTRAN programs and operating systems written in C (yes, Windows is a miracle of engineering, despite its warts and wrinkles).

    Can you name a famous program or system written in LISP, ML (any dialect), Scheme, Haskell, etc.? There are some great programs out there, but not any famous ones, and new programmers are attracted by fame and naturally gravitate to the languages of the famous programs. (BTW, there is a really famous LISP program: it's the scripting language in AutoCAD)

    Finally, for people who are not fully trained in mathematics or physics, FP feels weird and hard to learn, especially if they have lots of programming experience in imperative languages. To a mathematician or physicist, FP is the most natural thing in the world. We all get this "Oh, where has THIS been all my life" feeling when we encounter it. But to the business or systems programmer with no exposure even to LISP, it feels "inside-out." And it should, because FP starts with the math and SUBTRACTS abstraction where necessary to get to the machine. Imperative programming starts with the machine and ADDS abstraction up to the math.

    The situation is just like that of colors: there are additive colors (red, green, and blue) and subtractive colors (cyan, magenta, yellow). It's possible to make up any picture by composing each kind within its own system, but it's difficult to mix the two.

    The future of FP is very bright. It's more than possible, it's plain EASIER to write real-world, high-performance apps and systems code in FP, because you don't have to keep reinventing over and over and over again all the math you need to make the programs small, beautiful, and RIGHT. Programmers are becoming more mathematical by training, and they've already seen some of the ideas of FP in school by the time they hit the streets. And, finally, it won't be long before there are some famous programs on the street and FP will capture some of the glitter and spotlight. There are already some miraculous F# programs (F# is not Haskell, but you get the drift)
  • Allan Lindqvist, aL_ (Kinect ftw)
    brianbec wrote:


    FP remains arcane and unpopular for a bunch of reasons: historical and pedagogical, mostly. The grandfather of FP, LISP, got a bad reputation for performance in the early days, and suffered further guilt-by-association with the failures of AI in the 80's. This guilt by association was completely undeserved, as the failures of AI had little to do with choice of programming language (I know of some underperforming AI programs written in FORTRAN).

    FORTRAN and C early-on got the reputation as being THE choice languages for the discriminating programmer who had REAL problems to solve and needed to squeeze performance out of the machines. Many miracles in science and engineering were enabled because of number-crunching FORTRAN programs and operating systems written in C (yes, Windows is a miracle of engineering, despite its warts and wrinkles).

    Can you name a famous program or system written in LISP, ML (any dialect), Scheme, Haskell, etc.? There are some great programs out there, but not any famous ones, and new programmers are attracted by fame and naturally gravitate to the languages of the famous programs. (BTW, there is a really famous LISP program: it's the scripting language in AutoCAD)

    Finally, for people who are not fully trained in mathematics or physics, FP feels weird and hard to learn, especially if they have lots of programming experience in imperative languages. To a mathematician or physicist, FP is the most natural thing in the world. We all get this "Oh, where has THIS been all my life" feeling when we encounter it. But to the business or systems programmer with no exposure even to LISP, it feels "inside-out." And it should, because FP starts with the math and SUBTRACTS abstraction where necessary to get to the machine. Imperative programming starts with the machine and ADDS abstraction up to the math.

    The situation is just like that of colors: there are additive colors (red, green, and blue) and subtractive colors (cyan, magenta, yellow). It's possible to make up any picture by composing each kind within its own system, but it's difficult to mix the two.

    The future of FP is very bright. It's more than possible, it's plain EASIER to write real-world, high-performance apps and systems code in FP, because you don't have to keep reinventing over and over and over again all the math you need to make the programs small, beautiful, and RIGHT. Programmers are becoming more mathematical by training, and they've already seen some of the ideas of FP in school by the time they hit the streets. And, finally, it won't be long before there are some famous programs on the street and FP will capture some of the glitter and spotlight. There are already some miraculous F# programs (F# is not Haskell, but you get the drift)


    Thanks for the reply Smiley
    To answer your question: no, I can't off the top of my head name a popular program written in the languages you mention, but then again, I usually don't know what language a program is written in Smiley (I can guess it's C++/C# or Java, but I don't really know.)

    Again, I'm not trying to imply that FP is bad or something, but surely there must be areas where it's less useful than, say, imperative techniques? I think those areas are as important to highlight as the ones where FP is very useful, such as concurrency. Knowing the pros and the cons will make it easier for noobs such as me to understand FP, because we won't try to apply it in unnecessarily difficult situations. Smiley

    As a side note, again from a semi-noob in both areas, I find it slightly odd that both dynamic languages and FP languages are having a second coming at the same time. They seem to be at opposing ends of the spectrum somehow.

    Dynamic languages are, according to those who know, great because one is not "restricted" by the type system (like the one in Java/C#) and can do anything.
    But Haskell (as I understand it) has a much stronger type system, so for a dynamic guy, shouldn't that be even worse than, say, C#? :O

    And it goes both ways: if you're a Haskell guy and you think C#/Java is bad because of all the potential side effects, wouldn't, say, Ruby be even worse? In Ruby one doesn't even know what type you get back from a function, much less whether it has a side effect.

    And yet both these paradigms are on the rise... am I missing something? Perplexed What's a simple programmer to do? Smiley
  • evildictaitor - Devil's advocate
    aL_ wrote:
    
    Again, I'm not trying to imply that FP is bad or something, but surely there must be areas where it's less useful than, say, imperative techniques? I think those areas are as important to highlight as the ones where FP is very useful, such as concurrency. Knowing the pros and the cons will make it easier for noobs such as me to understand FP, because we won't try to apply it in unnecessarily difficult situations.


    Yes. It's worth pointing out that there are a number of areas in which, for the foreseeable future, functional programming techniques won't be playing a part.

    Basically, as Brian said, languages like C and Fortran start with the machine and add from there, whereas languages like Lisp and Haskell start with the math and subtract from there, so in applications that are very close to the machine, languages like C and C++ are going to remain more popular for a reasonably long time. Examples include the low-level operating system, device drivers, and the 3D rendering engine inside 3D programs (e.g. games).

    This being said, most tasks are problem-orientated rather than machine-orientated, and it is these types of problems that are often best solved in languages like Haskell and Lisp. Where problem-orientated programs turn into rich client applications, languages like F# bridge the final gap between solving the problem and allowing the user to use it.

    aL_ wrote:

    as a side note, again from a semi noob in both areas, i find it sligtly odd that both dynamic languages and FP languages are haveing a second coming at the same time. they seem to be at opposing ends of the spectrum somehow.

    dynamic languages are, according to to those who know, great because one is not "restricted" by the type system (like the one in java/c#) and can do anything.
    but say haskell, (as i understand it) has a whole lot stronger type system, so for a dynamic guy, shouldnt that be even worse  than say  c#?



    For someone like me who's worked quite a lot with C and C++, I know first-hand the problems that can happen when you pass something around which isn't what you thought it was, usually crashing the application at point of use. The type system in C#, Haskell, etc. counteracts this to a large extent - instead of crashing the application at point of use, it stops compilation at compile time, which is nearly always a better idea. If you have 100 errors in your program, wouldn't you prefer to see them before you ship your application, rather than only catching 99 of them in test?

    Essentially type-less languages such as PHP help you to make errors through their lack of type checking. Consider the code:

    echo "Twelve is " +  ("1" + "2");
    echo "Twelve is ". ("1" . "2" );

    Note that in PHP the dot '.' is the string concatenator, and + is a mathematical operator. In the first example (which happens remarkably often), I've accidentally used plus instead of dot. PHP doesn't mind, however, since it can convert a string to an integer silently. "Twelve is " doesn't look like an integer, so it becomes the value 0; "1" and "2" become 1 and 2 respectively; and although echo expects a string and (0 + 1 + 2) is an integer, it doesn't matter, because PHP can convert it straight back into the string "3"!

    If this little code snippet occurred hidden away somewhere, and I didn't test for it, I might never notice that I've used this incorrectly. Haskell would object to my using + with strings and echo with ints respectively, and would never have run the first time.
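    To make the contrast concrete, here's a rough Haskell analogue of the PHP snippet above (just a sketch with a made-up main; GHC's exact error wording will vary):

    -- The broken version is rejected at compile time:
    --   putStrLn ("Twelve is " + ("1" + "2"))
    -- GHC complains that String is not a numeric type, so this never runs.

    -- The corrected version, with explicit string concatenation (++):
    main :: IO ()
    main = putStrLn ("Twelve is " ++ ("1" ++ "2"))   -- prints "Twelve is 12"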

    Ultimately this means that type-less or mostly type-less languages are great for scripting small programs, but when it comes to proper applications of more than a couple of hundred lines, it becomes incredibly important to have strong type checking, to reduce the strain of testing and to catch as many of the obvious little errors as possible before you ship your application to users.

    aL_ wrote:

    and it goes both ways, if youre a haskell guy and you think c#/java is bad because of all the potential side effects, whouldnt say, ruby be even worse? in ruby one dont even know what type you get back from a function, much less if it has a side effect.

    and yet, both these paradigms are on the rise.. am i missing something?  whats a simple programmer to do


    True, although you have to be careful with Ruby. It's actually strongly typed behind-the-scenes, which helps catch a number of errors, even if you don't force the types at compile time - it can usually infer them from context.


    Basically, the benefits of FP and strong static typing come through most prominently in large, real-world apps. They catch silly mistakes and abstract things like concurrency so the programmer doesn't have to think about them.

    The languages where strong static typing is not commonly used are nearly invariably designed for small, quick scripts, such as Bash, or for webpages, such as PHP and JavaScript. One could argue that these uses are sufficiently small that the benefits of FP and static typing don't come through (although I would disagree).

    In all real-world applications there are errors (yes, even in Microsoft programs), but the number of these can be reduced by extensive testing (expensive) or by automatic detection, which is what static typing is designed to do. By enforcing this with strong static typing, you guarantee that the remaining errors are errors of logic or input, rather than silly errors such as forgetting what parameters a function takes or using the wrong operator.
  • aL_ wrote:
    

    and yet, both these paradigms are on the rise.. am i missing something?  whats a simple programmer to do



    Here's the doctor's prescription for you:

    In your daily work, use C#, VB, and F#. 

    Get, read, and work EVERY problem in Structure and Interpretation of Computer Programs, by Abelson and Sussman. (Use PLT Scheme to do the problems).

    When done with SICP, get, read, and work EVERY problem in Expert F# by Don Syme, et al. Use F#, obviously.

    When done with that, come back for a Haskell prescription. The stuff above should take you three years.
  • stevo_ - Human after all

    It took me a while to understand what was being expressed in the video... but I still think I managed to miss the usage concepts.

    Ok, so we have functions (methods?) that are more transparent about how they work; they provide an overview of how the return data is acquired. I don't think you said anything about whether exceptions are noted here (like, as was noted, with Java).

    So ok, great, we have a system where everything is 'honest' about its workings - why is that useful?

  • Charles - Welcome Change
    stevo_ wrote:
    

    [...]

    So ok, great, we have a system where everything is 'honest' about its workings - why is that useful?



    I'm glad you put the term honest in quotes. I don't think it's the precise way to express what Erik is talking about (then again, I'm not precisely qualified to debate Erik - he's a programming language expert; I'm not). Perhaps a clearer (and less contentious) way to think about this is in terms of surprise-capable (unexpectedly side-effectual) versus surprise-free (a priori side-effectual, or side-effect free, also known as 'useless') functions. So, if you call a 'pure' function, you can expect, and rely upon up front (prior to execution of said function - a priori), that there will be no invocations of Surprise that could break your algorithm or befuddle your critical operation. That's what Surprises are all about: you are not expecting them, and therefore you have no way to handle them, since you didn't write code that expects them. And code that calls your code is in the same boat, so the Surprise bubbles up through the system surprising callers, which is not very composable, right?

    Does this make sense?

    C
  • Charles wrote:
    

    [...]



    Charles speaks the truth Smiley

    I like to focus on the type system: if the type system cannot tell you the difference between a side-effecting function and a non-side-effecting function, then the type system isn't telling you "the whole truth." That is, it's not telling you everything about your program that it is possible, mathematically, to say.

    How about this:

    side-effecting function = useful function
    non-side-effecting function = useless function

    type system that can always tell the difference = wholly truthful
    type system that can't always tell the difference = partially truthful

    monadic function = useful function, helps a partially truthful type system to tell the whole truth.
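    To make that concrete in plain Haskell (the functions here are just examples, nothing from the video):

    -- Pure: same inputs, same output, no Surprises possible.
    add :: Int -> Int -> Int
    add x y = x + y

    -- Side-effecting: the IO in the type admits, up front, that the
    -- outside world is involved.
    promptLine :: IO String
    promptLine = do
        putStrLn "Name?"
        getLine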
  • aL_ wrote:


    dynamic languages are, according to to those who know, great because one is not "restricted" by the type system (like the one in java/c#) and can do anything.
    but say haskell, (as i understand it) has a whole lot stronger type system, so for a dynamic guy, shouldnt that be even worse  than say  c#?
    
     


    I think there is a lot of misunderstanding "in the air" about "dynamic typing." I think it's often confused with "weak typing." Scheme is a great example of a "dynamically typed" language that is also (mostly) "strongly typed."

    "Dynamically typed" just means that types are carried around with data and are checked at run time, not at compile time.
     
    "Strongly typed" just means that there are not a lot of implicit conversions going on. If you call a function that expects a double and you pass it a string, you will get an error. In a statically typed, strongly typed language, you will get the error at compile time. In a dynamically typed, strongly typed language, you will get the error at run time.

    Math.Sin("32.6") --> compile-time error if statically typed, strongly typed.

    Math.Sin("32.6") --> run-time error if dynamically typed, strongly typed.

    "Weakly typed" just means the compiler and run time will do lots of implicit conversions for you, so, it will convert your string to a double or your double to a string or what not. You may still get an error at run time, but the conversion routine will throw it rather than the ultimate function you're trying to call:

    Math.Sin("32.6") --> Math.Sin(StringToDouble("32.6")), rewritten by compiler if weakly typed, either static or dynamic.

    Math.Sin("ABC") --> Math.Sin(StringToDouble("ABC")), run-time error if weakly typed, static or dynamic (though some super dynamic languages might do the following (sketching):

    Math.Sin("ABC") -->
    Math.Sin (
        try { StringToDouble("ABC"); }
        except { try { AnyToDouble(LookupVariable("ABC")) } } )

    I think Javascript will do that.
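    Sketching the same cases in Haskell (statically typed, strongly typed; illustrative snippets only, not anything from the video):

    good :: Double
    good = sin (read "32.6")   -- fine: the conversion is explicit

    -- bad = sin "32.6"        -- compile-time error: a String is not
    --                         -- a floating-point number

    boom :: Double
    boom = sin (read "ABC")    -- compiles, but read fails at run time,
                               -- much like the weakly typed case above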



  • RichardRudek - So what do you expect for nothin'... :P
    stevo_ wrote:
    So ok, great, we have a system where everything is 'honest' about it's workings, why is that useful?


    It's useful because it not only allows more robust systems, but also various other "optimisations".

    This idea of being free of (bad) side effects permeates most of modern-day computing. For example, at the macro level, we have operating systems with protected memory, privilege levels, etc., which then allow robust multi-tasking, and so on.

    Moving to the micro level, we have instruction-level parallelism, which CPU designers (at the chip/transistor level) needed in order to improve throughput [1]. Things like superscalar execution, pipelining, out-of-order execution, etc., I believe, can all trace their roots back to this idea of operations free of (bad) side effects. Sure, they had to introduce a slew of peripheral hardware to cache or hide the various (good and bad) side effects, but they did it.

    Now we're moving into the age where they are trying (again) to do it at the programmer level. At least, that's my take on it... Wink


    [1] The root causes for this are many. But from my perspective, the primary cause is the unwillingness of the 'industry' to move to vastly faster memory sub-systems - orders of magnitude faster, in some cases. (And yes, it would be extremely expensive, which is why it's not being done.) So far, they've been able to hide the huge differences in throughput achievable between the various 'levels' in the memory hierarchy: register to register, register to L1 cache, register to L2 cache, etc.
  • William Stacey (staceyw) - Before C# there was darkness...
    brianbec wrote:
    
    I think there is a lot of misunderstanding "in the air" about "dynamic typing." [...] "Weakly typed" just means the compiler and run time will do lots of implicit conversions for you, so, it will convert your string to a double or your double to a string or what not. [...]



    It would be nice if we could write:

    DateTime dt = "1/1/08";
    int i = textbox1.Text;

    Naturally, we are just saving the manual Convert, which will throw anyway if the parsing fails, so I don't think this change would hurt.

    At a minimum, having operator extension methods would allow our programs to opt in to this.
  • William Stacey (staceyw) - Before C# there was darkness...
    Thanks Erik. This helped me understand more about FP and what you and BrianB have been talking about in the last few videos. I think this was the first time I heard clearly that "state" is needed and not bad. We "state" folks are not all in the weeds rubbing our gold rings. At the same time I am more confused (which is good):

    1) If we had "public void IO GetSum(1,2,3) {...}", does this really help? In a typical Windows app, most functions will now have IO tacked on. But does that make my app better/easier to understand? Moreover, I can still lie. I can say func Throws Exception1, but still throw Exception2. Or would the compiler prevent such madness? I am a baby on FP, but I can find precious few functions that would not need IO in my current WinForms app (btw, using Linq and a local DB for disconnected operation - thanks much!).

    2) If I declare all possible exceptions that a function "may" throw, a moderately complex function may have reams of ThrowsException attributes. By calling .NET libs which may call other libs, which may call Win32 APIs, you could end up with all the Exceptions in the world under that call tree. Not to mention some Exceptions we can't know in advance (i.e. OutOfMemory, ThreadInterrupt, ThreadAbort). So we are kinda forced to lie, given the imperfect information we have and external forces. How would this be handled, or did I miss the idea (which is probable)?

    3) I heard state was good and FP was good, and started to see some light on "IO". But I did not "glean" the solution path from the video. Do you see a clear path opening here?

    4) It seems like the Windows message pump was ahead of its time! Single threaded with a message queue (rock on, Erlang). Even with that "good" design, we still have challenges in that area (i.e. invokes, deadlocking ourselves, livelocks, etc.). As you say, given just a sliver of daylight, we can/will hang our proverbial selves.

    William

  • Allan Lindqvist (aL_) - Kinect ftw
    evildictaitor wrote:
    
    If you have 100 errors in your program, wouldn't you prefer to see them before you ship your application, rather than only catching 99 of them in test?


    I would prefer to find errors at compile time - but then I'm a static typing kind of guy Smiley I totally agree that it's better to find errors at compile time and not run time. But a lot of people prefer languages without any static checks at all. Are they "wrong"? Or are we (the static guys) "wrong"? Perhaps there is something in between? I just think that's an interesting discussion... (and one that can lead to infinite flame wars, I'm sure Tongue Out)

    evildictaitor wrote:
    
    True, although you have to be careful with Ruby. It's actually strongly typed behind-the-scenes, which helps catch a number of errors, even if you don't force the types at compile time - it can usually infer them from context.


    Alright Smiley I don't know very much about the internals of Ruby, but I meant dynamically (i.e. not statically) typed languages in general.

    evildictaitor wrote:
    
    One could argue that these uses are sufficiently small that the benefits of FP and static typing don't come through (although I would disagree).


    I do too. I vastly prefer static typing to dynamic.

    But my point was that a lot of people feel that dynamic typing is the way to go, and looking at the JAOO vids, for example, one gets the feeling that dynamically typed languages are coming back (as well as functional languages).
    The DLR, IronPython and IronRuby are hot stuff right now too Smiley For general computing too, it seems, if one listens to some of the guys at JAOO :)

    I'm not dissing FP or static typing at all - I'd pick static typing over dynamic any day of the week. I just think it's interesting that both types of programming "are the future" at the same time Smiley It'd be interesting to have an interview with the masters (Brian, Erik and Anders among others) to talk about that... or perhaps to hear some dynamic typing buff's view on, say, Haskell and F# Smiley

    But yeah, I share your feeling that dynamically typed languages are best suited for small programs/scripts, and statically typed languages are better for larger ones.
  • Allan Lindqvist (aL_) - Kinect ftw
    brianbec wrote:
    
    Here's the doctor's prescription for you:
    In your daily work, use C#, VB, and F#. 
    Get, read, and work EVERY problem in Structure and Interpretation of Computer Programs, by Abelson and Sussman. (Use PLT Scheme to do the problems).
    When done with SICP, get, read, and work EVERY problem in Expert F# by Don Syme, et al. Use F#, obviously
    When done with that, come back for a Haskell prescription. The stuff above should take you three years.


    So no Python, Ruby, Smalltalk, or indeed C, then? Smiley
  • Allan Lindqvist (aL_) - Kinect ftw
    brianbec wrote:
    
    I think there is a lot of misunderstanding "in the air" about "dynamic typing." I think it's often confused with "weak typing."


    True, true. I do know the difference between static/dynamic typing and strong/weak typing (believe it or not Tongue Out) even though I slipped up and used the wrong term in my post. My bad Smiley

    What I meant was that for a Haskell programmer, C# seems to have weak typing, because it's not visible in the types whether there are any side effects... but in, say (as you mention), JavaScript, VB (not VB.NET) and possibly C/C++, the types are even weaker - there you don't even know what you get back. True? (This is meant as an observation.)

    After some consideration, it seems to me that FP and dynamic typing don't really conflict with one another. I would simply get "can't implicitly convert IO<int> to int" errors at run time instead of compile time, right?

    But isn't it sort of the point of functional programming to prove the correctness of a program by using static analysis? (I'm sure the answer is "sort of", but I hope you get my point Smiley) Doesn't use of dynamic typing sort of negate that?

    That would make a very interesting interview Smiley

    Oh and btw, is it really impossible to be dishonest in Haskell (or F#, for that matter)?
    Can't you do something like:

    int DoSeeminglyPureStuff(int input) {
        // disregarding the input int and doing evil things instead
        IO<int> myIoInt = sourceOfImpureInt();
        return myIoInt.Value; // i.e. the "real/pure" int
    }

    (sorry, I don't know Haskell syntax)
    But perhaps the IO monad doesn't have something like "Value" to get the value inside the monad? Or is there something else stopping you from doing something like this?
  • aL_ wrote:
    

    But isn't it sort of the point of functional programming to prove the correctness of a program by using static analysis? (I'm sure the answer is "sort of", but I hope you get my point Smiley) Doesn't use of dynamic typing sort of negate that?


    My totally un-diplomatic answer is: Yes, it does, which is why dynamically typed languages will prove to be a dead end for anything but the simplest of shell scripting.
    The benefits of dynamically typed languages usually have nothing to do with the lack of static type checking (easy access to data structures, high level expression-based syntax, etc.). And almost all of those benefits can be had in statically typed languages too.

    Programs are getting more complex, not less, and anything that can go wrong, will eventually go wrong (Murphy's law). So the more potential crashes that the compiler can rule out entirely with 100% guarantees, the better.
    We still need testing, but testing is a poor substitute for proof (especially when those proofs can be had for a price as cheap as just running the type checker).

    Furthermore, types aren't a burden, they're a big help. Do people complain about the burden of drawing their OOP classes out in UML before starting to program? Types give the overall shape of a program (especially in FP - UML may be better when everything is based on state).

    We need more static checking, not less.

    </rant>

    aL_ wrote:
    
    Oh and btw, is it really impossible to be dishonest in Haskell (or F#, for that matter)? [...] But perhaps the IO monad doesn't have something like "Value" to get the value inside the monad? Or is there something else stopping you from doing something like this?


    Exactly, the IO monad simply doesn't have a way to get out of it*. It's "sticky" and infects anything it touches.
    However, there are other monads that do have functions that can "run" a monadic value. For example, the ST monad can work with mutable variables (only - no networking or disk writes, etc.), and once you've written your algorithm that requires mutable state, you pass it to "runST", which will give you back a pure value.
    The STM monad can deal with concurrent access to shared mutable state (only), and once you've written an STM action (a "transaction") you pass it to "atomically", which will run it atomically and return... an IO action! So you can get from ST to pure, and from STM to IO, and you can get from pure to any monad via the "return" function.

    This gives you a nicely "layered" program, where you have this thin layer of IO at the bottom that does your IO, and then some STM on top of that which handles your concurrency synchronisations, then the bulk of your program is pure, but some of those pure calls may use the ST monad for local encapsulated mutable state.
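    Here's a minimal sketch of that layering (using the standard Data.STRef and stm libraries; sumST and transfer are just names I've made up):

    import Control.Concurrent.STM (TVar, atomically, readTVar, writeTVar)
    import Control.Monad.ST (runST)
    import Data.STRef (modifySTRef, newSTRef, readSTRef)

    -- Pure from the outside: runST seals the local mutation in.
    sumST :: [Int] -> Int
    sumST xs = runST $ do
        ref <- newSTRef 0                         -- local mutable cell
        mapM_ (\x -> modifySTRef ref (+ x)) xs    -- mutate it freely
        readSTRef ref                             -- only the pure result escapes

    -- STM to IO: "atomically" runs the transaction as one atomic step.
    transfer :: TVar Int -> TVar Int -> Int -> IO ()
    transfer from to n = atomically $ do
        a <- readTVar from
        b <- readTVar to
        writeTVar from (a - n)
        writeTVar to (b + n)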


    * Actually there's unsafePerformIO, which lets you escape when you really need it, and when you really know what you're doing. If you do use it, you must absolutely guarantee that the thing you're wrapping actually does have pure semantics (and even then things can go wrong - so you really need to know what you're doing!).
  • Hi.

    A question for the monad experts (I don't know a lot about monads, but I want to learn):

    How are monads combined, and what would that look like in C#?

    Say, for instance, that I have an IO layer over a file system or a database that returns the IO monad, and I want to combine that with the maybe monad. Do I create a new monad that combines them, or do I wrap the Maybe monad on top of the IO monad?

    /Erik
  • If you are new to functional programming you might be interested in this tutorial:
    http://www.cs.utah.edu/~hal/docs/daume02yaht.pdf
    I found it very useful and interesting.
  • Erik_H wrote:
    
    Say, for instance, that I have an IO layer over a file system or a database that returns the IO monad, and I want to combine that with the maybe monad. Do I create a new monad that combines them, or do I wrap the Maybe monad on top of the IO monad?


    I'm not sure I can convert all of this into pseudo C# so I'll just use Haskell. Shouldn't be too hard to follow.

    In Haskell you have something called "monad transformers". A transformer is basically a monad which takes an "inner monad", and lets you access that inner monad using the function "lift". The convention is that the monad transformers' types end with "T".
    Sadly there is no MaybeT, but it can easily be defined ( http://www.haskell.org/haskellwiki/New_monads/MaybeT ).
    I'll use the ErrorT transformer instead.

    So a transformer for "errors" is called "ErrorT". It basically takes an error type, and an "inner monad", and allows you to call "throwError" to signal an error, and if anything signals an error the whole thing will signal an error.


    Anyway, here's the Haskell type when the error type is a string (like an error message).

    -- imports needed to run the examples below (from the mtl package)
    import Control.Monad (when)
    import Control.Monad.Error (ErrorT, catchError, runErrorT, throwError)
    import Control.Monad.Trans (lift)

    type ErrorIO a = ErrorT String IO a

    The last line declares a type synonym (for convenience) called ErrorIO with one type parameter, and says that it's equal to the error monad with an error of type String and IO as the inner monad. These transformers nest, so you can wrap another transformer around the ErrorIO monad to stack even more features on.


    And an example usage:

    test :: Bool -> ErrorIO Int
    test fail = do {
        lift $ putStrLn  "Hello!";
        when fail $ throwError "Error!!!";
        return 5;
    }


    Notice the "lift" in there which lets you get at the inner monad (IO in this case, which we use to print "Hello!" to he terminal).
    Normally when you do this you'd define shorthands for all the methods you want to use from the various inner monads so you don't have to use "lift" everywhere (which can get messy when you want to access a monad that's several layers deep since you have to apply "lift" several times). Or if you want easy access to any monadic actions in the inner monads you would define shorthand "lift" functions for each of the inner monads so you at least only have to use a single lift function ("liftIO", "liftState", etc.).

    And here's some exploring in GHCi (interactive GHC), which "runs" the monadic value returned by the function test. This simply "unwraps" the transformer and returns the inner monadic value (if there were no errors):

    *Main> runErrorT ( test True )
    Hello!
    Left "Error!!!"
    *Main> runErrorT ( test False )
    Hello!
    Right 5


    This just tries the "test" function in the two cases, the first one returns "Left" which signals that the ErrorT monad failed, and the error message is "Error!!!". The second one returns "Right" (memory rule: "Right means Correct") and the value that came out of the inner monad.

    *Main> :type runErrorT ( test False )
    runErrorT ( test False ) :: IO (Either String Int)


    This just shows the type inferred for the return value when running the outer error monad. As you can see it unwraps the outer monad, and returns an action of the type of the inner monad, that returns "Either" the error type (String) or the value type (Int).

    *Main> runErrorT ( test True `catchError` (\err -> do {lift $ print "error!"; return 0; }))
    Hello!
    "error!"
    Right 0


    This uses the "catchError" function to catch an error, which just prints the message "error!" on to the screen, and then returns 0 (without errors), and as you can see we get "Right 0" at the end. The stuff to the right of catchError is a lambda expression which takes the error, and returns a monadic expression that handles it (which may throw a new error).


  • Allan Lindqvist (aL_) - Kinect ftw
    sylvan wrote:
    
    My totally un-diplomatic answer is: Yes, it does, which is why dynamically typed languages will prove to be a dead end for anything but the simplest of shell scripting.
    ...
    We need more static checking, not less.


    That's exactly my feeling as well... In college we had a course in Smalltalk, and our professor happily exclaimed that "it's a great language, you can change anything! You can change everything from how strings behave to how the core of the runtime behaves", and I was like... what? How can you possibly test that? Or how could you possibly run anything not written (and remembered) by yourself, if some piece of code somewhere can change ToString to return strings backwards (just an example)?
    I got the answer that "of course you wouldn't do anything like that!" Well... let's hope so... perhaps nothing so extreme, but... still... Tongue Out
    I do think there is a place for dynamic languages in general-purpose programming, and I'm sure there are lessons to be learned there Smiley But as the sole language for large-scale programs? I don't know...

    Perhaps in C# 4.0 we'll have an "impure" (or pure) keyword that marks the purity of a return value Smiley I really do feel that the optimum is some kind of middle road between totally pure and impure...


    sylvan wrote:
    
    Exactly, the IO monad simply doesn't have a way to get out of it*. It's "sticky" and infects anything it touches.


    Cool Smiley Thanks for clearing that up Smiley It seems more and more like the built-in monads are what make Haskell programs practical, just like the .NET Framework and the Java framework make C# and Java practical Smiley I wonder if you could port those monads to C#... they would make a nice addition to the .NET Framework, I think Smiley (Perhaps they are already in there somewhere?)
  • Thanks, Sylvan.

    I'll see if I can express it in C# myself =).

    I had another (stupid?) idea:

    With all the talk about SOA, would it be possible to hide the complexity of a webservice call in a monad, so that you could combine functions across a network seamlessly?

    /Erik
    Erik, thanks for the great video. However, I have a question: you said that we can integrate FP with imperative programming by ignoring the monads (IO()) in the type. I think this is feasible, but you can't do it the other way around, right?

  • evildictaitor - Devil's advocate
    punkouter wrote:
    

    Erik, thanks for the great video. However, I have a question: you said that we can integrate FP with imperative programming by ignoring the monads (IO()) in the type. I think this is feasible, but you can't do it the other way around, right?



    You mean integrate imperative programming into functional programming by inserting a monad into the type signature? That's certainly feasible.

    For a good example of a functional language living alongside imperative programming, I'd really recommend having a bash at F# - it's all the power of C# with all the conciseness of Haskell.

    Although I'll never forgive Microsoft for having more people working on Haskell than Sun has working on Java Tongue Out
  • evildictaitor wrote:
    
    You mean integrate imperative programming into functional programming by inserting a monad into the type signature? That's certainly feasible. [...]


    Not exactly. I mean, if right now I have a solution which I built using C#, and then suppose I want to make a 'parallel' library using Haskell/CLR (if there is such a thing), or an FP language - Erik said that it can be done by compiling the code to an assembly, and when I load it up in my C# solution, ignoring the monads (IO(), etc.) will make it usable (or maybe compiling it to CIL will strip the monads altogether). But how about the other way around? Suppose that my FP library needs to use a business object (not necessarily a complex one, maybe a transfer object) from my C# assembly - can it be done?
  • evildictaitor - Devil's advocate
    punkouter wrote:
    [...] Suppose that my FP library needs to use a business object (not necessarily a complex one, maybe a transfer object) from my C# assembly - can it be done?


    There is such a thing - it's called F#, which is a sort of Haskell/CLR thing.

    If you're developing a program in F#, you can also do the reverse and load a program written in C# or VB.NET - since both are compiled to CIL, F# reflects over the program, attempts to work out which functions are monadic and which are computational, and labels them accordingly.

    So in conclusion, yes, an F# application can use C# logic, and a C# solution can use code written in F#.
    I'm just joining this discussion. But I can't help being reminded of where I was at in 2002, with C++ claiming that template-based metaprogramming was revolutionary, when in fact it was too complicated for most people to use to solve real-world problems, and as a result of that (and other things) a lot of those developers moved to the much more "designed with the needs of the developer" C# / .NET 1.1.

    I bought into pared-down generics; it made sense to type collections, for many reasons. They solved a very common problem.

    Now, I think that some of this stuff won't ever be used as much more than an intellectual curiosity - just like bind2() and whatnot weren't ever used in C++ on any large scale, STL/Boost, whatever. They just didn't meet any real need in particular, other than the need of developers to feel smart - and they had developers writing code that DID NOTHING (it was generic, though, and it WOULD work on anything), but only if YOU used it correctly (assuming you could figure out how). Which restates the last bit - lots of work went into writing generic libraries which were never used, because they were felt to be WAY out of touch with the needs of the joe blow coder.

    A path similar to this killed C++ in my mind. Why do you feel that this won't end in the same result?

    Also - why do you have to bother the developer with this at all? Can you do this using the intermediate representation, just like out-of-order execution of CPU instructions?

    The last thing that a developer likes to see is things in his code that he doesn't fully understand - those most likely get rewritten in the real world, since who wants to maintain code they don't understand?

  • ivan_ - g
    I haven't read all of the posts. But what does IO do besides telling me/compiler/whoever that the function is impure?

    For example, if I define a function which returns int but could also throw exceptions, then from a pure functional approach I can also form a superset of the two, IE = {int U Exception}, and let my function return IE; in that case it would be very "honest", wouldn't it? Why not have some suffix? int impure_function(x)...?
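    To sketch what I mean (in Haskell, since that's the language at hand; safeDiv and describe are just made-up names):

    -- IE = {int U Exception} is pretty much what Either gives you:
    safeDiv :: Int -> Int -> Either String Int
    safeDiv _ 0 = Left "division by zero"   -- the "Exception" half
    safeDiv x y = Right (x `div` y)         -- the "int" half

    -- The type forces callers to handle both halves:
    describe :: Int -> Int -> String
    describe x y = case safeDiv x y of
        Left err -> "error: " ++ err
        Right n  -> "result: " ++ show n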
  • evildictaitor - Devil's advocate
    ivan_ wrote:
    I haven't read all of the posts. But what does IO do besides telling me/compiler/whoever that the function is impure?

    For example, if I define a function which returns int but could also throw exceptions, then from a pure functional approach I can also form a superset of the two, IE = {int U Exception}, and let my function return IE; in that case it would be very "honest", wouldn't it? Why not have some suffix? int impure_function(x)...?


    The point of type purity is not to help the programmer (e.g. by function suffixing), but to help the compiler to optimise your code.

    With regard to exceptions: they are generally a thing of the imperative world, since in functional languages undefined behaviour is exceptional behaviour by default. For example, in C#:

    int factorial(int i){
      if(i == 0) return 1;
      return i * factorial(i-1);
    }

    recurses forever (eventually overflowing the stack) if you don't check for (i < 0) and throw an exception - factorial(-2) should throw an exception. However, in Haskell:

    factorial :: Int -> Int
    factorial i
      | i == 0    = 1
      | i > 0     = i * factorial (i - 1)

    throws a pattern-match failure (read "argument exception") when you try factorial (-2).

    You can, of course, implement exceptions:

    factorial :: Int -> Maybe Int
    factorial i
      | i == 0    = Just 1
      | i > 0     = fmap (i *) (factorial (i - 1))  -- fmap threads the Maybe through the recursion
      | i < 0     = Nothing  -- read "exception"

    It seems a lot of people are getting hung up on the notion of "honest". The point is not that the programmer is "honest" about his intentions - it's that the compiler can prove things about the program. The contracts in Haskell, and in other functional languages which employ such safeguards, cannot be trivially broken, so honesty is guaranteed - breaking it takes some real effort.
