
Comments

Will48
  • Erik Meijer - Functional Programming From First Principles

    I must say I'm really flabbergasted by this. The world, of course, is immutable. You cannot change anything. If 5 minutes ago x was equal to 5, you cannot change that fact, in the real world. But in imperative programming, you can. You just write x=15. Does this act of re-writing a register change the fact that it had 5 stored in it just a flicker ago? Of course not. You just choose to forget about it. You don't care.

    The real difference between mutable and immutable is the explication of time, which is part of the immutable mode of programming. Everything ultimately gets tagged with its time, and so there's no source for any confusion at all. The only reason imperative programs get confused is that they drop the time tags and hope for the actual execution on actual silicon to be somehow magically in sync.

    Of course that's silly.

    I'm amazed that anyone would get intentionally confused by that simple example that Erik provides, of storing the value enumerated in a foreach loop in a list. You just have to understand first principles first, like value vs. reference, and understand the semantics of your language. What point is there to using a language if you don't understand its semantics?

    What you're storing in the list there is a _reference_, not a value. That is all there is to it. No mysteries there. And since you store a reference that will be looked up later, of course you must understand what scope it is from, and whether it gets reused or a new one is created each time (see the little sketch at the end of this comment).

    This is really beginner stuff for anyone proficient in Scheme or Lisp. It has been so for the last 30-40 years, since the "FUNARG problem" was formulated, and solved. Oh, and "func" is pronounced "thunk", really.

    It's just shocking to me, what I just heard in this talk. I'm half-way through by now. Shocking. 
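
    To illustrate (a toy sketch of my own, not code from the talk, and in Haskell rather than the imperative setting the talk uses): an IORef can stand in for the single mutable loop variable. The closures that capture the reference all report whatever the cell holds when they finally run; the ones that captured the value behave as people expect.

      import Data.IORef
      import Control.Monad (forM)

      main :: IO ()
      main = do
        -- one shared mutable cell, like the single loop variable in a foreach
        cell <- newIORef (0 :: Int)
        byRef <- forM [1 .. 3] $ \i -> do
          writeIORef cell i
          return (readIORef cell >>= print)   -- stores the *reference*; the read happens later
        byVal <- forM [1 .. 3] $ \i -> do
          writeIORef cell i
          v <- readIORef cell
          return (print v)                    -- stores the *value* seen at this iteration
        sequence_ byRef                       -- prints 3, 3, 3
        sequence_ byVal                       -- prints 1, 2, 3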

  • C9 Lectures: Dr. Ralf Lämmel - Advanced Functional Programming - Evolution of an Interpreter

    Yes, but what if we didn't have full Int to use at the intermediate level? My approach would still work, as in

      interpret Zero     = Just 0
      interpret (Succ x) = interpretPos 1 x
      interpret (Pred x) = interpretNeg 1 x

      interpretPos n Zero     = Just n
      interpretPos n (Succ x) = interpretPos (n+1) x
      interpretPos n (Pred x) = if n > 0 then interpretPos (n-1) x
                                         else interpretNeg 1 x

      interpretNeg n Zero     = if n > 0 then Nothing else Just n
      interpretNeg n (Pred x) = interpretNeg (n+1) x
      interpretNeg n (Succ x) = if n > 0 then interpretNeg (n-1) x
                                         else interpretPos 1 x

    This way we don't demand the existence of the extended domain before we define it ourselves (we didn't define negatives above, but we could now, just by using Either instead of Maybe - sketched below).

    What I mean by my reference to monads is that I see the separation of the monad composition timeline from the monad execution timeline as essential to monads, which is what makes optimization (pre-processing) while composing possible - and that's what my version is doing. Also "monadic" is that my interpretPos/interpretNeg pair encodes and carries along the additional data. Or something like that. :)
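
    Roughly like this (my own sketch; the Expr declaration is my guess at the lecture's data type, and the names interpretE, pos and neg are mine):

      data Expr = Zero | Succ Expr | Pred Expr

      -- Extended domain: Right n for a non-negative result, Left n for the
      -- magnitude of a negative one, still with no negative numbers used at
      -- the intermediate level.
      interpretE :: Expr -> Either Int Int
      interpretE Zero     = Right 0
      interpretE (Succ x) = pos 1 x
      interpretE (Pred x) = neg 1 x

      pos, neg :: Int -> Expr -> Either Int Int
      pos n Zero     = Right n
      pos n (Succ x) = pos (n+1) x
      pos n (Pred x) = if n > 0 then pos (n-1) x else neg 1 x

      neg n Zero     = if n > 0 then Left n else Right n
      neg n (Pred x) = neg (n+1) x
      neg n (Succ x) = if n > 0 then neg (n-1) x else pos 1 x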

  • C9 Lectures: Dr. Ralf Lämmel - Advanced Functional Programming - Type Classes

    ... bringing some bits of [LISP] expressiveness to a [statically] typed setting.

    That about sums it up for the last two or three decades of mainstream language development. :)

    Yes, of course (static) types are a huge difference compared to the run-time orientation of CLisp. One thing all comparisons of CLOS and OO would always point out was the multiple-dispatch capability of defmethod - and yet here you've shown how to achieve that, even at compile time, through Haskell's type resolution! (A tiny sketch of what I mean follows at the end of this comment.)

    Finding such parallels really helps to clarify concepts, and to bring discipline even into coding under permissive languages. After all, there's nothing that can't be expressed in a bit of ASM. :) It's the insight that we're really after in CS, I think.

    Thank you again for the great lectures!
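
    The kind of two-argument dispatch I mean, as a toy sketch of my own (not code from the lecture), uses a multi-parameter type class: the instance is picked from the static types of both arguments, where defmethod would look at the dynamic types.

      {-# LANGUAGE MultiParamTypeClasses #-}

      data Asteroid = Asteroid
      data Ship     = Ship

      -- The chosen instance depends on the types of *both* arguments.
      class Collide a b where
        collide :: a -> b -> String

      instance Collide Asteroid Ship where
        collide _ _ = "asteroid hits ship"

      instance Collide Ship Asteroid where
        collide _ _ = "ship hits asteroid"

      instance Collide Ship Ship where
        collide _ _ = "ships bounce off each other"

      main :: IO ()
      main = putStrLn (collide Ship Asteroid)   -- resolved at compile time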

  • C9 Lectures: Dr. Ralf Lämmel - Advanced Functional Programming - Type Classes

    So is this, like, the Haskell encoding of CLisp's defmethod?

  • C9 Lectures: Dr. Ralf Lämmel - Advanced Functional Programming - Evolution of an Interpreter

    Thanks for clarifying the (Maybe Value) thing for me. :) I really did have Meaning and Value conflated.

    Your "views" at the end of the post make total sense to me. About the code, I was thinking along the lines of

     

      interpret Zero = Just 0

      interpret (Succ x) = interpret' 1 x

      interpret (Pred x) = interpret' (-1) x

      interpret' n (Succ x) = interpret' (n+1) x

      interpret' n (Pred x) = interpret' (n-1) x

      interpret' n Zero = if n>=0 then Just n else Nothing

     

    This really goes to the separation of timelines (combination time vs. execution/run time), which I see as the essential feature of monads: doing something (here, the (1+)) after the processing is done, or while processing - after combining all the monadic actions, or while combining them.

    But it is an embellishment.

  • C9 Lectures: Dr. Ralf Lämmel - Advanced Functional Programming - Evolution of an Interpreter

    Hi,

    While watching this new lecture, at 8:00, after adding Pred to the simple language interpreter, you say there's no change in its semantic domain. I'd expect there is a change, from type Value = Nat to type Value = Maybe Nat, with the definition of Nat staying the same, as type Nat = Int. Does this make sense?

    And, thanks a lot for the lectures! Very interesting stuff, and a clear presentation. Can't wait for the next ones, monads especially. Interpreters are of course the essence of Monads (?) (and vice versa :) ). In that light, when you have { interpret (Succ x) = (Just.(+1)) $$ interpret x }, it could've been redefined as an optimizing monad that pushes the (+1) inside, hoping to catch a rogue Pred early on, making { interpret (Succ (Pred x)) } always equivalent to { interpret (Pred (Succ x)) } and transforming both on the fly into just { interpret x }. Making { Succ (Pred Zero) } an invalid expression is too operational-minded, IMO. I mean that in general, not here in this lecture of course, where you have to keep things simple. And partial application would still be needed, of course. (A rough sketch of the cancellation follows below.)
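
    Something in this direction (my own rough sketch, done as a separate normalizing pass rather than a real optimizing monad; the Expr declaration is again my guess at the lecture's data type):

      data Expr = Zero | Succ Expr | Pred Expr

      -- Push the +1/-1 "inside": matching Succ/Pred pairs cancel, and only the
      -- net offset is rebuilt on top of Zero, so Succ (Pred Zero) comes out as Zero.
      normalize :: Expr -> Expr
      normalize = go 0
        where
          go n (Succ x) = go (n + 1) x
          go n (Pred x) = go (n - 1) x
          go n Zero
            | n >= 0    = iterate Succ Zero !! n
            | otherwise = iterate Pred Zero !! negate n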

  • C9 Lectures: Stephan T. Lavavej - Standard Template Library (STL), 2 of n

    > In this case, you're thinking about vector<unique_ptr<T>> or vector<shared_ptr<T>>.

    Thanks for the pointers! :)

  • C9 Lectures: Stephan T. Lavavej - Standard Template Library (STL), 2 of n

    Yes, but in practice they do. But Stephan did clarify something for me (that should've been clear to me anyway) - with indirection, the objects indeed won't be stored contiguously. So adding/erasing elements at random positions would get faster, but access would become slower. So I guess it depends on what a given vector is mainly used for. If its elements get shuffled around a lot by a given algorithm, maybe my scheme would still be beneficial.

  • C9 Lectures: Stephan T. Lavavej - Standard Template Library (STL), 2 of n

    Yes, memmove() is linear, but with an incredibly smaller constant factor (to the point, I was hoping, of insignificance) than assigning/copying objects one by one in a loop. My point was to always use indirection (with a T** m_begin), which makes it possible to call memmove() instead (i.e. T* is a scalar just like int, and std::string* is a scalar too). Then each erase() would hopefully take a very small amount of time. I'm guessing, but I think for vectors with lengths in the low thousands it would make the "dumb" approach acceptably fast. Of course the smart algorithm with two iterators doing all the work in one pass is always better. Thanks for the explanations. :)

  • C9 Lectures: Stephan T. Lavavej - Standard Template Library (STL), 2 of n

    Here's how erase() is actually implemented in VS2008:

        for (; _First != _Last; ++_Dest, ++_First)
            *_Dest = *_First;
        return (_Dest);

    So you see, here each element is copied, one by one - and in C++ that means a full copy assignment per element, which for something like std::string allocates and copies the contents (the problem that move semantics came to address). This is obviously unimaginably worse than just calling memmove() once per 100,000-long block of pointers or so (provided that the vector actually stores pointers, with the objects themselves living elsewhere on the heap) - even if the pointers get copied one by one inside memmove(), there is still no temporary object creation/destruction going on at all. That's essentially what move semantics achieves. And that's just what memmove() does, isn't it?