The real difference between mutable and immutable is the explication of time, which is part of the immutable mode of programming. Everything ultimately gets tagged with its time, so there's no source for any confusion at all. The only reason imperative programs get confused is that they drop the time tags and hope for the actual execution on actual silicon to somehow, magically, stay in sync.

Of course that's silly.

I'm amazed that anyone would let themselves be confused by the simple example Erik gives, of storing the value enumerated in a foreach loop in a list. You must first understand first-order principles, like value vs. reference, and understand the semantics of your language. What point is there in using a language if you don't understand its semantics?

What you're storing in the list there is a _reference_, not a value. That is all there is to it. No mysteries. And since you store a reference that will be looked up later, of course you must understand what scope it comes from, and whether that scope gets reused or a new one is created each time.

This is really beginner stuff for anyone proficient in Scheme or Lisp. It has been for the last 30-40 years, ever since the "FUNARG problem" was formulated, and solved. Oh, and "func" is really pronounced "thunk".

It's just shocking to me, what I've heard in this talk so far. I'm halfway through by now. Shocking.

posted by Will48


interpret Zero = Just 0
interpret (Succ x) = interpretPos 1 x
interpret (Pred x) = interpretNeg 1 x

interpretPos n Zero = Just n
interpretPos n (Succ x) = interpretPos (n+1) x
interpretPos n (Pred x) = if n>0 then interpretPos (n-1) x else interpretNeg 1 x

interpretNeg n Zero = if n>0 then Nothing else Just n
interpretNeg n (Pred x) = interpretNeg (n+1) x
interpretNeg n (Succ x) = if n>0 then interpretNeg (n-1) x else interpretPos 1 x

This way we don't demand the existence of the extended domain prior to our defining it (we *didn't* define negatives above, but we could now, just by using Either instead of Maybe).

What I mean by my reference to monads is that I see the separation of the monad-composition timeline from the monad-execution timeline as essential to monads; it makes optimization (pre-processing) while composing possible, and that's what my version is doing. Also "monadic" is the way my interpretPos/interpretNeg pair encodes and carries along the additional data. Or something like that.

posted by Will48


That about sums it up for the last two or three decades of mainstream language development.

Yes, of course (static) types are a huge difference compared to the run-time orientation of Common Lisp. One thing everyone comparing CLOS with mainstream OO would always point out was the multiple-dispatch capability of defmethod - and yet here you've shown how to achieve that, even at compile time, through the type resolution of Haskell!

Finding such parallels really helps to clarify concepts, and to bring discipline even into coding under permissive languages. After all, there's nothing that can't be expressed in a bit of ASM. It's the *insight* that we're really after in CS, I think.

Thank you again for the great lectures!

posted by Will48


Your "views" at the end of the post make total sense to me. About the code, I was thinking along the lines of

interpret Zero = Just 0
interpret (Succ x) = interpret' 1 x
interpret (Pred x) = interpret' (-1) x

interpret' n (Succ x) = interpret' (n+1) x
interpret' n (Pred x) = interpret' (n-1) x
interpret' n Zero = if n>=0 then Just n else Nothing

This really goes to the separation of *timelines* (*combination* time vs *execution/run* time), which I see as the essential feature of monads. To do something (here, (1+)) *after* the processing is done, or *while* processing - *after* combining all the monadic actions, or *while* combining them.

But it *is* an embellishment.

posted by Will48


While watching this new lecture: at 8:00, after adding Pred to the simple language interpreter, you say there's no change in its semantic domain. I'd expect there is a change, from type Value = Nat to type Value = Maybe Nat, with the definition of Nat staying the same, as type Nat = Int. Does this make sense?

And thanks a lot for the lectures! Very interesting stuff, and a clear presentation. Can't wait for the next ones, monads especially. Interpreters are of course the essence of Monads (?) (and vice versa). In that light, where you have { interpret (Succ x) = (Just.(+1)) $$ interpret x }, it could have been redefined as an optimizing monad that pushes the (+1) inside, hoping to catch a rogue Pred early on, making { interpret (Succ (Pred x)) } always equivalent to { interpret (Pred (Succ x)) }, transforming both on the fly into just { interpret x }. Making { Succ (Pred Zero) } an invalid expression is too operational-minded IMO. I mean that in general, not here in this lecture, of course, where you have to keep things simple. And partial application would still be needed, of course.

posted by Will48


Thanks for the pointers!

posted by Will48


for (; _First != _Last; ++_Dest, ++_First)
    *_Dest = *_First;
return (_Dest);

So you see, here each element is copied one by one - and in C++ this invokes the copy constructor, which actually creates a new object to be copied (the problem move semantics came to address). This is unimaginably worse than calling memmove() once per 100,000-long block of pointers or so (provided the vector actually stores pointers to objects living elsewhere on the heap) - even if the pointers get copied one by one inside memmove(), no temporary object creation/destruction is going on at all. That's essentially what move semantics does, and that's just what memmove() does, isn't it?

posted by Will48


Why would a vector need linear time (38:10) to re-condense itself after erasure of an element? It is contiguous, and all it holds are pointers - why couldn't memmove() be used to slide them all back one notch in a single bulk operation? Of course this only applies in the case of non-inlined values, where you have the extra level of indirection necessary for that, just as with rvalue references. I guess in its quest for "efficiency" the STL stores "light" values inside the vector cells themselves, instead of boxing them up and storing just a pointer to the actual value on the heap. Still, the implementation could distinguish between the two cases, right?

posted by Will48


Array read is a producer: give it an Int and it gives you an Apple. Now, A <: F => (Int->A) <: (Int->F). Wherever we use an (Int->F), it gives us Fruits, so we must be prepared to handle Fruits there. But it's OK to use an (Int->A) instead, because it'll give us Apples, and an Apple can always go where a Fruit is expected.

With consumers it's the other way around: A <: F => (F->Int) <: (A->Int). Wherever we use an Apple consumer, (A->Int), it is only ever handed Apples, so we must be supplying it with Apples. That means we can use a Fruit consumer, (F->Int), in its place, because a Fruit consumer can always take an Apple instead of some other Fruit.

Graphically, we can envision producers/consumers as pipes with a certain diameter - small for Apples, bigger for Fruits. There are many more Fruits than there are Apples - Bananas too, etc. Now, a wide inlet of a Fruit consumer (us, in the 1st case) can just as well handle all input from the narrower outlet of an Apple producer. In the 2nd case, the narrow outlet of an Apple producer (us) can just as well feed into the wider inlet of a Fruit consumer.

Array read is a producer; array write is a consumer. Does this make sense?

EDIT: and in general, covariant, when translated from Latin I guess, just means "changing with the direction of change", and contravariant means "changing in the opposite direction of change". Say we do something that enlarges one thing, and another one grows too; then the second is covariant with the first w.r.t. our action. If it consistently shrinks, we say it's contravariant to the first w.r.t. our action.

For example, imagine you travel from city A to city B in one straight line. The further you go along the route, the further you are from city A, so your distance from city A is covariant with the distance you travel in the car. But the distance to city B is contravariant to it - it grows smaller as you travel. I guess a "travel operator" in its covariant form would reflect the growth of the distance as measured from the origin point, while in its contravariant form it would show the shrinkage of the distance as measured to the destination.

Or take rotation: say we have a point on a plane, in some coordinate system. Now let's apply a rotation to that plane. The point's coordinates will change WITH the plane's rotation, i.e. covariantly. But what if we apply the rotation to the coordinate system instead? Now the point's coordinates will change _in the opposite direction_ of the rotation, i.e. contravariantly. That's where these terms came into physics from.

posted by Will48
