Just FYI, I've tried three times now to download and play this content through the Zune desktop software. Each time I try to play it, it gets about 20 seconds in and then drops out with an error message about the file being "corrupt or not valid".
I'm currently downloading the WMV version directly from the site, which I assume will work.
This presentation is definitely touching on some concepts that I've wanted to understand for a while now. In some ways it's been enlightening. In other ways, I don't think I'm any farther ahead in my understanding.
Even if I don't feel completely enlightened, I'm quite pleased that this has crystallized for me exactly why I struggle to understand these concepts.
First, I often don't know whether a particular statement is to be accepted axiomatically, or whether I should be able to deduce it myself from information presented earlier together with my own logic and intuition. For example, the bit about [paraphrasing] "...if there is a demand for an 'A', and with resources 'R' we can satisfy the demand, we can transform it so that there is now an 'I.O.U.(A)' together with the resource 'R', and there is no longer a demand for 'A'...". And the bit about it going back the other way. Are such statements axioms, or logical conclusions (or something else)?
Second, and to a lesser degree, when it gets down to the very fundamental stuff, I sometimes struggle to understand what I have 'in hand'.
Truth: Is set 'A' equal in size to set 'B'?
Me: Sure, I count 3 elements in set 'A' and 3 elements in set 'B'. They're the same size.
Truth: You can't count them.
Me: What? Why?
Truth: Because you don't have numbers.
Truth: See, it's easy: there's this function 'f', defined 'blah blah blah', that is a bijection between set 'A' and set 'B', so...
Me: So I have sets and functions, but not numbers?
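For what it's worth, the pairing-off trick in that dialogue can be sketched in a few lines of Haskell (`sameSize` is a name I'm inventing here, using lists to stand in for finite sets): two collections are the same size exactly when their elements can be paired off with none left over, and no numbers appear anywhere.

```haskell
-- A sketch of comparing "set" sizes without numbers: pair elements off
-- until both collections are exhausted. No counting happens anywhere.
sameSize :: [a] -> [b] -> Bool
sameSize []     []     = True              -- both empty: sizes match
sameSize (_:xs) (_:ys) = sameSize xs ys    -- pair one off from each side
sameSize _      _      = False             -- one side ran out first: sizes differ
```

Each recursive step pairs one element of 'A' with one element of 'B', which is exactly the role the bijection 'f' plays in the dialogue.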
Even if I didn't completely understand it, I did quite enjoy it. I like the feeling that enlightenment is within reach. Thanks, and looking forward to part 4.
I am mostly ignorant of C++ (and even more so of C++0x), so please forgive me if this makes no sense.
It seems to me that what "std::move" accomplishes could (and perhaps should) be inferred from how the code is written. Couldn't the compiler reason, "this is the last place this reference is...referenced, therefore it is safe to use move semantics here"?
Furthermore, supposing you do specify "std::move", couldn't the compiler issue a strong warning, or even an error, should the moved-from reference be...referenced at a later point?
This is in response to your comments on the article about code comments being bad for code clarity. Did you have a chance to read the entire article before the show? I ask because the author does acknowledge (admittedly toward the end of the article) exceptional cases where comments can actually add value.
I think the real problem with comments is that, historically, commenting code was often presented as simply a Good Thing™ to do. Like so many other things, inexperienced programmers tend to accept this advice and apply it in the most obvious (and unhelpful) ways.
We need to get the message out there that writing good comments is similar to writing good code in that it takes some effort and usually less is more.
I agree with the other responders: excellent. Thank you, Dr. Hutton.
I have to say, in a way I was disappointed that program fusion was so effective at improving the performance of the program. One of the nicest properties of Haskell is that its functions are so very composable, and because Haskell is pervasively lazy, a lot of unnecessary computation is avoided. It seems to me that "by hand" program fusion runs somewhat counter to function composition.
I wonder if there would be some way of doing this kind of program fusion at compile time, based on some function-algebra rules engine. Perhaps this relates to Dr. Hutton's research.
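As it happens, GHC does offer a compile-time hook along these lines: the RULES pragma, which lets a library author state an algebraic rewrite law that the optimizer is free to apply. A minimal sketch (the rule name and the example pipelines are my own invention):

```haskell
-- A GHC RULES pragma: an algebraic law the optimizer may apply at
-- compile time. This one fuses two list traversals into a single one.
{-# RULES
"map/map fuse" forall f g xs. map f (map g xs) = map (f . g) xs
  #-}

-- Semantically, the fused and unfused pipelines agree:
unfused, fused :: [Int] -> [Int]
unfused xs = map (+ 1) (map (* 2) xs)
fused   xs = map ((+ 1) . (* 2)) xs
```

GHC's standard libraries use this same mechanism for their foldr/build "short-cut fusion" rules, so at least some of the by-hand fusion from the lecture does already happen automatically under optimization.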
I'm finding the discussion on type systems very interesting and insightful. Once I started to understand Haskell, I realized that many of my ideas about the trade-offs between static and dynamic type systems were misconceptions. As a result, I now find it frustrating...exasperating, really, when people claim that dynamic type systems have certain inherent advantages which aren't advantages of dynamic types per se, but rather disadvantages of the best-known implementations of static typing (C++, Java, C#, etc.).
These thoughts lead me to wonder just what, exactly, *are* the inherent advantages of dynamic type systems. I think Michael Rys's distinction of pessimistic and optimistic systems is the insight I've been seeking. I wouldn't be surprised if this becomes
the canonical distinction in the future.
Perhaps like exoteric, I felt a little dissatisfied with the return type of Parser being "list of..." rather than "Maybe...". I've come to admire the expressive power of Haskell's types, and this seems like a bit of a hack. Considering, as you pointed out, that both the list and Maybe types are monads, I wonder where the advantage lies in using the list type. Both allow for map, filter, and lift operations, no?
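To make the comparison concrete, here are the two candidate types side by side (the names `ParserL` and `ParserM` are mine; the list-of-results shape is the one from the lectures). The practical difference is that the list type can report several alternative parses of the same input, while Maybe commits to at most one.

```haskell
-- List-of-results parser (the lecture's style): [] means failure, and a
-- list of several pairs means the input parses ambiguously.
newtype ParserL a = ParserL { runL :: String -> [(a, String)] }

-- Maybe-based parser: Nothing means failure, Just is the unique result.
newtype ParserM a = ParserM { runM :: String -> Maybe (a, String) }

-- The primitive that consumes one character, in each style:
itemL :: ParserL Char
itemL = ParserL (\inp -> case inp of
                           []     -> []
                           (c:cs) -> [(c, cs)])

itemM :: ParserM Char
itemM = ParserM (\inp -> case inp of
                           []     -> Nothing
                           (c:cs) -> Just (c, cs))
```

For deterministic grammars the two are interchangeable, which is presumably why the choice feels arbitrary; the list type only earns its keep once you want a nondeterministic choice operator that keeps every alternative alive.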
Well, that's ironic. I'm loving my new Zune HD, but I was pretty bummed to find a scratch on it (already!) recently. Not sure how it got there, but I suspect it was one of those times I forgot and tossed it in my pocket along with my keys.