Here's another little twist ... Eager evaluation is, in fact, a side effect! Think about it: eager evaluation means that you KNOW function arguments get evaluated BEFORE the function gets called. So, a PURE functional language must be a lazy language (more technically, "non-strict," and
sorry for the confusing lingo: STRICT means non-lazy).
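The observation that eager evaluation is observable (and hence a kind of effect) can be sketched in Python rather than F# (the function names here are mine, and laziness is only simulated with thunks, since Python is a strict language):

```python
def loud():
    # A side-effecting computation: we can observe whether it actually runs.
    print("evaluated")
    return 42

def const_zero(x):
    # Ignores its argument entirely.
    return 0

# Strict (eager) evaluation: the argument is evaluated before the call,
# so the side effect happens even though the value is never used.
const_zero(loud())          # prints "evaluated"

# Simulated laziness: pass a thunk instead of a value. Because
# const_zero never forces the thunk, no side effect is observable.
const_zero(lambda: loud())  # prints nothing
```

In a genuinely non-strict language, that deferral is the default rather than something the caller must arrange by hand.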
BZZT. My non sequitur detector went off right here, Brian. I think this is over-reaching, as is the later comment about what laziness provides. I'm not completely convinced that the distinction is merely a matter of taste, but I'm also not persuaded that
a "pure functional" notion that abandons the well-definedness condition of common mathematics is an appropriate alternative.
I've been carrying these two videos around on my laptop since they were posted, and I finally watched them today. (I carried the F# distro around longer, but started getting into it last week with the release that has #light.)
The comments are almost as good as the videos. I agree that the demos move quickly and the camera focusing and panning can be tough. As I recall, the demo at lang.net was more difficult because we couldn't see what Don was typing, just the results (and don't
trust my memory for that).
The comments are also very useful in showing where the unfamiliar or the dissonant show up for people.
I've wanted to get back into the cockpit of a functional language for some time now, and it was easy to settle on F# as a place to start. My selection was motivated by its roots in ML and OCaml, which I want to know more about. More than that, having the F# system grounded on .NET provides yet another avenue for making use of all of the libraries and functions available on the CLI. It is sort of a nothing-to-lose investment: I can use F# where it covers a sweet spot for me and use other programming systems when that's more appropriate.
As long as it runs on the CLI and components interoperate between F# and other .NET languages, this strikes me as very useful. So that's a bonus, for me, over firing up a Scheme compiler or some other not-yet-.NET functional language.
Matt0210: I had the same experience downloading (I never got much over 2.5 KB/s on a 1.5 Mbps DSL line), and I was using Internet Explorer 6.0. This morning I finally killed it after getting only 130 MB of the thing downloaded overnight. So I don't have
a local copy for careful review and I haven't managed to watch it either. Bummer. It looks like downloading is throttled and streaming isn't (makes sense), but it could just be congestion.
I gave up because I was avoiding doing my real work out of concern for losing the connection. So I tossed the connection and I'll try again another time.
Knowing the file size in advance would have helped me deal with this. I sent a note to Robert Scoble requesting that for Channel 9 videos.
Orbit, my understanding is that the Microsoft Office 12 native format will use XPS packages to hold the XML parts and other elements (images, etc.). I suppose you could have the word document and an XPS (Reach) Document side-by-side in the same package
if that was useful for shipping around.
The greater value may be that this is also how PowerPoint and Excel will do it, and there is room for many custom document-oriented and maybe developer-oriented uses. (I would love to see it as a carrier for the components of a build/project, for example.)
I think the use of "Paper" is too narrow, but it seems that a lot of the wood behind this particular arrow is oriented to production printing models, although it could be nifty in the multi-function space too. It'll be interesting to see the next cut of the
I have taken a more careful stroll through the open-source combination case and added a further blog entry about it.
Sometimes I think I am making the situation murkier rather than adding any clarification. We'll see how that sits after I've been away from it and maybe get some feedback. Meanwhile, it has been a useful way to play hooky from my studies while convalescing.
I am delighted to see this Channel 9 presentation. I share Jean Paoli's vision for the accessibility of cultural, social, and historical artifacts in open formats. That's the real big deal here.
It is great to be given an opportunity to be acquainted with him in this way, along with Brian and his enthusiasm for all of the heavy lifting that it takes to deal with the real complexity of a few billion legacy documents.
After the discussion on the video, I realized that it is a bit more complicated. The pieces of open-sourced code that accomplish reading and writing in conformance with the schemas (isolated in an object of some kind, let us hope) must carry the attribution required by the royalty-free license. I think I would segregate that code too, because anyone who modifies it must either preserve the license-required conditions or remove the license statement and not use the code to access the Microsoft format.
I would use that arrangement because I would not want anyone who used an open-source offering of mine to get into difficulty and find themselves in infringement because the license conditions were not preserved in their derivative work. (I have a summary of
the conditions in
a follow-up on my blog. Notice that exactly the same thing must be done in using the OASIS OpenDocument formats and schemas. Although Sun Microsystems does not require an acknowledgment of the license, I would make sure to put something in about that
to protect downstream developers in the same way.)
With luck, we're talking about pieces that someone would either use intact (apart from fixes) or would replace completely in making a derivative work of the open-source portions.
I've been looking at exactly those cases. The Microsoft Office XML Reference Schemas license (the one proposed for the new formats too) does not permit derivative works. Also, the royalty-free patent license works only for the portions of programs that
read and write files that conform precisely to those schemas.
That's not the end of the world, but it is probably why the wording of the FAQ/Q&A response is a little, uh, indirect.
Basically, you cannot envelop these schemas in an Open Source Definition-compatible license, because doing so would violate the Microsoft license. You can (and must) segregate the Microsoft material so that there is no license confusion, sort of like treating it as a redistributable that can only be redistributed under its own license but is employed in works that have broader (that is, OSD-compliant) licenses for the added-value parts.
That's not a unique situation. I go into this more elaborately in my analysis of the Copyright part of these licenses
on my blog.
Thinking about it a little more, this should be done even when same-license materials from different authors (that is, copyright holders) are commingled. You need to know who the several copyright holders are in case there is a question or a desire to negotiate a special license.
Scoble was very excited about the buzz around this announcement, and now I can see why.
As a document system and interoperability guy, I must admit this is very exciting. I am particularly taken with the lessons learned from the last format change (to docfiles in Word 97) and the careful use of Zip as a packaging technology for hierarchical inclusion
of content, components, and anything else you want to carry around (including the old format in the test version that Brian demonstrated).
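Since the new formats are ordinary Zip packages, that hierarchical inclusion can be inspected with any Zip library. Here's a minimal Python sketch (the file path is hypothetical, and the part names mentioned are illustrative):

```python
import zipfile

def list_package_parts(path):
    """Return the part names inside a Zip-based document package."""
    with zipfile.ZipFile(path) as pkg:
        return pkg.namelist()

# Usage against a hypothetical file:
#   list_package_parts("report.docx")
# would return entries such as "[Content_Types].xml" and "word/document.xml",
# plus whatever images, components, or legacy payloads the package carries.
```

The same call works unchanged for spreadsheet and presentation packages, which is part of the appeal of Zip as the common carrier.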
The retrofit of the new format all the way back to Office 2000, XP, and 2003 is also heartwarming as a powerful move to sustain interoperable reach across generations of the application.
The document-management and content-management folk are not going to miss the value of this, and the comment about SharePoint appeal is going to catch a lot of attention from those with ideas about other interoperable applications of distributed documents.
This is goodness, guys.
(I notice I comment like I am blogging, so now I'll go do that too.)
Great video. Thanks for the blog about it, Jeff, because I might not have checked it out otherwise. 93MB file too. I guess it's time for the "Best of Channel 9 Videos on DVD" series.
In addition to demonstrating the advantage of devlabs, I saw some importance, for myself, in understanding how managed code impacts consideration of software trustworthiness, especially for integration of components from multiple sources. That angle was enough
for me to snag the video for future pondering.
I loved the anecdotal material. In 1961, when I arrived in New York City to do some "Applied Programming Development" work, one job was to figure out how to build a version of RPG so our Univac small-scale systems could run IBM 1401 applications. I had forgotten
the importance of the card-input event/message loop until it was brought up in the video. Scary stuff for a procedural-language guy who cut his teeth on FORTRAN (I and II), ALGOL 60, and assembler.