Is the primary motivation practical in the sense of invariants or also geared towards compiler tricks?
Good question ... if all the types in use are immutable collections, and all the computations on those collections happen within LINQ queries, could the compiler assume purity and therefore allow laziness and referential transparency by default in those cases?
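To make the laziness part of this concrete: even without any compiler-level purity analysis, LINQ's `Select` already uses deferred execution, and `ImmutableList<T>` from the `System.Collections.Immutable` package guarantees the source can't change underneath the query. The sketch below (my own illustration, not anything from the team) shows that combination; the referential-transparency-by-default idea in the question would go beyond what the compiler does today.

```csharp
using System;
using System.Collections.Immutable;
using System.Linq;

class Program
{
    static void Main()
    {
        // An immutable source: no other code can mutate it after creation.
        var numbers = ImmutableList.Create(1, 2, 3);

        // Select is lazy: the lambda does not run when the query is defined.
        var squares = numbers.Select(n =>
        {
            Console.WriteLine($"evaluating {n}");
            return n * n;
        });

        Console.WriteLine("query defined, nothing evaluated yet");

        // Evaluation happens only here, during enumeration.
        foreach (var square in squares)
            Console.WriteLine(square);
    }
}
```

Because the source is immutable, enumerating `squares` twice is guaranteed to see the same elements, which is the property the question is gesturing at.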
Sorry for the late reply -- I completely missed the second page
1. Why did they choose the NuGet mechanism for the ImmutableCollections feature?
We chose NuGet for two reasons. First, since Visual Studio 2012 the NuGet package manager has been available out of the box. And, more importantly in my mind, NuGet already has a huge community.
Is this something we should expect going forward for other features from them?
Yes. In fact, we've already shipped two .NET 4.5 components this way: MEF and TPL Dataflow. They aren't in preview anymore and are treated like any other .NET Framework component that ships together with the redist. For example, they're fully supported.
2. What made them choose NuGet over Visual Studio extensions?
NuGet is for libraries; Visual Studio extensions are about tooling, i.e. extending Visual Studio itself. For example, the NuGet Package Manager is a Visual Studio extension.
3. What is the next step from this preview, or more importantly, how do they envision the roadmap for these NuGet-delivered features...
We are still figuring out the exact details, but our goal is to invest more and more in components delivered out-of-band.
4. How do these features finally make their way into the .NET Framework?
If you're asking whether we plan to eventually treat them as fully supported .NET Framework components, then the answer is yes. If your question is whether we plan on redistributing them via the .NET Framework setup, then the answer is no.