The Cup<T> came from the CLR BCL team when we were implementing generics in CLR V2. Anthony Moore (who was the dev lead for the team at the time) has a brilliantly dry sense of humor. Funniest. Cup. Ever. Jason
Sorry, thinking about this, my wording was not very nice. I guess I just wanted to make the point that regardless of the process you choose inside MS, there is no way that you can make sure to not break some apps out in the wild when you do red bit changes.
David - no worries, your point is valid and well taken.
If this is the sort of servicing you are considering, it will end really, really badly.
I don't believe your scenario is new to NETFX 3.5, however. This is how things work today. For example, if vendor C, authoring a 2.0 application, requests the hot fix (QFE) for the error or makes the V2.0 SP that contains the fix a prerequisite, then installing 2.0 application C will break application A. It isn't even necessary to construct the transitive case of NETFX 3.5 being involved (app B).
Our choice here is to either try to push for some kind of application isolation (each app gets its own copy of the runtime), or to push our compatibility bar high and move things forward. The blog posts I have linked to above provide some background on the
pros and cons of these two scenarios. I believe the cons of app-local install outweigh the benefits.
In Windows, there is a kind of lightweight side-by-side story called shimming.
We actually do have a level of shimming the .NET FX as well. As an example, we implemented deterministic COM apartment initialization in the 2.0 product. We found that too often in the v1.x product the "first one in wins" semantics would break applications
in random ways. We added an app compat switch (i.e. a shim) that allows you to turn this behavior back to the old ways. We have a CLR version binding shim that allows us at managed code startup to check the registry to see if this process has overrides on
which version of the CLR should be loaded. We can deploy the shim to make sure a particular version of an application is not broken by deployment of a new NETFX.
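A related, documented surface for that version-binding shim is the application configuration file, which can pin the runtime version loaded for a process. A minimal sketch (the application name and version string are illustrative):

```xml
<!-- MyApp.exe.config (hypothetical app name).
     Tells the CLR startup shim which runtime version to load for
     this process; the version string shown is illustrative. -->
<configuration>
  <startup>
    <supportedRuntime version="v1.1.4322" />
  </startup>
</configuration>
```

With a file like this next to the executable, deploying a newer NETFX on the machine does not change which runtime that particular application binds to.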
I will argue that shims are a necessary evil but not a silver bullet. Windows ships a high number of QFEs and does not always catch/shim behavior changes. The easiest example to consider is the line-of-business application inside of a corporation (say the WinForms app you use to fill out employee performance review data, as I just did last week). There is no way for us to ever test that application directly before shipping a bug fix. We all do our best based on coverage and vendor reporting. Some things require a fix after the fact.
So I agree there is a healthy tension here between breaking old apps vs the down sides of pure isolation (as outlined in my other blog entries).
1) Never ever do red bit changes to deployed DLLs because it will break apps on customers' desktops
The problem here is that the simplest red bit change is just a bug fix. I think it becomes too constraining to never fix bugs in deployed DLLs. What we do instead is try to set a very firm bar on what is acceptable, we do due diligence on reviews, we test
the heck out of it, and we fall back to shimming when we are truly between a rock and a hard place.
3) In particular, don't service deployed binaries with red bit changes in order to enable cool new stuff, like integration of DLINQ with data binding.
This provides insight into another set of tension points. One of the key values of Visual Studio + NETFX is the seamless integration of the entire stack. It really makes it far easier to author your applications. We could solve this problem by releasing a new SxS NETFX frequently with the features. This is not desirable to many of our customers: enterprises don't want to take big new releases very frequently, and ISVs like to see something get out to a lot of machines so they don't have to redistribute new versions along the way. Internally we have the pressure of servicing a lot of product versions in parallel (we only have so much capacity). So we need the right kind of balance here.
This is an excellent debate with hard trade offs to make. I can tell you that we wrestled with all the same kinds of questions ourselves before arriving at the plan we have in place now. Later this year we will have our CTPs of the NETFX 3.5 stack. I'm
looking forward to having people try it out and tell us how things are going.
David - You are correct that to use LINQ you will need to provision the 'NETFX 3.5' release on the machine. It was never the intent to have a long term xcopy deployment for LINQ or any other component of the .NET Framework.
There are trade offs to be made in packaging. You can break things up into very small units and try to install them separately; however, that explodes the test and servicing matrix. You could try to install the FX pieces with the application itself, but this leads to other side effects (can I find all copies to service them? what is the working set hit of having many flavors of the same code running on the machine at once?). I outlined some of these kinds of issues at the extreme edge of isolation, static linking of code, in earlier posts on my blog.
The other thing to consider is the relationship between pieces of code in the system. When we ship LINQ, we want to make sure it works well with the existing components. You'll see data binding, for example, in UI frameworks. I know it's hard to believe, but we often find bugs or design disconnects when trying to hook up something new to something old (no, no, it's true). That requires us to service the old code. Consider the spectrum of solutions people want to write:
1. Author a console app to use LINQ. No problem, this should just work by adding just LINQ to the machine.
2. Manually integrate LINQ into a UI app. This will probably work without touching the old stuff. You will be loading a new compiler runtime library (in the case of VB or other languages that add LINQ support), so that is the most likely culprit to break
existing code and require a bug fix.
3. You want to do data binding with VS rather than plumbing everything manually. Shoot, found a bug in that UI layer I shipped 1.5 years ago, need a fix to make it work. etc.
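The first scenario above is worth picturing concretely. A minimal sketch of a standalone console app using LINQ to Objects (names and data are illustrative), which touches only the new LINQ pieces and none of the older UI stack:

```csharp
// Scenario 1 sketch: console app + LINQ to Objects.
// Only the new LINQ support (compiler + runtime library) is needed;
// no existing UI framework code on the machine is exercised.
using System;
using System.Linq;

class Program
{
    static void Main()
    {
        int[] scores = { 97, 62, 88, 74 };

        // Query the in-memory array with the new query syntax.
        var passing = from s in scores
                      where s >= 70
                      orderby s descending
                      select s;

        foreach (var s in passing)
            Console.WriteLine(s);
    }
}
```

Scenarios 2 and 3 start from code like this but wire the query results into existing data-binding and designer layers, which is exactly where the old and new components meet.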
Given the broad spectrum of solutions people are writing, we need to balance all of the elements (simplicity, servicing, isolation/impact, security, working set).
I simply don't see how a quite high up manager will be able to evaluate how bad the impact on some little red bit change way down in WPF will be to deployed apps
Darn, looks like my pointy hair is showing too much these days. I could tell you about being a dev on the project for many years and writing a bunch of code that is in the engine today, but that's just me trying to soothe my ego.
In reality we aren't going to rely on any one person to vet the changes. We are following a far more rigorous process (https://blogs.msdn.com/jasonz/archive/2005/04/25/411925.aspx).
Beyond that let's not paper over reality: making any change, no matter how harmless, can break an app. I have seen bugs exposed simply by making the engine perform faster. So your point on the danger of changes is valid. At this point I am not convinced,
however, that going to a point of pure isolation is the best answer on the whole.
On the comparison to Windows, we are following the same kind of guidelines they are. Windows has an even bigger compat challenge in that they really don't have a side by side solution. That means making WIN32 bug for bug compatible with the code written many,
many years ago. We added side by side into .NET precisely for this reason. But it is not a total magic bullet.
Finally let me wrap up by saying that we are evaluating new models for this kind of problem space. In particular we announced at Mix'06 the WPF/E project including managed code support. This problem space is quite interesting and allows us to look at other
models. I hear the feedback from the community, and I do want you to know we are working on ways to get the best balance out of the system.
1. The May CTP of LINQ is using simple copy deployment only because setup was not prepared for that release. Our setup team is working on the next .NET Framework now, so some future build will have it integrated.
2. OS support: we are not planning to have the NETFX partially installed on operating systems that cannot support the entire system. There are a couple of reasons for this, including simplicity and our desire to reduce the overall test matrix. For this reason, “NETFX 3.5” will not support W2K, as pieces of NETFX 3.0 do not support this system.
3. Red Bits: red bits are simply servicing changes (bug fixes, in general) to components that have already been installed. When “NETFX 3.5” ships, we’ll know what the final set of red bit changes is. It will include any rolled-up bug fixes and a small set of changes required to help enable the green bits.
4. Packaging. I understand the benefits that come from packaging small pieces of a framework. This is not the overall design the NETFX has taken historically. We believe it is very important to be able to do central servicing of assemblies in the case of
a security issue. It is also easier for a developer to write code that says “do you have NETFX 3.0 on the machine” instead of testing on a much smaller granularity. The trade off is the size of the overall package and the potential for impact of bug fixes
installed by other applications. There are reasonable sides to this argument. I do hear and appreciate your feedback.
5. WPF for “NETFX 3.5”. WPF is just another piece of NETFX. So just like any other component, it will have some red bits changes and will add some new green bits. The final set of features is TBD.
Jason spoke about slow startup times due to JIT compilation and how this is a big target for optimization. I thought this compilation was done only the first time a new assembly was loaded...? It seems to me to be a negligible issue.
You are correct: compilation is done only on first load, and then only for the methods you actually call. Even so, the time it takes is demonstrably longer than simply paging the existing code in from disk. Ideally, pages are also hot and sharable with other processes. We get our best startup perf through ngen with tuned scenarios (locality of methods).
I have a lot of background material on JIT vs ngen and paging on my blog.
thank you very much. I honestly read about half of it. Though all of this complexity over easy memory management makes me just want to use new and delete.
Good idea; tracking down bit hoses from dangling pointers and double deletes is far superior <g>
Seriously, no memory allocation system is going to be a silver bullet. I do believe that automatic memory handling in general has been a huge win, and I wouldn't change the decisions we made on the product.
BTW, the Dispose pattern and the using keyword were created as a result of the discussion Brian's article generated.
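For readers who haven't seen the two together, a minimal sketch of how they combine (the file name is illustrative): using gives you deterministic cleanup of unmanaged resources on top of GC-managed memory.

```csharp
// Sketch of the Dispose pattern via the C# using keyword.
// 'using' guarantees Dispose runs even if an exception is thrown,
// so the file handle is released deterministically rather than
// whenever the GC happens to finalize the object.
using System;
using System.IO;

class Example
{
    static void Main()
    {
        using (StreamReader reader = new StreamReader("data.txt"))
        {
            Console.WriteLine(reader.ReadLine());
        } // reader.Dispose() is called here, closing the handle now.
    }
}
```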
1. The white cup on my desk says: Cup<T>, a clever idea that Anthony Moore, BCL dev lead, came up with for our Generics release.
2. On the left speaker is the "Checkin Cartman". During V1 we didn't have great checkin automation (the system rocks now). After a long tree freeze, we'd bring the tree back up for checkins by serializing per dev lead. Only the dev lead with the Cartman
could check in.