@PerfectPhase: Windows Server itself has some pretty cool file and block-level de-dupe features, so there are probably additional savings to be found there. Basically, everyone from the developer to the guy slotting servers into the rack needs to make their piece of the stack a little better, and the cumulative result is pretty big.
We use that on the scale-out file server cluster we have behind our Hyper-V testing cluster, and it works really well. It saves us a lot of space without needing a SAN to get the same effect.
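For anyone curious, turning it on is only a couple of cmdlets from the Data Deduplication feature. A rough sketch (the volume letter is just an example, and you'd want to read up on the usage types before copying this):

```powershell
# Requires the Data Deduplication role service (Windows Server 2012 and later).
Install-WindowsFeature -Name FS-Data-Deduplication

# Enable dedup on a data volume; the HyperV usage type tunes it for
# volumes holding running VHD/VHDX files. "E:" is an example volume.
Enable-DedupVolume -Volume "E:" -UsageType HyperV

# Kick off an optimization pass and check the savings afterwards.
Start-DedupJob -Volume "E:" -Type Optimization
Get-DedupStatus -Volume "E:"
```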
Again, it depends on your deployment model. You could bare-metal deploy/PXE boot your hosts, but I'd guess that most large deployments will be based on VMs, as that at least lets you put a hard(er) security boundary between multiple tenants without having to dedicate a full machine to each.
Interestingly, in the Project Centennial talk at Build it was mentioned that the AppX store in Windows 10 de-dupes files inside an AppX package, so if I have 100 copies of the same Newtonsoft.JSON DLL on my box, it will only use the disk space once.
I wonder if they'll apply the same technique to their container store.
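The underlying idea is just content-addressing: hash each file, and store any given payload only once. A toy sketch in Python (this is my own illustration of the concept, nothing to do with Microsoft's actual implementation):

```python
import hashlib

def dedup_store(files: dict, store: dict) -> dict:
    """Map each file name to the SHA-256 of its content, storing each
    unique payload in `store` only once. `files` is name -> content bytes;
    `store` is the shared content-addressed blob store."""
    manifest = {}
    for name, content in files.items():
        digest = hashlib.sha256(content).hexdigest()
        store.setdefault(digest, content)  # identical content is stored once
        manifest[name] = digest
    return manifest

# 100 packages each shipping an identical DLL payload...
blob = b"pretend these are the Newtonsoft.JSON bytes"
store = {}
manifest = dedup_store({f"app{i}/lib.dll": blob for i in range(100)}, store)
print(len(manifest), len(store))  # 100 names, but only 1 stored copy
```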
Need to be careful here,
C# is an ECMA/ISO standard.
Only the CoreCLR and CoreFX are truly open source (MIT).
Things like WPF are released under the MS-RSL http://referencesource.microsoft.com/license.html which, while source-available, isn't truly open source.
The desktop JIT is closed source, but uses the open source RyuJIT for its 64-bit jitter.
and so on....
I get that bashing Microsoft is a fun trend in these forums, but I really can't understand why letting small companies choose to deploy to server environments more reliably, cheaply and effectively is controversial. If you want to deploy to Windows Server 2016, you can now use containers. If you prefer not to, don't. Simple.
And that's the main point, isn't it: if Microsoft (or anyone) decides to implement something, it doesn't mean that you have to use it, or that it somehow devalues the other options.
A good engineer is aware of what's around and makes his choices based on information, research and environment; that doesn't mean those choices are going to be the same as everyone else's.
You can very easily get into splitting hairs on this. Fanbaby said containers run more efficiently, but did not qualify that.
A single container running on a single VM host OS will probably, by an unmeasurably small factor, run slower than if the app was deployed directly on the host OS:
Hypervisor -> Host OS -> Container -> app vs Hypervisor -> Host OS -> app
But running multiple containers on a single host OS makes far more efficient use of overall system resources than giving each app its own OS:
Hypervisor -> Host OS -> (container: app), (container: app), (container: app) vs Hypervisor -> (Guest OS -> app), (Guest OS -> app), (Guest OS -> app)
YMMV, it's just good to be aware of what the options are, even if you don't use them now.
All containers are derived from images, and images are built using a Dockerfile. Even if your IDE will magically automate building things for you, it's worth learning how things actually work, so I suspect prudent .NET developers will actually do things like read the Docker documentation or do other research if they intend to use containers. A developer who relies too much on magical thinking is going to have a bad time if things ever go wrong.
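And a Dockerfile really is just a short recipe. A minimal sketch of one for a Windows container (the base image is the Server Core image Microsoft publishes; the paths and `MyApp.exe` are made-up examples):

```dockerfile
# Base image; tag/version is up to what your host supports.
FROM microsoft/windowsservercore
# Copy the published application into the image (example paths).
COPY ./publish/ C:/app/
WORKDIR C:/app
# Process to run when the container starts (hypothetical app name).
ENTRYPOINT ["MyApp.exe"]
```

Then `docker build` turns that into an image, and `docker run` starts containers from it. Not much magic to demystify.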
Compared to understanding the problem domain of my application, working out how Docker/containers work is quite trivial, be that Windows or Linux.
Can't answer for you, but for me...
Because I use more of .NET than is included in the CoreCLR and CoreFX, which are still quite anaemic at this time.
Because that is what my customers use and are asking for.