@buenos: We only support 14.04 in this version. It's HIGHLY recommended that you do NOT upgrade to 16.04 at this time: We simply haven't had time to work on improving our WSL implementation to support the new changes in 16.04 yet.
Sr. Program Manager at Microsoft, making the Windows Command-Line cool again, and changing the world by bringing Bash & Linux command-line tools to Windows.
How did you get fork(2) (or clone(2)) working well enough to support Linux binaries?
By implementing a new "PicoProcess" infrastructure within the Windows kernel that allows us to create secure, lightweight PicoProcesses lightning quick!
So, if you're in a mood to share details, I would like to know how you implemented fork(2) and clone(2) under the NT process model (which, due to CreateProcess(), mandates that no process will ever have access to any state of its parent as an implicit operation), how the memory management works, whether it properly implements shared/copy-on-write pages, the whole works.
Yes, our fork() implements true copy-on-write semantics. You think we'd half-bake this thing? ;)
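Fork's copy semantics are visible even from a shell prompt: Bash creates subshells via fork(2), so a change made inside a subshell never leaks back into the parent. This is a quick sketch of the behavior a copy-on-write fork() has to reproduce (illustrative only, not WSL internals):

```shell
# Bash spawns subshells with fork(2); the child gets its own
# (copy-on-write) view of the parent's memory, so mutations made in
# the subshell never appear in the parent.
x="parent"
( x="child"; echo "in subshell:   $x" )
echo "after subshell: $x"
# in subshell:   child
# after subshell: parent
```

The parentheses run their contents in a forked child, so the reassignment of `x` is confined to the child's copy of the environment.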
I'd beg you to open-source the kernel driver that implements the Subsystem for Linux
No plans to do so at this time, but we hear this request frequently.
I'm sorry that I can't choke down the bitterness, here. Microsoft has ill-served the communities that have wanted to migrate from other platforms for a long time, and it continues to do so. It has dropped technologies and left its developers in the lurch many times over its history, and it continues to do so by abandoning the support of those technologies without open-sourcing them. I wish that I could have the benefits of Windows (like real ACLs on processes instead of relying on POSIX abandoned-draft permissions, or service logins that are sane instead of relying on systemd or its predecessor of init scripts, or a more solid driver architecture) as incremental improvements to what Linux has created thus far
... but I'm too afraid of Microsoft's old "embrace, extend, abandon" approach to ever be fully comfortable with it.
In case you've not noticed, Microsoft today operates almost entirely differently to Microsoft of old, and is run by entirely different people.
So, another "Interix" or "POSIX for Windows services". A bit annoying to see you two pretend the world has never seen this technology, aping over it like it's the best thing since sliced bread.
No, this is not like POSIX/Interix/SFU - they didn't run unmodified *NIX binaries - they required apps to be built from source. This is often an issue when one has a system but not all the code or the configuration to build it.
Anyway, cool that MS is getting it back. It would be even nicer if they didn't focus on a certain distro and were instead more generic, so that users themselves can configure the system in a more fine-grained way (for example, I prefer Arch :)).
WSL has been built to be distro-agnostic, but we had to start somewhere, so we chose one of the dev community's most popular distros, Ubuntu. We are open to exploring other distros in the future, once we've got a really solid base and a way to support multiple syscall abstractions, etc.
Anyway, most *nix userspace applications these days exist natively for Windows, so I see no real advantage to running ELF binaries instead of native Windows binaries when the latter exist. However, in cases where they don't exist, being able to run Linux binaries might be a positive thing.
We're deliberately focusing on developer scenarios because this is a major pain point: How do you build and test Ruby/node/Java/etc. code and/or packages that depend on Linux binaries and behaviors? And, as you recognize, how do we support Linux tools that aren't available on Windows?
While there are plenty of advantages to enabling your project/library/tool to work well on Windows as well as *NIX, we simply can't expect all *NIX code to be ported to directly support Windows.
@Paolo: As I stated above:
POSIX / Interix / SFU required code to be rebuilt locally in order to run. This was a challenge for systems where the source is unavailable and/or no longer builds.
Similarly, Cygwin's tools are compiled to Win32 executables, and so integrate well with the rest of Windows, but Cygwin often struggles to sufficiently mimic Linux's behavior, resulting in many OSS developer tools and projects, particularly those with hard dependencies on Linux behaviors or binaries, failing to work correctly on Windows.
This is where WSL provides value: because WSL natively runs unmodified Linux ELF binaries, within an environment that behaves just like a Linux user-mode environment, many projects that currently fail to build and/or run on Windows work as expected when run on Bash on WSL.
@Karl Botts: Cygwin is a great toolset that, because the tools are compiled to Win32 executables, integrates well with the rest of Windows.
However, Cygwin often struggles to sufficiently mimic Linux's behavior, resulting in many OSS developer tools and projects, particularly those with hard dependencies on Linux behaviors or binaries, failing to work correctly on Windows.
This is where WSL provides value: because WSL runs unmodified Linux ELF binaries natively, within an environment that behaves just like a Linux user-mode environment, many projects that currently fail to build and/or run on Windows work as expected when run on Bash on WSL.
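One quick way to see this for yourself from a Bash on WSL prompt (or any Linux shell): inspect the binary format of a standard tool such as /bin/ls, which under WSL is a genuine Linux ELF executable rather than a recompiled Win32 port.

```shell
# Every ELF file begins with the 4-byte magic: 0x7f 'E' 'L' 'F'.
# Dump the first four bytes of /bin/ls to confirm it's a real Linux
# ELF binary (od -c shows 0x7f as octal 177):
head -c 4 /bin/ls | od -An -c
```

On WSL this prints the ELF magic bytes, whereas a native Windows .exe would begin with the "MZ" DOS header instead.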
PowerShell is a fabulously powerful command-line shell and toolset. It is capable of doing things that Bash is just not able to because PowerShell tools and commandlets can exchange collections and graphs of objects rather than serializing text. Also, PowerShell integrates deeply with Windows platforms and technologies, making it a very powerful tool for administering and configuring most things in the Microsoft platform & cloud ecosystem.
While PowerShell's scripting language is a little different from Bash script, it's not a million miles away - learning PowerShell doesn't take long if you already know Bash. Conversely, PowerShell users often miss some of PowerShell's capabilities when moving to Bash.
@BitCrazed: The problem with piping objects is that it's MUCH harder to figure out why things aren't working. A systems engineer and I recently tried activating 10,000+ licenses on Office 365 (following Microsoft's guide to the letter) using PowerShell, and it didn't work. It took WAY too much time to figure out how to get this to function.
I never have that much trouble with bash since it supports non-object-oriented linear scripts which I find tremendously easier to sequentially step through and debug. With any *nix shell I can see the output of every command every single step of the way. Of course this could be due to the fact that I have a background in *nix systems administration and extensive use of regular expressions and not "programming".
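That step-by-step transparency is easy to illustrate: every stage of a text pipeline can be run on its own (or tapped with `tee`) to see exactly what the next stage receives. A small sketch with made-up data:

```shell
# Each stage can be tested in isolation -- run the pipeline up to any
# '|' to see exactly what the next stage will receive as input.
printf 'alice 3\nbob 7\ncarol 5\n' |
  awk '$2 > 4 { print $1 }' |   # keep names whose count exceeds 4
  sort                          # then sort the surviving names
# bob
# carol
```

Because every intermediate result is plain text, debugging is just a matter of truncating the pipeline at the stage that looks wrong.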
Yes, this is very likely to be the "gap" for you: PowerShell script is a little different to Bash script. They're not a million miles apart syntactically, but there is a gap which is easily bridged by a little research, learning and practice.
This will be the same for PowerShell users who may need to learn Bash - they'll miss much of the power of exchanging objects between tools/commands, and will face an increased reliance on sed, awk, etc., to transform streams of text between tools.
Certainly, for larger projects object-oriented is the way to go, but for a simple (extract info) & (execute command using info) task it always seems to require more code, make things more complicated, and be harder to debug.
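In Bash, that "(extract info) & (execute command using info)" shape stays a couple of short, independently testable lines via command substitution. A minimal sketch with made-up data:

```shell
# Extract a value with one pipeline, then feed it to the next command.
biggest=$(printf '3\n9\n5\n' | sort -n | tail -n 1)  # extract: "9"
echo "largest value: $biggest"                       # execute using it
# largest value: 9
```

The extraction step can be run by itself at the prompt to verify its output before the second command ever uses it.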
Another thing you'll find with PowerShell is that it appears more syntactically verbose. However, what you learn over time is that the full parameter names can often be abbreviated to just one or two characters. Also, longer command names can be aliased to abbreviated versions of your own choosing. Over time, most PowerShell users end up using highly abbreviated commands and parameters when typing commands interactively, but usually write scripts using full command and parameter names so that it's easier for others to understand what the script is doing without having to learn the original author's collection of abbreviations etc.