I would like to know about your initiatives (if any) for sharing Windows Kernel source code and, why not, other subsystems (the .NET Framework, the managed and native compilers, Office itself, SQL Server).
What could be done if I, a dedicated Microsoft developer, wanted to build all these products in-house, of course for learning purposes ONLY, under STRICT NDA? Is there a chance to use your source code, obviously in good/mutual faith, from outside Redmond? I
know that some of your kernel people work remotely, but they are full-time Microsoft employees. What do you do to cover the (relatively small, yet important) segment of customers who have enough knowledge and skill to benefit directly from your source code,
beyond the regular benefits that come from consuming binaries? Do you have (strictly NDA-ed) private repositories where developers can go to branch, check out, build, understand, extend, and check back in Microsoft core source code? The same question about
special access to your bug-tracking systems, and about white-box (or at least gray-box) testing of your products, extending your unit testing and automation, proposing fixes, etc. My question targets kernel and compiler engineers in the first place.
Do you have a program that states the precise conditions an individual software engineer or an organization has to meet in order to be allowed to see, build from, and extend the source code of Microsoft core technologies?
I'd really appreciate your feedback, either way. Thanks!
For educational purposes you can download the
Windows Research Kernel, which gives you source for a good part of the Server 2003 kernel. You can build this with Phoenix if you want to experiment with OS/compiler interactions (though if you do so in the near future, I recommend using the VS2005-based
June 2007 SDK, since the newer April 2008 SDK exposes some source issues in the WRK code). And the
SSCLI will give you a pretty clear idea of what a good part of the .NET 2.0 code looks like.
As far as building Phoenix yourself, or building other Microsoft products, goes: from what I know it's really up to each product group to decide if/when/how to provide access to source. I don't know whether there is an umbrella program that applies to the entire company,
but I suspect there isn't any such thing.
As is typical with such things, there are many factors and stakeholders involved. For example, we have thought about including portions of the compiler test suite with the SDK, but a good number of test cases have been licensed from third parties or are based
on code given to us with other sorts of restrictions. Just sorting out what we can give out is a major effort.
Well, the initial perception of Phoenix was that it was going to be a common backend for both register-based and stack-based frontends, which, once plugged into different frontends, was supposed to allow both managed and native targets
for the entire language family under the generous umbrella of the MSVC linker. A few practical examples:
1. Generate 100% native (register-based) code from C# and VB (finally allowing software-in-a-box / commercial software manufacturers to use languages beyond C++), and even allow the .NET Framework itself to have a native (register-based) incarnation.
2. Generate hybrid applications from any language in the family, which would allow the natural evolution of existing native and managed applications for any of those languages (today only MSVC++ allows that).
3. Allow third-party compiler manufacturers to feed their frontends' results to a "standard" backend, with clear dual intent, without having to worry about backends, linkers, etc.
Now, I happen to know that this is not a walk in the park. It never was; that's why Phoenix was born in MS Research. Still, I see obvious (long-term strategic) advantages in implementing the initial plan, the first beneficiaries being:
1. The Microsoft compiler teams
2. Third-party compiler manufacturers (Borland, RemObjects, etc.)
3. Last but not least, the customers: compiler end-users and developers all over.
I can only hope that the static analysis framework is only a first step in this direction and that Visual Studio 2010 will include some of these ideals, empowering the Microsoft platform and developer tools experiences even more!
Best Regards, Daniel
I'm glad to see that some of our vision resonates with you. There's a lot I would like to say about where we are headed and what might be possible in future product releases, but I'm going to have to leave things up in the air for now. Let's just say that there
are a lot of cool things that can be done -- which ones of those happen and when is still being sorted out.
AndyA wrote: I've always wanted to do an IL-to-IL optimizer. While the jit does a good job it doesn't have time for in-depth analysis, and there are a number of things you can do upstream to boost performance.
And that's what all optimization freaks are craving: to get conceptual simplicity whilst not sacrificing too much performance [declarative programmer, imperative compiler].
I haven't studied IL and IR, but would the IR actually be a better IL? It sounds like it could be.
If IL is at a lower level than IR, and an IL-to-IL optimizer has to abstract back up to IR, then wouldn't it be better to just stay with IR, depending on the effort required to go IR->IL?
I wonder if the TCPA could be used to secure highly optimized snapshots of compiled code [in encrypted files] so the JITter could effectively be relieved of a lot of up-front work. Of course there's NGen, which might do some optimizations up front.
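To make the IL-to-IL idea being discussed here concrete, here is a tiny sketch of the kind of upstream rewrite such a tool could do ahead of the jit. The opcode names and the list-of-tuples encoding are made up for illustration; this is not real MSIL or any actual Phoenix API.

```python
# Toy stack-based "IL" peephole optimizer: repeatedly folds the
# pattern  ldc x, ldc y, add  into a single  ldc (x + y),  so the
# jit has less work to do at run time. Illustrative only.

def fold_constants(il):
    """One left-to-right pass of constant folding over (opcode, operand) pairs."""
    out = []
    for instr in il:
        # Pattern: ldc x, ldc y, add  ->  ldc (x + y)
        if (instr == ("add", None) and len(out) >= 2
                and out[-1][0] == "ldc" and out[-2][0] == "ldc"):
            y = out.pop()[1]
            x = out.pop()[1]
            out.append(("ldc", x + y))
        else:
            out.append(instr)
    return out

# 2 + 3 + 4 folds all the way down in a single pass, because each
# fold leaves a fresh ldc on top for the next add to combine with.
method = [("ldc", 2), ("ldc", 3), ("add", None),
          ("ldc", 4), ("add", None), ("ret", None)]
print(fold_constants(method))  # [('ldc', 9), ('ret', None)]
```

A real IL-to-IL tool would of course rewrite whole method bodies with control flow, but the shape is the same: pattern-match, replace, and hand the jit simpler code.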
AndyA wrote: As far as what all those cores will be doing -- I expect we will find good ways to employ them to directly address user problems. Phoenix itself can profitably use 6-8 cores, and with a bit more work we should be able to scale even higher.
AndyA wrote: It may be that the world of code will be more dynamic in the future, but I thought that 10 years ago, when I worked on a big static compiler, and things haven't really changed that much.
I didn't mean in the sense of dynamic languages (necessarily), more in the sense of JIT'd bytecode.
MSIL/CIL/IL is the way it is for a couple of reasons -- it has a compact encoding, it is relatively straightforward to translate to machine code, and its semantics are carefully specified so that verification is possible. Phoenix IR has a much different set
of design parameters and so has different attributes: a representation that is somewhat redundant but can represent rich relationships among the instructions in a program, the flexibility to represent programs at several different semantic levels (e.g. HIR, LIR),
and the expressive power to describe most of the popular machine architectures.
A bytecode that has some of the attributes of our IR makes a lot of sense -- the ability to specify a register set, the ability to annotate the IR with useful derived facts (perhaps, as you note, provably correct ones), the ability to mix semantic levels (as
Phoenix IR allows LIR/HIR islands in HIR/LIR). If you can have all that and retain the benefits of MSIL then it would be something really interesting.
And I'm with you on that last comment -- around 1998 or so I was convinced jitted code was going to take over the world.
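The stack-based versus register-based distinction in the exchange above can be shown in miniature. The sketch below writes (a + b) * c both ways and mechanically converts one to the other; the opcodes and the three-address syntax are invented for illustration and are not actual MSIL or Phoenix IR.

```python
# The same expression, (a + b) * c, as stack-based "IL" ...
stack_il = ["ldloc a", "ldloc b", "add", "ldloc c", "mul"]

def to_register_ir(il):
    """Abstractly interpret the stack, naming each intermediate value
    with a virtual register -- the usual first step when translating a
    stack bytecode into a register-based IR."""
    stack, ir, temp = [], [], 0
    for instr in il:
        op, *arg = instr.split()
        if op == "ldloc":
            stack.append(arg[0])          # a local's value goes on the stack
        else:                             # binary op: pop two, define a temp
            rhs, lhs = stack.pop(), stack.pop()
            temp += 1
            dst = f"t{temp}"
            ir.append(f"{dst} = {op} {lhs}, {rhs}")
            stack.append(dst)
    return ir

# ... and as register-based three-address "IR":
for line in to_register_ir(stack_il):
    print(line)
# t1 = add a, b
# t2 = mul t1, c
```

The register form is more verbose (every value gets a name), which is exactly the "somewhat redundant" property mentioned above: the redundancy is what lets an IR record rich relationships between instructions.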
Having processor makers produce plugins for Phoenix sounds quite compelling, for Phoenix itself, Microsoft, the processor makers and the users.
And more static IL optimizations would be great. On the other hand, in the parallel world of the future there should be enough cores to continuously GC, profile, and analyze code, so one wonders how much Phoenix can adapt to dynamic compilation.
Anyway, cool stuff.
I've always wanted to do an IL-to-IL optimizer. While the jit does a good job it doesn't have time for in-depth analysis, and there are a number of things you can do upstream to boost performance.
As far as what all those cores will be doing -- I expect we will find good ways to employ them to directly address user problems. Phoenix itself can profitably use 6-8 cores, and with a bit more work we should be able to scale even higher. It may be that the
world of code will be more dynamic in the future, but I thought that 10 years ago, when I worked on a big static compiler, and things haven't really changed that much.
Raising an executable to IR sounds very interesting. Presumably this could let you migrate executables to another CPU architecture (much like Rosetta on the Mac) if that ever became an issue for Windows.
Binary translation is certainly doable, and the machine-specific part of Phoenix is extensible (though this is not yet as easy to do as we would like it to be). Sounds like a fun project for somebody to try...
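The raise-then-re-emit flow described above can be reduced to its skeleton: lift each instruction of the source machine into a neutral IR, then lower the IR for the target machine. Both instruction sets and the table-driven mapping below are entirely made up; a real binary translator also has to decode bytes, discover code, and fix up addresses.

```python
# Toy binary translation sketch: made-up source ISA -> neutral IR ->
# made-up target ISA, as two table lookups.

LIFT = {  # source opcode -> IR opcode (the "raise to IR" step)
    "ADDL": "add", "SUBL": "sub", "MOVL": "copy",
}
EMIT = {  # IR opcode -> target opcode (the "lower for new CPU" step)
    "add": "add.w", "sub": "sub.w", "copy": "mov",
}

def translate(source):
    """source: list of (opcode, operands) for the old architecture."""
    ir = [(LIFT[op], args) for op, args in source]    # raise
    return [(EMIT[op], args) for op, args in ir]      # re-emit

prog = [("MOVL", ("r1", "r2")), ("ADDL", ("r1", "r3"))]
print(translate(prog))  # [('mov', ('r1', 'r2')), ('add.w', ('r1', 'r3'))]
```

The point of the intermediate IR step is that adding a new source or target architecture means writing one new table (or Phoenix plugin), not one translator per machine pair.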
GamlerHart wrote: This "show me the path of the variable setting" feature is REALLY cool. How many times have you debugged the same small function because you got a different value than expected? With this feature you can quickly see it without re-running the function. You get an overview
of how you arrived at the state you're in now.
Sadly, I'm not really in the C++/native world, but in the Java/managed world.
I like that too. And the ability to step backwards and reverse the state would be so cool.
Looking forward to the new compiler in the product. Thanks much, folks.
BTW - a small correction: the managed class is "StringBuilder" (not StringBuffer).
Reverse debugging is indeed really cool, but there's a sizeable gap between what we can do with slicing and actually being able to run the program backwards. Still, the idea is compelling...
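The slicing half of that gap is easy to sketch. Given a recorded trace of which statements defined and used which variables, a backward slice keeps just the statements that contributed to a variable's final value -- the "how did I land in this state" view from the comment above. The trace encoding here is a toy of my own invention, not anything Phoenix actually exposes.

```python
# Dynamic backward slice over a recorded execution trace.

def backward_slice(trace, target):
    """trace: list of (defined_var, used_vars) in execution order.
    Returns the indices of statements that influenced target's final value."""
    wanted, sliced = {target}, []
    for i, (defined, used) in reversed(list(enumerate(trace))):
        if defined in wanted:
            sliced.append(i)
            wanted.discard(defined)   # this definition is now explained...
            wanted.update(used)       # ...but its inputs still need explaining
    return sorted(sliced)

# Trace of: x = 1; y = 2; z = x + 1; y = 9; x = z * 2
trace = [("x", []), ("y", []), ("z", ["x"]), ("y", []), ("x", ["z"])]
print(backward_slice(trace, "x"))  # [0, 2, 4] -- the y assignments never mattered
```

Actually running the program backwards is the hard part: you would also need to restore the overwritten values at each step, which is where the "sizeable gap" comes in.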
We should be disclosing release plans sometime in the not-too-distant future. But you don't have to wait until then to have fun with Phoenix -- download the SDK and you'll get a version of Phoenix that plugs right into the VS2008 C++ toolchain.