and how exactly is implementing your file_read->process->print algorithm supposed to deliver a better performance factor, multi-threaded versus multi-process? leaving aside the mechanical conditioning of any device (which, btw, is a comparison constant
in this case) and the hdd cache and the io cache, the only potential performance improvement is simply probabilistic... i understand that the intent (given the average target audience) was to provide a super simplified example (something like scholars learn
in regular phd programs), but i think that mr. cutler would have answered this question in a different way (if ever asked to), and the answer is...
when you create a process, no matter how good you are, you still have to read the task image (in our case, let's say an exe, talking about the old "hellow.tsk;1") - and reading from secondary storage cannot be faster than secondary storage (hdds) allows.
which means creating processes is naturally slower than creating threads. that was the basic idea and the main reason the thread switching model was adopted. now, there are side effects - deadlocks, race conditions, etc - but i still think it's worth it!
that's why modern unices (and the posix standard itself) specify threads. fibers are another story, but mostly, the performance boost comes from being able to naturally cache (implicitly discard) unnecessary secondary storage reads. a kernel architect
knows that, regardless of the kernel.
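the creation-cost gap above can actually be measured; here's a minimal python sketch (my own illustration, not from the video) that times thread creation against process creation - the process path has to set up a fresh address space (and, on first launch, fault the image in from disk), so it should come out noticeably slower:

```python
import time
import threading
import multiprocessing

def trivial():
    pass  # no real work: we only measure creation + join overhead

def time_workers(factory, n=50):
    """Create, start, and join n workers via factory; return elapsed seconds."""
    start = time.perf_counter()
    workers = [factory(target=trivial) for _ in range(n)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    # On a typical machine the process figure is one or two orders of
    # magnitude larger than the thread figure.
    print(f"50 threads:   {time_workers(threading.Thread):.4f}s")
    print(f"50 processes: {time_workers(multiprocessing.Process):.4f}s")
```

the exact ratio depends on the os and on whether the image is already cached, but the direction of the result is the point being argued above.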
now, what is more interesting is the (approximate) statement "when it comes to symmetrical processing we do play with priorities", which suggests that the scheduler might be hardcoded somehow (have a huge special-cases global "symbol" table of some kind) - well, the video
is great (i wish you hadn't "revised" the third episode) and it would be even greater if low level engineers (from production environments) could access kernel source code, at least in fragments, under strict nda!
although it's great to expose your kernel at least to students, it makes very little sense to limit kernel visibility to schools. a lot of (actually the BEST) engineers, even if they are not working at microsoft (but for microsoft, indirectly), would take huge constructive
profit from being able to see all kernel objects (and algorithms) in source code (it doesn't even need to be actually buildable).
so, if the intent is to be customer centric, trying to provide each customer the best of the best of the best, you should create a new program, where a certain (low level enough) engineer can say "hey, this is who i am, this is what i do, i think i need to see the
kernel, under strict nda, send me the source code for, let's say, the scheduler".
i am super confident that (no matter how strange it may sound today) this will happen tomorrow, hopefully in our lifetimes!
I would like to know about your initiatives (if any) of sharing Windows Kernel source code and why not, other subsystems (.net framework, managed and native compilers, office itself, sql server).
What can be done if I, a dedicated Microsoft developer, wanted to build all these products in-house, of course for cognitive purposes ONLY, under STRICT NDA? Is there a chance to use your source code, obviously in good/mutual faith, from outside Redmond? I
know that some of your kernel guys work remotely, but they are full-time Microsoft employees. What do you do to cover the (relatively small, yet important) segment of customers who have enough knowledge and skill to benefit directly from your source code, obviously
beyond the regular benefits that come from consuming binaries? Do you have (strictly NDA-ed) private repositories where developers can go to branch, check out, build, understand, extend, and check back in Microsoft core source code? Same question about
special access to your bug tracking systems. Same question about white-box (or at least gray-box) testing of your products, extending your unit testing and automation, proposing fixes, etc - my question targets kernel and compiler engineers in the first place.
Do you have a program which states the precise conditions an individual Software Engineer or an Organization has to meet in order to be allowed to see, build from, and extend the source code of Microsoft core technologies?
I'd really appreciate your feedback, either way. Thanks!
Well, the initial perception of Phoenix was that it was going to be a common backend for both register-based and stack-based frontends which, once plugged into the different frontends, was supposed to allow both managed and native targets
for the entire language family under the generous umbrella of the MSVC linker. A few practical examples:
1. Generate 100% native (register-based) code from C# and VB (finally allowing software-in-a-box / commercial software manufacturers to use languages beyond C++) - even allow the .NET Framework itself to have a native (register-based) incarnation.
2. Generate hybrid applications from any language in the family - which would allow the natural evolution of existing native and managed applications for every language in the family (today only MSVC++ allows that).
3. Allow 3rd-party compiler manufacturers to feed their frontends' results into a "standard" backend, with clear dual intent, without having to worry about backends, linkers, etc.
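The common-backend idea in the list above can be sketched roughly as follows; everything here (the class names, the toy one-instruction IR) is my own hypothetical illustration of the architecture, not the actual Phoenix API:

```python
# Toy sketch of the frontend/backend split: frontends lower source into one
# shared IR, and interchangeable backends consume that IR. All names are
# hypothetical; a real compiler IR is far richer than this.

from dataclasses import dataclass

@dataclass
class IRAdd:
    """One shared IR instruction: dst = lhs + rhs."""
    dst: str
    lhs: str
    rhs: str

def csharp_frontend(expr):
    """Hypothetical C#-style frontend: lowers 'a + b' into the shared IR."""
    lhs, _, rhs = expr.partition("+")
    return [IRAdd("t0", lhs.strip(), rhs.strip())]

def native_backend(ir):
    """Hypothetical register-based backend: emits pseudo-x86 per instruction."""
    return [f"mov eax, {i.lhs}; add eax, {i.rhs}" for i in ir]

def managed_backend(ir):
    """Hypothetical stack-based backend: emits pseudo-MSIL per instruction."""
    return [f"ldloc {i.lhs}; ldloc {i.rhs}; add" for i in ir]

ir = csharp_frontend("a + b")
print(native_backend(ir)[0])   # same IR lowered to a register-based target
print(managed_backend(ir)[0])  # same IR lowered to a stack-based target
```

The point of the sketch is that the frontend runs once and the target choice (native versus managed) is made entirely behind the shared IR, which is exactly what would let C# or VB emit register-based code without touching the frontend.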
Now, I do happen to know that this is not a walk in the park. It never was - that's why Phoenix was born in MS Research - but still, I see obvious (long-term strategic) advantages in implementing the initial plan, the first beneficiaries being:
1. The Microsoft compiler teams. 2. 3rd-party compiler manufacturers (Borland, RemObjects, etc.). 3. Last but not least, the Customers - compiler end-users, developers all over.
I can only hope that the static analysis framework is only a first step in this direction and that Visual Studio 2010 will include some of these ideas, empowering the Microsoft Platform and Developer Tools experiences even more!