
Windows, Part II - Dave Probert

Download

Right click “Save as…”

In this second part of the video with Windows' kernel architect Dave Probert, we get a little tour of multi-threading and how Windows works at a deep level.


Follow the Discussion

  • Taskerr: This one's a gem!

    What Dave brings out in this piece reminds me of something that has sadly disappeared from modern computing by way of evolution: in those early days of the '70s and '80s, you were more likely to be an electronics engineer than a programmer. Chicken and egg: which came first, the hardware or the software?

    One can sense the potential of multi-core CPUs, but as Dave points out, in many senses they have been around for a while at the machine level. The challenge for contemporary programmers is to exploit this new technology, and it's not going to be easy.

    Dave has a lot more control at the kernel level than, say, a C# programmer has with a real-time thread. So do we end up with an NT5-style programmer's kernel sitting atop the machine kernel simply to control such threads?

    In my experience, real-time threads are of limited use outside of machine control and mechanical automation, e.g. CNC lathes. The OS systems that control these machines are very similar in philosophy to what is being done in NT5. The only real difference is that real time means literally that: a tool tip must arrive at its destination at the right time, or crunch.
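
    As a footnote to the point above: a minimal Win32 sketch (an illustration, not anything shown in the video) of the priority knobs a user-mode programmer actually gets. Even THREAD_PRIORITY_TIME_CRITICAL and REALTIME_PRIORITY_CLASS only bias the NT scheduler; they are not the hard-deadline guarantee a CNC controller needs.

        /* Raise scheduling priority on Windows via the documented Win32 calls.
           This biases the scheduler toward this thread; it does NOT make the
           thread "real time" in the hard-deadline sense discussed above. */
        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            /* HIGH_PRIORITY_CLASS is the highest class that does not risk
               starving the system; REALTIME_PRIORITY_CLASS exists but needs
               elevated privileges and can wedge the machine if misused. */
            if (!SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS))
                fprintf(stderr, "SetPriorityClass failed: %lu\n", GetLastError());

            if (!SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL))
                fprintf(stderr, "SetThreadPriority failed: %lu\n", GetLastError());

            printf("thread priority is now %d\n",
                   GetThreadPriority(GetCurrentThread()));
            return 0;
        }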

  • Good info...

    Hope there are parts 5 and 6 on this. This guy is very interesting...

  • Chris Pietschmann (CRPietschmann)

    I love this stuff!! I can't wait to hear more from Dave!

    This type of low-level stuff fascinates me. I guess that's why I'm going back to school for Computer Engineering.

  • I'm wondering why Dave praises the NT kernel so much. For things like multitasking and memory management, Unix seems superior, at least in my own experience. On my XP box, the simple act of copying a file pretty much turns the box into a paperweight, but on Unix I can have multiple processes running and the GUI or app responds right away. On XP I have to wait, and the GUI or app gets bogged down until the previous process is done. Also, on Unix I almost never touch swap, so everything responds quickly because it is in real memory. XP seems to be VM-happy, and I'm constantly waiting on an app to page-fault back into real memory. This is especially true if I switch from one app to another and then back to the original app. I don't understand this, because I have more than enough memory for both, but XP still puts the first app into the pagefile and then page-faults it back in when I switch back?
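
    One concrete mechanism behind the behavior described above: NT-family Windows trims the working set of idle or minimized applications, so switching back often page-faults those pages in again. A minimal sketch (using the documented Win32 working-set calls; the size values are arbitrary) of how a process can inspect and hint its own working-set limits:

        /* Inspect and adjust this process's working-set bounds. Raising the
           minimum is only a hint that makes the memory manager less eager to
           trim these pages; it is not a hard guarantee against paging. */
        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            SIZE_T minWs = 0, maxWs = 0;
            HANDLE self = GetCurrentProcess();

            if (GetProcessWorkingSetSize(self, &minWs, &maxWs))
                printf("working set: min %lu bytes, max %lu bytes\n",
                       (unsigned long)minWs, (unsigned long)maxWs);

            /* Arbitrary example bounds: 8 MB minimum, 32 MB maximum. */
            if (!SetProcessWorkingSetSize(self, 8UL * 1024 * 1024,
                                                32UL * 1024 * 1024))
                fprintf(stderr, "SetProcessWorkingSetSize failed: %lu\n",
                        GetLastError());
            return 0;
        }
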
  • Taskerr wrote:
    In my experience, real-time threads are of limited use outside of machine control and mechanical automation, e.g. CNC lathes.


    I would have to disagree, Taskerr. From examples such as QNX, I think we can see that real time not only has a real benefit for user-interaction latency (essentially the machine never, ever "feels" slow), but it also ends up being good for other things like IO, because a given task cannot bind up other tasks if they are prioritized correctly. True, you may lose throughput, but for an end-user system (and even a server system, I would argue) the marginal loss of throughput is well worth the response time. You can buy more throughput; can you buy more of your own time?
  • billh: call -141

    Great video. This brings back memories of the days when I was poring over manuals and tinkering with different sector interleaving schemes on the Apple II. Same timing/latency issues back then: waiting for the information to pass under the read/write head.
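
    The interleaving trade-off billh mentions can be put into numbers. A back-of-the-envelope sketch (the 300 RPM and 16-sectors-per-track figures match the Apple Disk II; the 20 ms per-sector software cost is an assumption, and head movement and settling are ignored):

        /* Model a full-track read at different interleave factors. If the
           software cannot finish one sector before the next logical sector
           passes under the head, it must wait a whole extra revolution. */
        #include <stdio.h>

        int main(void)
        {
            const double rev_ms = 200.0;               /* one rev at 300 RPM */
            const int    sectors = 16;                 /* per track */
            const double sector_ms = rev_ms / sectors; /* 12.5 ms each */
            const double process_ms = 20.0;            /* assumed software cost */

            for (int il = 1; il <= 8; il++) {
                /* Physical gap between logically consecutive sectors. */
                double gap_ms = il * sector_ms;
                /* Miss the window and the sector comes around one rev later. */
                double step_ms = (gap_ms >= process_ms) ? gap_ms
                                                        : gap_ms + rev_ms;
                printf("interleave %d: ~%6.1f ms per track\n",
                       il, sectors * step_ms);
            }
            return 0;
        }

    With these assumed numbers, a 1:1 interleave takes roughly 3400 ms per track while 2:1 takes about 400 ms, which is exactly the kind of gain those interleaving schemes were tuned for.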

  • And how exactly is implementing your file_read->process->print algorithm supposed to yield a better performance factor, multi-threaded versus multi-process? Leaving aside the mechanical conditioning of any device (which, by the way, is a comparison constant in this case) and the HDD cache and the IO cache, the only potential performance improvement is simply probabilistic... I understand that the intent (given the average target audience) was to provide a super-simplified example (something like scholars learn in regular PhD programs), but I think Mr. Cutler would have answered this question in a different way (if ever asked to), and the answer is...

    When you create a process, no matter how good you are, you still have to read from the task image (in our case, let's say an exe; think of the old "hellow.tsk;1" :) ), and reading from secondary storage cannot be faster than secondary storage (HDDs) allows. That means creating processes is naturally slower than creating threads; this was the basic idea and the main reason the thread-switching model has been used (a timing sketch of this thread-versus-process cost follows this comment). Now, there are side effects, deadlocks, race conditions, etc., but I still think it's worth it! That's why modern Unices (and the POSIX standard itself) mention threads. Fibers are another story, but mostly the performance boost comes from being able to naturally cache (implicitly discard) unnecessary secondary-storage reads. A kernel architect knows that, regardless of the kernel.

    Now, what is more interesting is the (approximate) statement "when it comes to symmetrical processing we do play with priorities", which means the scheduler might be hardcoded somehow (have a huge special-cases global "symbol" table of some kind). Well, the video is great (I wish you hadn't "revised" the third episode), and it would be even greater if low-level engineers (from production environments) could access kernel source code, at least in fragments, under strict NDA!

    While it's great to expose your kernel at least to students, it makes very little sense to limit kernel visibility to schools. A lot of (actually, the BEST) engineers, even those not working at Microsoft (but for Microsoft, indirectly), would take huge constructive profit from being able to see all kernel objects (and algorithms) in source code (it doesn't even need to be actually buildable).

    So, if the intent is to be customer-centric, trying to provide each customer the best of the best of the best, you should create a new program where a certain (low-level enough) engineer can say "hey, this is who I am, this is what I do, I think I need to see the kernel; under strict NDA, send me the source code for, let's say, the scheduler".

    I am super confident that (no matter how strange it may sound today) this will happen tomorrow, hopefully in our lifetimes! :)

    best regards,
    d
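
    The thread-versus-process creation cost described in this comment is easy to measure. A minimal Win32 sketch (the spawned process "cmd.exe /c exit" is an arbitrary choice; absolute numbers will vary by machine and cache state):

        /* Time creating-and-joining one thread versus one process. A thread
           reuses the current address space; a process makes the loader map a
           fresh executable image, which is why it is inherently slower. */
        #include <windows.h>
        #include <stdio.h>

        static DWORD WINAPI noop(LPVOID arg) { (void)arg; return 0; }

        static double ms(LARGE_INTEGER a, LARGE_INTEGER b, LARGE_INTEGER f)
        {
            return 1000.0 * (double)(b.QuadPart - a.QuadPart) / (double)f.QuadPart;
        }

        int main(void)
        {
            LARGE_INTEGER freq, t0, t1;
            QueryPerformanceFrequency(&freq);

            /* Thread: no image load, same address space. */
            QueryPerformanceCounter(&t0);
            HANDLE th = CreateThread(NULL, 0, noop, NULL, 0, NULL);
            WaitForSingleObject(th, INFINITE);
            QueryPerformanceCounter(&t1);
            CloseHandle(th);
            printf("thread:  %.3f ms\n", ms(t0, t1, freq));

            /* Process: the loader must read and map an executable image. */
            char cmd[] = "cmd.exe /c exit";
            STARTUPINFOA si = { sizeof(si) };
            PROCESS_INFORMATION pi;
            QueryPerformanceCounter(&t0);
            if (CreateProcessA(NULL, cmd, NULL, NULL, FALSE,
                               CREATE_NO_WINDOW, NULL, NULL, &si, &pi)) {
                WaitForSingleObject(pi.hProcess, INFINITE);
                QueryPerformanceCounter(&t1);
                CloseHandle(pi.hThread);
                CloseHandle(pi.hProcess);
                printf("process: %.3f ms\n", ms(t0, t1, freq));
            }
            return 0;
        }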


Comments Closed

Comments have been closed since this content was published more than 30 days ago, but if you'd like to continue the conversation, please create a new thread in our Forums, or Contact Us and let us know.