Windows NT Cache Manager - Molly Brown - Part II

Here's the second part of Molly's interview, where she dives down deeper into Windows' cache manager.

Follow the Discussion

  • tsilb (Hardware Geek, Multimon, Carputer)
    Interesting video, but it raises some questions: How does the cache manager (Cacheman?) interact with the multiple memory controllers commensurate with multi procs or multi cores?  Seems the video was about to address that but kinda skipped over it like a sensitive topic or something. How does the cache manager keep track of which memory controller owns what addresses?  Does it multiply the amount of buffer space or divide the existing buffer space by the number of processors?
  • Minh (WOOH!  WOOH!)
    How does DMA fit in all this? Or is that something outside of this lib?
  • Minh wrote:
    How does DMA fit in all this? Or is that something outside of this lib?

    It's outside of CC.  The I/Os that read data into the cache are all effectively synchronous (they aren't truly synchronous, but paging I/O has to be on a non-blocking, non-paged path).  DMA is just how the data gets from the disk into RAM.
  • tsilb wrote:
    Interesting video, but it raises some questions: How does the cache manager (Cacheman?) interact with the multiple memory controllers commensurate with multi procs or multi cores?  Seems the video was about to address that but kinda skipped over it like a sensitive topic or something. How does the cache manager keep track of which memory controller owns what addresses?  Does it multiply the amount of buffer space or divide the existing buffer space by the number of processors?

    Unless CC has been rewritten since NT4 (which IS possible; other major kernel-mode components, like RDR, have been rewritten), it's irrelevant.

    CC doesn't use buffers per se; instead it uses MM to map a section of the file into memory and then reads the data from the shared memory section.

    Since there's only one physical address space on a multi-proc or multi-core machine, it doesn't matter.  NUMA may change this behavior, though; I'm not sure how CC plays with NUMA.
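    A rough user-mode analogue of that mapping idea, sketched in Python: instead of copying data into a private buffer, map a view of the file and read through the mapping, letting the OS page the data in on demand. The file name here is invented for illustration; this is not how CC itself is implemented, just the same pattern one layer up.

```python
import mmap
import os
import tempfile

# Create a small file to map (the name is arbitrary, for illustration).
path = os.path.join(tempfile.mkdtemp(), "sample.dat")
with open(path, "wb") as f:
    f.write(b"hello cache manager")

# Map a read-only view of the file into our address space, loosely
# analogous to CC asking MM for a mapped section of the file.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as view:
        # Reads go through the mapping; this code makes no separate
        # buffer copy -- the OS faults the pages in as they're touched.
        data = view[:5]

print(data)  # -> b'hello'
```

    The user-mode counterparts on Windows are CreateFileMapping and MapViewOfFile; the cache manager uses the equivalent internal memory-manager interfaces.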
  • codan (I didn't do it.)

    It was said that the cache manager's lazy writers may take ~8 seconds to flush dirty pages to disk. What measures, if any, are in place to protect unflushed cached data during power outages or hardware failures?

  • Excellent videos.  Nice to see a low-level technical one once in a while.  I happen to be optimizing some code that handles multi-hundred-megabyte and sometimes gigabyte files, so seeing how things work under the hood has been very interesting and helpful.

  • codan wrote:
    It was said that the cache manager's lazy writers may take ~8 seconds to flush dirty pages to disk. What measures, if any, are in place to protect unflushed cached data during power outages or hardware failures?


    None, for user files. If you want to force a write directly to disk, use FILE_FLAG_WRITE_THROUGH when opening the file, or call FlushFileBuffers. You need to specify FILE_FLAG_NO_BUFFERING as well if you want to be sure that the drive itself does not buffer the operation.
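    For cross-platform readers, a minimal sketch of the "flush it yourself" option in Python, where os.fsync plays the role FlushFileBuffers plays on Windows (the file name is invented):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "durable.txt")

with open(path, "w") as f:
    f.write("important record\n")
    f.flush()             # drain the user-space buffer into the OS cache
    os.fsync(f.fileno())  # ask the OS to push the dirty pages to the
                          # device, like FlushFileBuffers on Windows

with open(path) as f:
    contents = f.read()
```

    As the comment above notes, the drive's own write cache can still hold the data after this returns; bypassing that is what the no-buffering/write-through combination is for.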

    NTFS logs all operations it performs to filesystem metadata. It keeps track of the latest operation to affect each page of the cached metadata. The memory manager is instructed not to write modified pages (by placing them on a ModifiedNoWrite list) if the log entries for that page have not yet been written. Once the log entries are written (log entries are batched up), the pages affected by those log entries are moved onto the Modified list and can now be written by the lazy writer. If the changes don't make it to disk before the power fails, NTFS can reapply the changes at boot time by redoing the operations recorded in the log.
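    The scheme described above is write-ahead logging. A toy redo-log sketch in Python (file and function names invented; the real NTFS log format is nothing like this) shows why forcing the log out before the data write makes crash recovery possible:

```python
import os
import tempfile

d = tempfile.mkdtemp()
log_path = os.path.join(d, "journal.log")   # invented names
data_path = os.path.join(d, "data.txt")

def read_lines(path):
    if not os.path.exists(path):
        return []
    with open(path) as f:
        return f.read().splitlines()

def logged_write(record):
    # 1. Append the intended change to the log and force it to disk FIRST.
    with open(log_path, "a") as log:
        log.write(record + "\n")
        log.flush()
        os.fsync(log.fileno())
    # 2. Only then apply the change to the data file; this write may be
    #    lazy, because the log already records how to redo it.
    with open(data_path, "a") as data:
        data.write(record + "\n")

def recover():
    # Redo pass at "boot": reapply any logged records the data file lacks.
    logged, applied = read_lines(log_path), read_lines(data_path)
    with open(data_path, "a") as data:
        for record in logged[len(applied):]:
            data.write(record + "\n")

logged_write("set attribute X on file 12")
os.remove(data_path)   # simulate losing the lazy data write in a crash
recover()              # the log lets us redo the lost change
```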

    NTFS only guarantees that the file system is in a consistent state. It does not guarantee that the file system is in the latest recorded consistent state. If you need a stronger guarantee you need to battery-back your system or otherwise provide a backup power supply.

    Longhorn is apparently going to add transaction support to NTFS, partly so that user data can also be protected by logging (and so that full transactional commit/rollback semantics can be applied). This is, however, an opt-in feature and is likely to be a bit slower than regular disk access. You have to trade off speed against the slight risk of power failure.
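    In the meantime, applications wanting all-or-nothing updates to a user file typically fake a small transaction themselves. One common portable pattern, sketched in Python (names invented): write the complete new version to a scratch file, flush it, then rename it over the original.

```python
import os
import tempfile

directory = tempfile.mkdtemp()
target = os.path.join(directory, "settings.txt")   # invented name

def transactional_replace(path, new_contents):
    # Write the full new version to a scratch file first...
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        f.write(new_contents)
        f.flush()
        os.fsync(f.fileno())   # make sure the new bytes are on disk
    # ...then swap it into place.  os.replace is effectively atomic, so
    # readers see either the old contents or the new ones, never a torn
    # mixture; a crash leaves at worst a stale .tmp file behind.
    os.replace(tmp, path)

transactional_replace(target, "version 1\n")
transactional_replace(target, "version 2\n")
```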

    Any greater detail is difficult to go into here. To truly understand the cache manager, you really have to understand the memory manager as well. Windows Internals 4th Edition spends 110 pages on the memory manager, and 45 pages on the cache manager. It then has 58 pages on NTFS.
  • Charles (Welcome Change)

    Memory Manager will be covered in the not-too-distant future in this series. Keep in mind that these videos serve as introductions to the technology and the people behind it. Books are generally really good for highly specific and detailed analysis of complex topics; video, on the other hand, really isn't. So Going Deep will only be able to go so far and still keep you wanting to watch!

    C

  • I don't know if anyone else is interested, but I ripped the audio of this video (with Robert's kind permission) and posted it on OurMedia.  You can find it at http://www.ourmedia.org/node/10031
  • Excellent, Molly.  I like this stuff, I just wish I was better at it. Indeed, some of the tough problems are tough because of the time it takes to reproduce them.  Just some thoughts: how about trying to do kernel code coverage with the following:
    (1) Sometimes it is the permutation of acquiring synchronization objects. How about varying the kernel with a virtualization technique that allows a restart in the kernel, where we need to see the permutations of code paths taken with shared synchronization objects. Get the code, or threads, all waiting to acquire the shared synchronization objects, then take a snapshot of the kernel and allow one permutation to continue, etc. If OK, go back to the kernel snapshot, get it running or looping for release, and allow a different permutation to continue, etc. Just some thoughts on this...
    (2) How do we analyze code coverage? Well, use static analysis to replace the acquiring of shared synchronization objects with a loop on the acquire, such that all the permutations can be seen to run. Then we also have data as input; maybe the boundary conditions on the data can be fed into this simulator as well. Just some thoughts.
    (3) Also, you mentioned the intrusion principle, where changing the timing makes the bug go away by affecting the permutations... Well, how about some hardware that can be put into the memory accesses between the CPUs and physical memory, like a PCI bridge chip, but that adds debugging info on the flows of memory accesses and data without changing the timing. I still believe hardware can help here a lot.
    Just one more comment: MORE of the neat free stuff, please. Also, why are the e-classes on development so expensive? I like them all. Why not include them with an MSDN subscription? The more people use Microsoft languages, the better for Microsoft.

Comments Closed

Comments have been closed since this content was published more than 30 days ago, but if you'd like to continue the conversation, please create a new thread in our Forums, or Contact Us and let us know.