
Animadei

Niner since 2006

  • Landy Wang - Windows Memory Manager

    androidi wrote:
    I haven't tested whether there's a real perf benefit to this, but I have always thought it's optimal to have one drive/array for the system and apps and a second for the pagefile and backups/infrequently accessed data. I'm talking about the optimal 2-HDD setup for both performance and some redundancy - a special arrangement like this, or some special RAID that gives both perf + redundancy on just 2 drives?


    Haha, you're funny! Just add more memory and avoid page files altogether. There are products like SuperCache that do disk-I/O-level caching, all at the expense of reserving more memory for those operations. Personally, I gave up on page files after disk thrashing really put a burden on performance, especially when we're talking about compiling and linking 800MB worth of source code every 15 minutes.

    The only reason you get "more" of a performance boost by putting the paging file on the second drive is that whenever you access your system, the first disk is busy while the second drive is free to work. If you really think about it, to get the full benefit, at least with IDE "technology", you needed to put each device on its own separate channel - basically one disk per ribbon. But then reality hits you again: we're talking about orders of magnitude of slowness, because the disk is just that slow. If you're feeling cheap, you should look into borrowing memory from other systems over the network, since accessing it is still much faster than your local disks.
  • Landy Wang - Windows Memory Manager

    Beer28 wrote:
    Does the memory manager work with the kernel scheduler?

    If a process releases its timeslice as soon as it's started because it's a service, or never accesses a good portion of its memory, or is in a sleep/wait state, is its heap more likely to be paged out?

    What about non-paged driver memory? How does the memory manager handle that? ISRs can't use paged memory, so are they still handled by the memory manager on Windows?


    The heap is always the most likely thing to be paged out. That's why we have working sets to keep processes happy (a working-set sketch follows at the end of this post).

    ISRs cannot use paged memory as a general rule, and it's a good one if you care about performance - no fun waiting on the file system with at least three orders of magnitude of delay. With that said, the rule for non-paged memory is simple: don't use it unless you absolutely need it. Those allocations are non-paged and thus subtracted from the system's free non-paged pool. When you unload your driver, all the memory it allocated goes back to the system (see the driver sketch at the end of this post).

    The memory manager only cares about allotting free pages and reclaiming pages that are committed, or pages from working sets. People here are confusing paging with swapping and vice versa. Swapping moves a large portion of a process's memory to disk to free memory; paging moves out a single "page" of memory, which in the i386 world is typically 4KB (a page-size check follows at the end of this post).

    By the kernel scheduler, do you mean the thread scheduler? These are two different concepts. The VMM probably implements LRU or something of that sort, or uses a clock algorithm - who knows? :) The VMM typically has nothing to do with kernel scheduling. If we were to entertain the idea, predicting when memory resources should be freed for other processes would probably be left to some kind of cache policy manager. That's a big research task! For now, the working set is our best bet for temporal and spatial locality prediction without requiring us to think too hard about when to actually free memory with a scheduler. We can easily reclaim the LRU working-set pages from any process, or pick pages randomly if we want pseudo-LRU characteristics (a toy clock-algorithm sketch follows below).
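
A minimal user-mode sketch of the working-set mechanism mentioned above, using the Win32 calls GetProcessWorkingSetSize and SetProcessWorkingSetSize; the 4MB/16MB bounds are made-up illustration values, and the memory manager remains free to trim below them under pressure:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SIZE_T minws = 0, maxws = 0;
        HANDLE self = GetCurrentProcess();

        /* Read the current working-set bounds for this process. */
        if (GetProcessWorkingSetSize(self, &minws, &maxws)) {
            printf("working set min: %lu KB, max: %lu KB\n",
                   (unsigned long)(minws / 1024),
                   (unsigned long)(maxws / 1024));
        }

        /* Hint that we'd like at least 4MB resident (illustrative numbers);
           the memory manager can still trim the set under memory pressure. */
        SetProcessWorkingSetSize(self, 4 * 1024 * 1024, 16 * 1024 * 1024);
        return 0;
    }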
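A kernel-mode sketch of the non-paged pool advice, assuming the WDK headers and the classic ExAllocatePoolWithTag/ExFreePoolWithTag calls; the MYDRV_TAG tag and 4KB size are hypothetical:

    #include <ntddk.h>

    #define MYDRV_TAG 'vdyM'   /* pool tag; shows up reversed in tools like poolmon */

    static PVOID g_buffer = NULL;

    VOID DriverUnload(PDRIVER_OBJECT DriverObject)
    {
        UNREFERENCED_PARAMETER(DriverObject);
        if (g_buffer != NULL) {
            /* The pages go back to the system's non-paged pool here. */
            ExFreePoolWithTag(g_buffer, MYDRV_TAG);
            g_buffer = NULL;
        }
    }

    NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
    {
        UNREFERENCED_PARAMETER(RegistryPath);
        DriverObject->DriverUnload = DriverUnload;

        /* Non-paged: safe to touch at DISPATCH_LEVEL or in an ISR,
           but subtracted from the limited non-paged pool until freed. */
        g_buffer = ExAllocatePoolWithTag(NonPagedPool, 4096, MYDRV_TAG);
        return (g_buffer != NULL) ? STATUS_SUCCESS : STATUS_INSUFFICIENT_RESOURCES;
    }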
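A quick way to check the "typically 4KB" claim from user mode, via the Win32 GetSystemInfo call:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SYSTEM_INFO si;
        GetSystemInfo(&si);
        /* Prints 4096 on x86/x64 Windows. */
        printf("page size: %lu bytes\n", si.dwPageSize);
        return 0;
    }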
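And a toy sketch of the clock (second-chance) replacement algorithm speculated about above: a circular hand sweeps the frame table, clears reference bits, and evicts the first frame it finds unreferenced. The frame table and NFRAMES are illustrative, not the real VMM's structures; in hardware the "referenced" bit would be set by the MMU on access.

    #include <stdbool.h>

    #define NFRAMES 8   /* illustrative frame count */

    struct frame {
        int  page;        /* which virtual page occupies this frame */
        bool referenced;  /* set on access, cleared as the hand sweeps past */
    };

    static struct frame frames[NFRAMES];
    static int hand;      /* the clock hand: next frame to examine */

    /* Pick a victim frame: every referenced frame gets a second chance. */
    int clock_evict(void)
    {
        for (;;) {
            if (!frames[hand].referenced) {
                int victim = hand;            /* unreferenced: evict it */
                hand = (hand + 1) % NFRAMES;
                return victim;
            }
            frames[hand].referenced = false;  /* second chance */
            hand = (hand + 1) % NFRAMES;
        }
    }

A frame that keeps getting touched keeps its reference bit set and survives each sweep, which is what gives the scheme its approximate-LRU behavior without tracking timestamps.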