Mike Dimmick


Niner since 2004

I work in a small ISV in the Thames Valley, specialising in mobile applications.


  • How To: Tell Vista's UAC What Privilege Level Your App Requires

    Note that you don't need to edit the project file to add a Win32 resource to a C# project. You can do this through the UI. Go to the Application tab in Project Properties, then under Resources, click the Resource File radio button. Enter the path to your .res file in the edit box or use the browse button to locate it.

    If you still want your application to have an icon, you need to include it in your .rc file. You'll want something like:

    1 ICON DISCARDABLE "myicon.ico"
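    For context, here's what a minimal .rc file for this scenario might look like - it embeds both the UAC manifest and the application icon (the manifest and icon filenames are placeholders you'd replace with your own):

```
// 24 is RT_MANIFEST; resource ID 1 is the application manifest.
1 24 "myapp.exe.manifest"

// Keep the application icon in the same .rc file.
1 ICON DISCARDABLE "myicon.ico"
```

    And the manifest itself declares the requested privilege level (asInvoker, highestAvailable, or requireAdministrator):

```
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
    <security>
      <requestedPrivileges>
        <requestedExecutionLevel level="asInvoker" uiAccess="false"/>
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>
```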
  • Windows, NT Cache Manager - Molly Brown

    You can see what's going on in the cache currently using the kernel debugger's !filecache command. This seems to work OK with local kernel debugging on XP.

    You will always see that 'current size' reported by !filecache is smaller than Task Manager's "System Cache" counter. That's because the System Cache counter includes, IIRC, the complete size of all standby and modified lists in the system, regardless of which working set they belonged to. It is true that, in effect, this is cached data - it takes only a single page fault and setting the PTE in the page fault handler to add it back to the working set, rather than having to go out to the disk (a soft fault rather than a hard fault). Again, IIRC, System Cache is actually the system working set size and also includes the current physical size of pageable system and driver code, and paged pool (system- and driver-allocated data that is not needed at high IRQLs).

    I believe the standby and modified lists are actually double-counted, being also counted in the Available counter in Task Manager. That might explain why, on this 1GB machine, I apparently have 527,600KB Available and 584,640KB System Cache. !filecache reports a current size of 116,228KB.
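    A toy calculation of how that double-counting would work, using made-up list sizes (the real numbers come from the memory manager's page lists; only the counting rules below are taken from the description above):

```python
# Hypothetical page-list sizes in KB - invented for illustration.
free_and_zeroed = 150_000
standby = 300_000
modified = 77_600
system_ws = 120_000  # system working set: pageable system/driver code, paged pool

# Task Manager's Available = free + zeroed + standby pages.
available = free_and_zeroed + standby

# Per the description above, "System Cache" = system working set
# + the standby and modified lists, regardless of origin.
system_cache = system_ws + standby + modified

# The standby list appears in both counters, so the two figures
# together can exceed what you'd expect from physical RAM.
overlap = standby
print(available, system_cache, overlap)
```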

    What's puzzling me is why NTFS's transaction log ($LogFile) is being reported as having 28,064KB on Standby or Modified lists, making it the third-biggest consumer of file cache (after the master file table, $Mft, and the Software registry hive). Surely NTFS isn't going to need to re-read log data once it's committed to disk?

    (It's understandable that $Mft is the largest user of cache once you realise that small files and directories are kept as resident attributes in their Master File Table record. This effectively hides those small files from the cache manager, which is a good thing, as the cache manager can only map multiples of 256KB blocks.)

    (It's also understandable that the Software hive is the second largest user, as it's being accessed all the time by Explorer and by applications. XP and 2003, IIRC, use the cache directly - there is no other copy of the hive data in memory.)

    Not a system-level developer, just interested! This information is from my recall of Windows Internals, 4th Edition (the updated version of "Inside Windows 2000").
  • Windows, NT Cache Manager - Molly Brown - Part II

    codan wrote:
    It was said that the cache manager's lazy writers may take ~8 seconds to flush dirty pages to disk. What measures, if any, are in place to protect unflushed cached data during power outages or hardware failures?

    None, for user files. If you want to force a write directly to disk, use FILE_FLAG_WRITE_THROUGH when opening the file, or call FlushFileBuffers. You need to specify FILE_FLAG_NO_BUFFERING as well if you want to be sure that the drive itself does not buffer the operation.
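    The Win32 calls are CreateFile with FILE_FLAG_WRITE_THROUGH and FlushFileBuffers; as a portable sketch of the same idea (drain the user-space buffer, then ask the OS to push the data to the device), here's the rough POSIX/Python analogue, where os.fsync plays the role of FlushFileBuffers:

```python
import os
import tempfile

# Write a record and force it to stable storage before continuing.
path = os.path.join(tempfile.mkdtemp(), "journal.dat")
with open(path, "wb") as f:
    f.write(b"critical record\n")
    f.flush()              # drain the user-space (stdio) buffer
    os.fsync(f.fileno())   # ask the OS to write the data to the medium

# The record is now durable against a crash of the application.
with open(path, "rb") as f:
    data = f.read()
print(data)
```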

    NTFS logs all operations it performs to filesystem metadata. It keeps track of the latest operation to affect each page of the cached metadata. The memory manager is instructed not to write modified pages (by placing them on a ModifiedNoWrite list) if the log entries for that page have not yet been written. Once the log entries are written (log entries are batched up), the pages affected by those log entries are moved onto the Modified list and can now be written by the lazy writer. If the changes don't make it to disk before the power fails, NTFS can reapply the changes at boot time by redoing the operations recorded in the log.
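    That ordering rule is classic write-ahead logging. A toy model of it (all names invented - this is a sketch of the idea, not NTFS's actual structures): pages sit on a "modified-no-write" list until the log records covering them have been flushed, and only then become eligible for the lazy writer:

```python
class ToyLog:
    """Write-ahead logging: a metadata page may not reach disk
    before the log records describing its changes do."""
    def __init__(self):
        self.log = []                # unflushed log records
        self.flushed_lsn = 0         # highest log sequence number on disk
        self.modified_no_write = {}  # page -> LSN of last record touching it
        self.modified = set()        # pages the lazy writer may now write

    def record(self, page, op):
        # Append a log record and pin the page until the log is flushed.
        self.log.append((len(self.log) + 1, page, op))
        self.modified_no_write[page] = len(self.log)

    def flush_log(self):
        # Log records are batched: write them all, then release
        # every page whose records are now safely on disk.
        self.flushed_lsn = len(self.log)
        for page, lsn in list(self.modified_no_write.items()):
            if lsn <= self.flushed_lsn:
                del self.modified_no_write[page]
                self.modified.add(page)

wal = ToyLog()
wal.record("mft:5", "set-size")
wal.record("bitmap:0", "alloc-cluster")
print(sorted(wal.modified_no_write))  # both pages pinned, lazy writer must wait
wal.flush_log()
print(sorted(wal.modified))           # now eligible to be written
```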

    NTFS only guarantees that the file system is in a consistent state. It does not guarantee that the file system is in the latest recorded consistent state. If you need a stronger guarantee you need to battery-back your system or otherwise provide a backup power supply.

    Longhorn is apparently going to add transaction support to NTFS, partly so that user data can also be protected by logging (and so that full transactional commit/rollback semantics can be applied). This is, however, an opt-in feature, and is likely to be a bit slower than regular disk access. You have to trade off speed against the slight risk of power failure.

    It's difficult to go into any greater detail here. To truly understand the cache manager, you really have to understand the memory manager as well. Windows Internals 4th Edition spends 110 pages on the memory manager and 45 pages on the cache manager. It then has 58 pages on NTFS.
  • Windows, Part I - Dave Probert

    Beer28 wrote:
    On linux on x86 user2kernel calls for IO to devices through the kernel are done by calling a CPU interrupt instruction 0x80 with type of kernel sys function in eax, then the params for the kernel function in ebx-edx like you would do a fastcall from VC++, except with an INT instruction, not a call/ret to the start addy of the function,

    Does it work the same on NT?

    XP and 2003 use the SYSENTER/SYSEXIT instructions. IIRC earlier versions of NT used interrupt 0x2e. The user->kernel transitions are isolated in NTDLL.DLL apart from some places where gdi32.dll and user32.dll call into the win32k.sys driver directly.

    In fact it appears that the system call instruction might be dynamically generated! NtWriteFile, for example, loads edx with the contents of SharedUserData!SystemCallStub then performs an indirect call to that address. Since this is an Intel P4 system it uses SYSENTER.

    The arguments appear to be retrieved from the stack directly, the only arguments passed in registers are the user stack pointer (passed in edx) and the system call to execute (passed in eax).

    I would expect x64 and Itanium to pass parameters in registers rather than on the stack, since their calling conventions are register-based.

    Beer28 wrote:
    Is it always passed through the registers or can you pass stuff with pointers to memory from user2kernel and back without drawing an access violation(like maybe stack address space)?
    Once it passes the context switch to kernel mode the page protection is gone at ring 0 right? It's up to the kernel exported function to make sure you're not giving it a bad user space address(passed in the ebx-edx register) in NT also?

    Page protections still apply in ring 0. One of the bits in the Page Table Entry is the User/Supervisor bit, which governs whether a page is writable from user mode or from supervisor/kernel mode. On the x86, code running in rings 0, 1, or 2 can access supervisor and user pages; ring 3 can only access user pages (a processor will raise an access fault if it tries to access supervisor pages).
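    That rule can be written as a tiny predicate. This is a deliberate simplification - it ignores the read/write bit and the other PTE flags - but it captures the User/Supervisor check described above:

```python
def can_access(ring: int, user_page: bool) -> bool:
    """x86 U/S bit check: rings 0-2 can reach supervisor and user
    pages; ring 3 can only reach user pages."""
    if ring < 3:
        return True
    return user_page

# Ring 3 touching a supervisor page raises an access fault;
# ring 0 may touch either kind of page.
print(can_access(3, user_page=False))  # False -> access fault
print(can_access(0, user_page=False))  # True
```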

    NT breaks each process's virtual address space into a user region and a system region. The split point is normally at 2GB (the first system address is 0x80000000); if the system is booted with /3GB, that changes to 3GB user, 1GB kernel (first system address 0xC0000000). Finally, XP and 2003 also offer the /USERVA switch which, when combined with /3GB, allows the system address start point to be tuned further.
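    The boot-option arithmetic can be sketched like this (the split addresses are from the description above; the /USERVA value in the example is just an illustration):

```python
from typing import Optional

def first_system_address(three_gb: bool = False,
                         userva_mb: Optional[int] = None) -> int:
    """Return the lowest system-space address for a 32-bit NT boot config."""
    if three_gb:
        # /USERVA applies only together with /3GB; it shrinks the user
        # region from the 3GB maximum back toward 2GB.
        user_mb = userva_mb if userva_mb is not None else 3072
        return user_mb * 1024 * 1024
    return 0x80000000  # default 2GB user / 2GB system split

print(hex(first_system_address()))                               # 0x80000000
print(hex(first_system_address(three_gb=True)))                  # 0xc0000000
print(hex(first_system_address(three_gb=True, userva_mb=2900)))  # 0xb5400000
```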

    The system address space is identical across all processes. Because the page tables are the same after the user/kernel transition (a user/kernel transition is not generally termed a context switch - the same thread is running, only now it's using its kernel stack, and it's running at a higher privilege level), the system code can access anything in the user-mode part of the address space that the thread's process can.

    Interrupt-handling code can, and will, be called with arbitrary process context - the process of whichever thread was last executing. It can't therefore write directly into a user-mode buffer. Instead it must queue an Asynchronous Procedure Call (APC) to the thread that initiated the I/O. When the APC is dispatched Windows performs a context switch to that thread, so now the correct process page tables are referenced and the operation can go ahead. (I've left out Deferred Procedure Calls [DPCs], which also occur in arbitrary process context).
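    A toy, single-process model of that idea - every name here is invented. The "interrupt handler" runs in arbitrary context, so it can't touch the initiating thread's buffer directly; instead it queues a completion routine, which runs later when that thread (and therefore the right address space) is current:

```python
from collections import deque

class ToyThread:
    """Each thread owns an APC queue, drained only in that thread's context."""
    def __init__(self, name):
        self.name = name
        self.apc_queue = deque()
        self.buffer = None  # stands in for a user-mode I/O buffer

    def run_pending_apcs(self):
        # Delivered in the thread's own context, so its "address space"
        # (here, its attributes) is safe to touch.
        while self.apc_queue:
            self.apc_queue.popleft()(self)

def interrupt_handler(initiator, data):
    # Arbitrary process context: don't write initiator.buffer here.
    # Queue an APC to complete the I/O in the initiator's context.
    initiator.apc_queue.append(lambda t: setattr(t, "buffer", data))

t = ToyThread("reader")
interrupt_handler(t, b"disk block 42")
print(t.buffer)        # still None: the APC hasn't been delivered yet
t.run_pending_apcs()   # "context switch" back to the initiating thread
print(t.buffer)
```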

    There are some threads in the system which don't run in a particular process's context - they're worker threads. Instead they run in pseudo-processes, which in Task Manager (and Process Explorer) are shown as "System Idle Process" and "System". The "System Idle Process" contains only the idle threads (one per processor), which run when nothing else is runnable and halt the processor when there's no work to do. The zero-page thread - which has the lowest priority in the system, does not get dynamic boosts, will never pre-empt any other thread, and is responsible only for zeroing out free pages - runs, like all the other worker threads, in "System".

    The Structured Exception Handling mechanism is also supported in kernel mode; drivers should always wrap accesses to user-mode buffers in __try/__except blocks.

    At this point I have to confess I've done no kernel-mode programming. I've found out all I have from "Windows Internals, 4th Edition" (and its predecessor "Inside Windows 2000"), and from OSR's NT Insider.
  • Neal Christiansen - Inside File System Filter, part II

    rhm wrote:
    I'd like to see a video of kernel debugging in action using that windbg that was mentioned. You know, just to see how hairy it really is in there.

    You want to debug your kernel?

    Download WinDBG from https://www.microsoft.com/whdc/devtools/debugging/default.mspx. Recent versions running on XP or 2003 (I don't think this was supported on 2000, but could be wrong) offer a 'Kernel Debug' option on the File menu. Select the Local tab to debug your local machine. You have to run WinDBG using an administrative account - after all, it wouldn't be good for security if ordinary users could debug the kernel!

    Local live debugging is a little limited. For full control you need to run the debugger on one machine and have a separate machine to debug. Currently you can use a serial connection (pretty slow) or an IEEE1394 connection (fast). There's also kernel-mode remote debugging but you can't debug boot-time with this option.

    (Bad UI, guys! I don't expect a tab to control which option I'm using to connect to my kernel - I expect a set of option buttons).

    I'm only a user-mode developer but I keep WinDBG around because it is more powerful than Visual Studio (although newer versions of VS are getting closer) and it's more lightweight. I recently solved a problem in a VB6 app by compiling the app with debugging symbols on, calling DebugBreak explicitly in a test version at the point where I knew the error occurred, and running the app under WinDBG on all machines in a load-balanced cluster. When the error occurred the app broke into the debugger, I dumped the stack, and worked out what had gone wrong.

    As always you need a guide to kernel mode. Windows Internals 4th Edition is probably good. I don't have this yet, but I do have the previous edition, titled 'Inside Windows 2000' by the same authors.
  • Tony Goodhew - Planning the "Orcas" version of Visual Studio

    About the Chris Sells video: I got a link to it too, from the feed. It was an old one from way back in April. Sometimes the feed gets a whole bunch of old videos in it.
  • Stephen Toulouse - What does "responsible disclosure" mean to you?

    Maurits wrote:
    That's a good - well, interesting - argument for not fixing a standards-incompliant piece of software.  It leaves open the question "why didn't they make it standards-compliant in the first place?"

    Usually because the 'standard' was written after the design was frozen.
  • Scott Currie - Multiple language programming demo

    If you want to do this in 1.x, try ILMerge. Much easier than the alternative, decompiling all the modules with ildasm then building a single binary with ilasm.

    I was very disappointed when I found this tool - I'd had that idea too but failed on the actual execution ;)

    I assume that partial classes must be written all in the same language - you can't have one part of a class written in VB and another in C++?
  • Stewart Tansley - Take a tour around Microsoft Research faculty summit

    GaryBushey wrote:
    Very hard to watch this one due to the choppy audio.  Even the downloaded version has issues.

    I think it's been run through a blank-cancelling system - frames where there's no audio, or only low-level audio, have been cut out. You can see that the movement of the 'toddler' robot towards the end, as Stewart picks it up and puts it back down, is jerky.

    Don't do this again, guys, an extra 30 seconds isn't going to kill us!
  • Zoe Goldring and Gretchen Ledgard - What is it like to interview at Microsoft?

    phunky_avocado wrote:
    We recently had someone apply for a senior development position.  She asked for a minimum salary of $150K!

    An experience I will not forget is having a Microsoft recruiter essentially laugh at me during a phone interview when I answered the 'salary expectation' question for an SDE position. I believe her words were along the lines of:

    "We pay our administrators more than that."

    I'd asked for about 30% more than I'm earning now... I suppose I could have made a horrible miscalculation in the currency conversions, but I don't think so.
  • Mike Hall - Why are there so many operating systems?

    raptor3676 wrote:
    Well, I see a number of features that I'd like to see in the Desktop version of Windows...

    Mike said that Windows CE is real-time and that its kernel footprint is about 200Kb and componentised. Can you imagine if Windows XP were that way? We could have our PCs turned into speed demons.

    CE makes a number of size/speed tradeoffs to get that small. Read up on CE's memory model at https://msdn.microsoft.com/library/default.asp?url=/library/en-us/dncenet/html/advmemmgmt.asp. I'm a Pocket PC developer, mainly, and we keep hitting that DLL load problem.

    CE 3.0 and earlier don't build a complete set of page tables that the processor can walk automatically when it encounters a virtual address that isn't in its Translation Lookaside Buffer (TLB). The TLB is a special hardware cache which maps a virtual address to a physical address in constant time. The x86 and ARM processors' memory management units can fill the TLB in hardware without raising a software interrupt, but this feature wasn't used, partly because MIPS and SHx processors don't offer it. The result is that the processor raises an exception (page fault) every time it accesses a page which hasn't been accessed recently. CE 4.1 (IIRC) and later support hardware TLB fill and get something like a 20% speed improvement over previous versions.
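    The cost of refilling the TLB can be illustrated with a toy direct-mapped TLB; every miss below stands in for what, on pre-4.1 CE, was a full software exception rather than a cheap hardware walk (the class and sizes are invented for illustration):

```python
class ToyTLB:
    """Direct-mapped TLB: each slot caches one virtual page number."""
    def __init__(self, entries=16):
        self.entries = entries
        self.slots = [None] * entries  # each slot holds a VPN or None
        self.misses = 0

    def translate(self, vpn):
        slot = vpn % self.entries
        if self.slots[slot] != vpn:
            # Miss: on MIPS/SHx-era CE this raised a software exception;
            # with hardware fill, the MMU walks the page tables itself.
            self.misses += 1
            self.slots[slot] = vpn
        return slot

tlb = ToyTLB()
# Touch 64 pages sequentially, twice. 64 pages don't fit in 16 slots,
# so every access conflicts with a later one and both passes miss fully.
for _ in range(2):
    for vpn in range(64):
        tlb.translate(vpn)
print(tlb.misses)
```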

    The other impact is of course that CE doesn't offer anything like the feature set of XP. XP's GDI offers world transforms and multiple mapping modes, allowing you to draw using different co-ordinate systems and the OS to perform (most) graphical scaling. CE only offers 1:1 mapping between GDI co-ordinates and screen pixels. CE drops a large number of drawing APIs: where XP has MoveTo, LineTo, etc, CE only offers Polyline.

    CE doesn't have much of a graphics acceleration API, essentially only allowing hardware-accelerated blits (bitmap copies to the screen). XP's graphics stack allows the hardware to claim accelerated support for a complicated operation, then call back into GDI to perform some portions of the operation in software and divide it into simpler operations that can be accelerated. This back-and-forth nature allows virtually the same output to be produced on very different hardware, but it can be slow, which is why DirectX exists (which is all-or-nothing - either all accelerated, or all emulated).

    I actually wonder if CE's days as the basis for Pocket PC and Smartphone are numbered - the requirements of those devices are drifting away from the requirements of a hard real-time embedded system.
  • Chris Anderson - Compares XAML to HTML and CSS

    XAML isn't like HTML. It only describes the markup syntax - it does not describe the object model. Instead, you can supply whatever object model you like. The examples Chris gives are obviously from the Avalon object model.

    So adding a third-party control is pretty simple - you add a reference to the DLL that control is supplied in, then bind to those objects in your XAML markup.