That's a pretty big "if" you have there. Files created with either the FILE_DELETE_ON_CLOSE flag or the FILE_ATTRIBUTE_TEMPORARY attribute (on NTFS) are kept in the paged pool rather than written to disk (although they appear to be on disk, because NTFS lies about it). This is effectively NTFS implementing a ramdisk for you, in exactly the case where a ramdisk used to mean a speedup. Putting files that are already marked temporary on a ramdisk just adds inefficiency on top of what NTFS is doing, but putting files which aren't marked temporary on a ramdisk, when you want them to behave as if they were temporary, does give you a speedup.
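As an illustration of the delete-on-close idea: on Windows, Python's `tempfile.TemporaryFile` asks for exactly these semantics (CPython opens with the CRT's `O_TEMPORARY` flag when it can, which I believe maps to delete-on-close underneath, so the data can stay in cache and never hit the platters), and on POSIX it approximates them by unlinking immediately. A minimal cross-platform sketch:

```python
import tempfile

# TemporaryFile requests delete-on-close semantics from the OS:
# on Windows via the CRT's O_TEMPORARY flag, on POSIX by unlinking
# the file right after creation. Either way, the filesystem knows
# nobody needs this file after the last handle closes, so it is
# free to keep the contents in cache rather than on disk.
with tempfile.TemporaryFile(dir=".") as scratch:
    scratch.write(b"intermediate results nobody needs after a reboot")
    scratch.seek(0)
    data = scratch.read()
# Here the file is already gone; only `data` survives in memory.
print(len(data))
```

The `dir="."` is just for illustration; in practice you would let it default to the system temp directory.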
And for something like a search indexer that doesn't want to preserve its index across a reboot (though I'm not sure why it wouldn't), it would make far more sense to mark its files as temporary. Likewise, forcing commits to disk when the on-disk consistency of the file doesn't matter in the first place seems rather more like an example contrived to work than a real-world scenario.
If this is a general-purpose database program (that you don't have source for), it will assume that on-disk consistency is important. If you happen to be using that general-purpose database program to do very large numbers of operations that don't need on-disk consistency, you could:
a) Build (or shim) your own database program that is identical, but uses temporary files instead of the normal database files
b) Use the general purpose database, but put its database on a ramdisk.
c) Suffer the large number of disk writes forced by each and every commit (which is what most people do).
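If the database in question happens to be one you can at least configure, such as SQLite, option (a) can shrink to a couple of pragmas that explicitly trade on-disk consistency for speed. A hedged sketch (the file name and workload are made up for illustration):

```python
import os
import sqlite3
import tempfile

# Hypothetical scratch database whose on-disk consistency we
# explicitly don't care about (the shim from option (a)).
db_path = os.path.join(tempfile.gettempdir(), "scratch.db")
con = sqlite3.connect(db_path)

# Trade durability for speed: skip the fsync on every commit and
# keep the rollback journal in memory instead of on disk.
con.execute("PRAGMA synchronous = OFF")
con.execute("PRAGMA journal_mode = MEMORY")

con.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
with con:  # one transaction instead of one commit per row
    con.executemany(
        "INSERT INTO kv VALUES (?, ?)",
        ((str(i), "x" * 16) for i in range(10_000)),
    )

count = con.execute("SELECT COUNT(*) FROM kv").fetchone()[0]
con.close()
os.remove(db_path)
print(count)
```

A crash mid-run can corrupt a database opened this way, which is the whole point: you only do this when the file is scratch data you'd happily rebuild, i.e. exactly the files that should have been marked temporary in the first place.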