evildictaitor wrote:

*snip*

Nope. Here's a program that works markedly better on a RAM disk:

FILE* fp = fopen("Z:\\foo.txt", "wb");
BYTE buf[1024] = { 0 }; 

for(int i = 0; i < 100*1000; i++)
{
  fwrite(buf, 1, sizeof(buf), fp); 
  fflush(fp); // this forces disk IO if you're on disk, and doesn't if you're on a RAM disk - it can't be buffered in the filesystem cache because fflush is an IO-commit event.
}

fclose(fp);

That would be one of those pathological, unrealistic cases that probably favours the RAM disk. I say probably because you could theoretically still hit a scenario in which memory is tight enough that writing to foo.txt causes page faults in the RAM-disk case (and thus reads and writes to disk) but not in the standard-disk case, which only ever needs to write updates to disk. You can find benchmark-type tests that prove pretty much anything; they're rarely worth considering.

Another, perhaps more useful, example:

robocopy Z:\files Q:\files

will run much faster if Z and Q are RAM disks than if they're real disks, because no IO is involved - only memory accesses.

If the best example possible is copying files, I believe my point has been proven: it's again a very edge-case example - you don't normally copy files for no purpose (backing up might constitute a purpose, if they weren't RAM drives).

I'll discount the grep example too because, as you say, it'd be borderline stupid to keep something like source code on a RAM disk. And before someone suggests it: if you've copied it there from a fixed disk first, you've already paid the price of getting it into the filesystem cache.