AndyC wrote:

At best, it'll be the same

Nope. Here's a program that works markedly better on a RAM disk:

FILE* fp = fopen("Z:\\foo.txt", "wbc"); // MSVC's "c" flag opens the stream in commit mode
BYTE buf[1024] = { 0 };

for(int i = 0; i < 100*1000; i++)
{
  fwrite(buf, 1, sizeof(buf), fp);
  fflush(fp); // in commit mode this forces the write through to the device - on a real disk that's physical IO the filesystem cache can't absorb, on a RAM disk it's just a memory copy
}

fclose(fp);
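
If you want to put numbers on that, here's a rough sketch of a timing harness - assuming MSVC, Z: as the RAM disk and C:\temp as a folder on a physical disk; adjust the paths for your own setup:

#include <stdio.h>
#include <windows.h>

/* Writes and commits `iterations` 1KB blocks to `path`, returns elapsed seconds.
   "c" is MSVC's commit-mode flag: fflush pushes the data all the way to the device. */
static double time_committed_writes(const char* path, int iterations)
{
  BYTE buf[1024] = { 0 };
  FILE* fp = fopen(path, "wbc");
  if (!fp) return -1.0;

  LARGE_INTEGER freq, start, end;
  QueryPerformanceFrequency(&freq);
  QueryPerformanceCounter(&start);

  for (int i = 0; i < iterations; i++)
  {
    fwrite(buf, 1, sizeof(buf), fp);
    fflush(fp);
  }

  QueryPerformanceCounter(&end);
  fclose(fp);
  return (double)(end.QuadPart - start.QuadPart) / (double)freq.QuadPart;
}

int main(void)
{
  /* Z: assumed to be the RAM disk, C:\temp assumed to live on a physical disk. */
  printf("RAM disk:      %.2f s\n", time_committed_writes("Z:\\foo.txt", 100 * 1000));
  printf("Physical disk: %.2f s\n", time_committed_writes("C:\\temp\\foo.txt", 100 * 1000));
  return 0;
}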

Another, perhaps more useful, example is:

robocopy Z:\files Q:\files will run much faster if Z: and Q: are RAM disks rather than real disks, because no disk IO is involved - only memory accesses.
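
If you want actual numbers for that one, robocopy already reports throughput in its job summary, so (assuming Z: and Q: really are RAM disks) compare the Speed line from

robocopy Z:\files Q:\files /E

with the same copy run between two physical drives.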

grep -r strcpy Z:\mysourcetree will also find vulnerable functions in a large source base much faster, although I'd question who would be brave enough to have their source tree on a RAM disk instead of something that will survive a power cycle.

Don't get me wrong - RAM disks were deprecated for a reason, but they were invented for a reason too. For all sorts of operations they will be faster than the file-system cache - it's just that, for an average user, using a RAM disk is probably worse than not using it.

The inexcusable myth, however, is that putting the pagefile on a RAM disk will always have a strictly negative performance impact on your system.