I'm just wondering what's up with this.

I just set up a database in SQL Server 2008 CTP to store a massive number of images, using the FILESTREAM feature to keep the binary data in the file system. I was wondering why my migration process slowed to a crawl after about 900,000 images, and I was baffled because all the system and SQL Server metrics seemed fine. After doing a little research, I noticed people complaining that NTFS does not handle directories of that size well, and that seems to be what is slowing my inserts down.

I also see that you cannot spread the files across multiple folders, since SQL Server manages these files itself.

Why would SQL Server limit you in this way if NTFS cannot handle such huge folders well?

If I'm missing something, or there is a workaround for this somewhere, I'd love to know, because I'm shocked that this would be the case in software designed to support gigantic tables.
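For what it's worth, the one mitigation I keep seeing mentioned for huge NTFS folders (I haven't verified it helps FILESTREAM specifically, so treat this as a sketch of general NTFS tuning, not a confirmed fix) is to disable 8.3 short-name generation and last-access timestamp updates, both of which are known to add per-file overhead in directories with hundreds of thousands of entries:

```shell
:: Run from an elevated command prompt; affects the whole system.

:: Check the current 8.3 short-name setting.
fsutil behavior query disable8dot3

:: Disable 8.3 short-name generation; maintaining short names is a
:: known cause of slowdowns in folders with very large file counts.
fsutil behavior set disable8dot3 1

:: Optionally stop NTFS from updating last-access timestamps on reads.
fsutil behavior set disablelastaccess 1
```

Note that these take effect for newly created files, so they wouldn't fix a folder that already has 900,000+ entries with short names generated.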

Any thoughts or ideas?