You've probably been looking at disassembler output. ildasm puts these labels in front of each instruction, even when they're not used. They're not a hint of any kind, just jump labels. You can also name them anything you like.
I've had similar ideas in mind for some time. My idea was to split system32 into two folders: a system-specific part and a user folder where all apps slap their DLLs into. The system-specific part would be under complete control of the OS; applications, including the administrator, would only have read-only access, while Microsoft-signed patchers and the like could write to it. Naturally there should be a special case for the admin, to allow him read-write access on request.
Preferably that scheme should apply everywhere, including the registry, services and whatnot. Besides promoting some more security, cleaning up an installation could be reduced to something like "rmdir \windows\user_* /s".
Too bad this will be next to impossible to implement (at least in a timely fashion and without breaking compatibility here and there).
A schema defines a data structure. IIRC each schema generates a table in the WinFS store where the data will be kept. If you create a Contact and store it, all info like forename, name, etc. will be stored in that table. Future Longhorn applications are supposed to make use of such schemas to store data in WinFS. Every piece of information will land inside the WinFS store, except for data that goes into fields with a specific datatype called varbinary(max).
The Contact schema, for instance, has no fields of that type, so it will reside completely in the store. I don't have the structure of the Image schema at hand, so let's create our own (simpler) one. Assuming we have an Image type with fields for Author, Date, Location, etc. that are all varchar(256), all the data filled into these fields will reside in WinFS. But an image also contains compressed image data, so you'd need a field called e.g. ImageData to host it. Now you have a choice: use the image datatype (it handles arbitrary binary data, it's not tailored to images), which causes the image data to be stored in WinFS as a BLOB, or use the varbinary(max) datatype, which acts similarly to the image datatype but causes the data to be stored in a filestream on disk. Both would work. However, if you have to consider performance, image data that changes a lot, and/or the need for random I/O, you'd want filestreams instead of BLOBs. The latter offer no real flexibility there.
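To make the example above concrete, here is a rough sketch of what the store table for such an Image type could look like. This is purely illustrative: WinFS schemas aren't authored as raw T-SQL, and the table and column names here are made up for the example.

```sql
-- Hypothetical sketch only; WinFS generates its own store tables.
CREATE TABLE ImageItem (
    ItemId    uniqueidentifier PRIMARY KEY,
    Author    varchar(256),   -- structured metadata: lives in the store
    [Date]    datetime,       -- structured metadata: lives in the store
    Location  varchar(256),   -- structured metadata: lives in the store

    -- Choice for the compressed image bits:
    -- ImageData image            -- BLOB, kept inside the database pages
    ImageData varbinary(max)      -- backed by a filestream on disk
);
```

With the `image` variant every byte lands in the database pages; with `varbinary(max)` plus filestream storage the bulk data lives in an NTFS file stream while the row only references it.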
In short, it means that data only gets slapped onto the NTFS side if fields in a schema make explicit use of filestreams. So it's not JUST a metadata store.
Naturally there's also a simple File type in WinFS, whose purpose is to manage files inside WinFS that the file promoter couldn't recognize (JPEGs and MP3s are standard formats, but what if you copy some file with an obscure format into WinFS? Hence you need a catch-all type).
One thing's for sure: at least during development, maintenance is much easier with files in the file system than on raw disk partitions. I can easily shut down the server, copy the files to a new place, restart the server and attach the copies as a different database in about two minutes.
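The shutdown-copy-attach routine described above maps to something like the following in T-SQL (database name and paths are placeholders):

```sql
-- Detach the database (or simply stop the server) so the files are closed.
EXEC sp_detach_db 'MyDb';

-- ...copy MyDb.mdf and MyDb_log.ldf to the new location...

-- Attach the copies under a new database name.
CREATE DATABASE MyDbCopy
ON (FILENAME = 'D:\Copies\MyDb.mdf'),
   (FILENAME = 'D:\Copies\MyDb_log.ldf')
FOR ATTACH;
```

With raw partitions there are no files to copy around, which is exactly why this kind of quick juggling wouldn't work there.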
Good point, though these kinds of scenarios aren't likely with multi-gigabyte databases.
Running out of disk space happens either way, on NTFS and on raw partitions. After looking into it some more last night: SQL Server is well aware of the free space on a raw partition one way or another, since it maintains a bitmap of free and allocated data pages. A raw partition is more or less equivalent to a data file created at maximum size on the disk.
The device abstractions happen below NTFS in the logical volume manager, which a "WinFS2" would run on top of as well. Error correction is already a feature of SQL Server, defrag could possibly be implemented too, basic backup is available in SQL Server, and software RAID is, as said above, a lower-layer thing.
One reason I'm probably nitpicking on this is that Microsoft made it look in the past like they were indeed dropping NTFS (maybe that was even intended), but then pulled out of it shortly before or during last year's PDC.
WinFS doesn't JUST store metadata. While that may apply to old file types that you merely promote into the store, it's not true for WinFS-native schemas and future 3rd-party ones. Unless you define varbinary(max) fields in your WinFS schemas, WinFS won't spawn an NTFS filestream for the respective item types, and the data in those fields goes into the table created by the schema.
About security: if you add filesystem semantics to WinFS, what speaks against adding security too? It's not an NTFS-only thing. Security also only works as long as the filesystem is hosted in its native environment; NTFSDOS, for instance, bypasses security, so your ACLs wouldn't mean anything in that case. For that matter, WinFS will get item-level security anyway; somehow you have to prevent other users from accessing your data within WinFS.
And BLOBs would only work for unstructured data whose size doesn't change, like e-mail bodies and the like.
I think I can tell you why no one uses that approach: a lot of people probably aren't aware of it. I never scrolled down to the examples in SQL Server Books Online (at least not for CREATE DATABASE), where it's actually mentioned, and I suspect a lot of other people skipped that too. Some DBA told me a while ago that it'd be impossible, so the case was closed for me at the time. The bunch of books I've read about SQL Server didn't mention it either. So I learned something new today. I'll try it as soon as I get my RMA'd disks back.
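From what I remember of the Books Online example, creating a database on raw partitions looks roughly like this: you give only the drive letter of an existing, unformatted partition as the file name (names here are placeholders, and details may differ by SQL Server version):

```sql
-- Sketch from memory of the Books Online example; the drive letters must
-- refer to existing raw (unformatted) partitions, which the data file and
-- log file then occupy entirely.
CREATE DATABASE RawDb
ON     (NAME = RawDb_data, FILENAME = 'G:')   -- raw partition for data
LOG ON (NAME = RawDb_log,  FILENAME = 'H:');  -- raw partition for the log
```

Since the file takes up the whole partition, there's no growth to configure, which fits the "data file created at maximum size" equivalence mentioned earlier.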
About WinFS, I really hope they add real filesystem semantics to WinFS after Longhorn and pull NTFS out of it. Somehow the current approach looks like a necessary evil to me.
That's something I've been wondering about for some time now; I haven't been able to find a short but clear explanation for it:
Why is it that SQL Server needs its data and log files residing on an NTFS partition? Wouldn't it be wise to allow, in addition to the traditional way, creating the data files straight onto the raw disk? For instance, the data file on a big RAID 5 array and the log file on another, smaller RAID 1. Removing NTFS would lower the overhead and maybe yield a bit more performance. At least that's what logic tells me.
This would also affect WinFS, since it's SQL Server based. Why not allocate e.g. 256 MB (or some percentage) of a raw partition for the log file and the rest for data? OK, there's the new thing with filestreams now (NTFS is needed at the moment), but that could be solved by merging filesystem semantics into WinFS/SQL Server, by means of an internal bitmap that tells the FS driver whether a given unit is a data page or a file system cluster.
Thanks for any answers.