I need to send all the 2GB tables, because the list stored in them would be incomplete if I didn't send them.
It's not possible to change individual fields/records, for many reasons:
* the tables don't always exist on the receiving end
* in many tables, every record is changed significantly
* some recipients of the tables don't know how to merge records into an existing table
* others don't want to go through the hassle of merging records into their databases - they just want the changed files
SQL Server can't do this, because importing DBF files into SQL is a huge (and SLOW) hassle.
But I don't use FoxPro to write applications.
A typical FoxPro session (for me) looks like this; the commands I use aren't predictable, and they change as often as the data does:
COUN FOR FRSTNAME <> ' '        && count the records with a non-blank first name
REPL ALL LASTNAME WITH NLNAME   && overwrite every LASTNAME with the NLNAME value
Frequently, I have to send files to other people. Many of these files are very large, which forces me to keep multiple tables lying around:
On the first three tables, I can't use MODIFY STRU to add a field, because they're too large. I have to waste time splitting each table up, giving me:
So now I have to take the small number of commands I would have typed just once and run them seven times! And I have to waste even more time compressing the files with a ZIP utility, and so on.
Since I only need to manipulate the files for a day or two (after that, they're not on my computer anymore), it makes no sense to try importing them into SQL.
This is a horrible thing, and would be much easier to deal with if FoxPro were fixed!
I don't use indexes, transactions, or concurrent processes on the same table.
I do use FoxPro to interactively manipulate large tables filled with text, though.
Sometimes I still use FoxPlus for DOS! The main thing that sucks is the 2GB limit.
Yes, but there are obvious approaches to getting around that.
See http://www.clicketyclick.dk/databases/xbase/format/dbf.html#DBF_STRUCT for information about DBF format.
One way is simply to use 8 bytes of data in the header to describe the number of records, stretching the header by four bytes. This would be OK, because the version number would be changed for the new format, and old programs (the properly written ones) would not try to read it.
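To make the proposal concrete, here's a rough Python sketch of reading the fields in question from the current format, following the xBase reference linked above. The function name and the returned keys are mine, not part of any DBF tooling; the point is that the record count today is a 4-byte unsigned integer at offset 4, which is exactly the field an extended format would widen to 8 bytes.

```python
import struct

def read_dbf_header(path):
    """Read the fixed 32-byte prefix of a standard DBF header.

    Per the xBase docs: byte 0 is the version number, bytes 1-3 the
    last-update date (YY MM DD), bytes 4-7 the record count (unsigned
    32-bit little-endian -- the field the proposal would widen),
    bytes 8-9 the header length, and bytes 10-11 the record length.
    """
    with open(path, "rb") as f:
        buf = f.read(32)
    version, yy, mm, dd, nrec, hdr_len, rec_len = struct.unpack(
        "<4BIHH", buf[:12])
    return {"version": version, "last_update": (yy, mm, dd),
            "records": nrec, "header_len": hdr_len,
            "record_len": rec_len}
```

A new-format reader would check the version byte first and only then decide whether the count is 4 or 8 bytes wide; old programs that check the version (as they should) would simply refuse the new files.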
If this approach is taken, it would be very wise to also lift the 10-character limit on field names. Padding the new format with extra reserved space would be a good idea.
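For reference, the 10-character limit comes from the field descriptor layout: each field gets a 32-byte descriptor starting at offset 32, with the name in bytes 0-10, null-padded. A hedged sketch of listing field names from a standard DBF (the helper name is mine; see the linked reference for the full descriptor layout):

```python
def read_field_names(path):
    """List the field names from a standard DBF file.

    Field descriptors are 32-byte records starting at offset 32,
    terminated by a single 0x0D byte.  The name occupies bytes 0-10,
    null-padded -- the 10-character limit discussed above, since the
    11th byte is effectively a terminator.
    """
    names = []
    with open(path, "rb") as f:
        f.seek(32)  # skip the fixed header
        while True:
            desc = f.read(32)
            if not desc or desc[0] == 0x0D:
                break
            names.append(desc[:11].split(b"\x00", 1)[0].decode("ascii"))
    return names
```

Since a new format already needs its own version number, widening the name slot (and adding reserved padding) in the same revision costs nothing extra in compatibility.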
I imagine that so many users want large file support that they're willing to buy a new copy of FoxPro, and new copies of any programs they use that read/write DBF format. The alternative is the giant, never-ending hassle of chopped-up data files that we have now.
Other ideas aren't so good:
A 'clever' approach would be to hide the extra four needed bytes somewhere else in the header. Compatibility is hard to ensure here, so it really doesn't help all that much.
Another idea is to keep the standard header, but pack extended 64-bit info into a special field definition. Compatibility still breaks, and this one is just as ugly as the last idea.
FoxPro does not have support for files larger than 2GB.
I would gladly trade all of the new features from version 6 on for large file support.
Am I the only user that cares about large file support?
I think I've emailed Ken about this; he said it's too difficult to do, and to use SQL Server instead. (Which doesn't work, because I use FoxPro interactively to manage mailing lists, and their layouts differ very widely.)