Here are a couple of links that may help you; I just had to use the right Google search to find them.
Use them with care: this is editing the registry, and if you get it wrong all kinds of bad things can happen.
I suspect there will also be a reg key that can turn the adaptive part on or off, but you will need to check whether that is true.
I just took a look at my system and the key/value is not there. My guess is that the adaptive mode does not need the old-style settings key; possibly adding the key will give you the old-style behavior? But that is purely a guess and may be very wrong.
I will say up front that I don't know the details of the various NoSQL systems that are out there.
That said, I've read posts that talk about storing documents... I guess I will have to see what folks are doing. I do know that *IF* you store a bunch of files on a file system, that creates disk IO, and the file system can become fragmented. Even if it is not fragmented, the locations of the files can have the disk drive seeking all over the place to fetch them.
I am wondering how much re-invention of old solutions is going on.
Way back, Novell had special code in NetWare to manage disk IO and file seeks.
And some early email servers had scaling problems when too many mailboxes were stored in one level of folders (old Sendmail/POP3 setups from the mid-'90s).
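The classic workaround for that mid-'90s mailbox problem was to hash each mailbox name into a small tree of subdirectories, so no single directory grows to thousands of entries. A minimal sketch in Python; the two-level scheme and the `spool_path` helper are my own illustration, not any particular mail server's actual layout:

```python
import hashlib
from pathlib import Path

def spool_path(root: str, mailbox: str) -> Path:
    """Spread mailboxes across a two-level directory tree keyed on a
    hash of the mailbox name, so no single directory grows unbounded."""
    digest = hashlib.md5(mailbox.encode()).hexdigest()
    # First two hex digits pick the subdirectories: <root>/<d0>/<d1>/<mailbox>
    return Path(root) / digest[0] / digest[1] / mailbox

print(spool_path("/var/spool/mail", "alice"))
```

With 16 x 16 = 256 buckets, even a million mailboxes average under 4,000 entries per directory, and the hash keeps each mailbox's location deterministic.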
Are we creating a "database server" that manages IO and locking but just skips having SQL to help us with the logic?
So far I have not seen a huge reason to drop SQL servers -- just a lot of "let's follow this trend because it's what they did."
At my prior job I took care of a DB with over 150 gigs of live data, and the server had 32 gigs of RAM.
Sure, at some point the data is larger than the RAM; SQL servers do a good job of handling that.
I was just pointing out that you totally ignored the fact that not every sql query has to generate a ton of disk IO.
Relational tables are written to disk in pages. If I have to join 100 tables to get one or a few records from each, that means I have to read from many pages on disk, and this can be slow to very slow depending on how fragmented your pages are across the disk (and of course how well you set up indexes relative to the particular query plan you are running).
In a document database, all the data for a document is stored together. So if assembling one document's worth of data means joining N tables, in theory a document database could be around N times faster at retrieving it, since everything comes back in one contiguous read.
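A back-of-the-envelope model of that page-read argument; every number here is an illustrative assumption (spinning-disk latencies, pages touched per table), not a benchmark:

```python
# Why a contiguous document read can beat a wide join on spinning disks.
# All constants are made-up illustrative values, not measurements.

SEEK_MS = 8.0   # assumed average seek + rotational latency
READ_MS = 0.1   # assumed transfer time for one page once positioned

def join_cost(tables: int, pages_per_table: int = 2) -> float:
    """Each joined table contributes index + data page reads, each
    potentially a separate seek if the pages are scattered on disk."""
    reads = tables * pages_per_table
    return reads * (SEEK_MS + READ_MS)

def document_cost(pages: int = 2) -> float:
    """One seek to reach the document, then contiguous page reads."""
    return SEEK_MS + pages * READ_MS

print(f"100-table join:  ~{join_cost(100):.0f} ms")
print(f"single document: ~{document_cost():.1f} ms")
```

The model also shows why the next point matters: if the pages are already cached in RAM, the seek term drops to near zero and the gap largely disappears.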
And "30x" sounds like a magic number. I'm not disputing it since I don't know where it comes from, but it's probably either for an average query plan or the average across all common query plans or something else that isn't specific enough to have any idea how it would fare for a particular type of query.
You seem to be assuming the worst case. If the server and database are set up halfway right, the tables and indexes will be in memory; a typical run will have no disk IO at all, just some CPU time to build the result set.
possibly you need to check here:
It's got NoSQL vs. Relational and it's got Erik Meijer! No, really, this paper should be mandatory reading before diving into relational vs. NoSQL discussions.
That link gives me a 404 Not Found; perhaps you can give the title and type of publication so we can find it.
To address what "Hekaton" brings to the table:
OK, it's for sure cool that MS is addressing one of the key things that good database developers and designers have always had to work on: getting a server with enough memory, and then getting the "working set" for a large DB to fit in that memory. That will *ALWAYS* help make things faster.
If they can make that easier, it will be great.
Compiling to native code is also good, if they can better optimize the code that runs in stored procs and the like.
I recall a few years back when another company told our CEO that it was not possible to use SQL Server to handle the volume of data writes their app needed.
I was like, "Really? So what I built can not be built?"
I had set up our system, and it runs a large number of inserts, updates, and selects every day. It was just a matter of how it was designed, and of knowing a few tweaks to tune MS SQL Server.
More than once I have seen someone write code against a database without understanding the relational model, possibly having written some SQL for a different database before. The stuff they did sucked.
In one case I had to re-write a client app that had SQL statements embedded in it, moving the SQL into web services so that we did not have a hundred clients connecting to a server over the internet.
In another case an app was handed over to me that did not need a SQL database at all; the author did not know how to manage some data in memory, and also did not know how to pick random numbers.
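On the random-number point: picking distinct items at random from in-memory data is a one-liner in most standard libraries, with no database round trip needed. A small sketch in Python; the winner-drawing scenario and the `draw_winners` helper are my own illustration of the idea:

```python
import random

def draw_winners(entries: list[str], k: int) -> list[str]:
    """Pick k distinct entries uniformly at random, entirely in memory."""
    if k > len(entries):
        raise ValueError("cannot draw more winners than entries")
    return random.sample(entries, k)  # sampling without replacement

entries = [f"ticket-{i}" for i in range(1000)]
winners = draw_winners(entries, 3)
print(winners)  # three distinct tickets
```

`random.sample` handles the "no duplicates" requirement that hand-rolled loops often get wrong.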
I say this because I know there are cases where a lack of skills in working with a tool can turn into stories of how bad the tool is.
no it's not ....
Sure, speed is important, but it's not the be-all and end-all.
I suspect that we have a lot of developers who do not really "know" what SQL is about.
By that I mean the relational model, relational algebra, and what a DBMS actually gets you.
there are a *LOT* of systems that use SQL RDBMS platforms for good reasons that the 'no sql' stuff just will not replace.
Yeah, if all you need is a fast web site and a bit of data here and there, it's all good.
but that is only one part of the world of data.
PS: MS "worked on SQL Server" long before 2005 -- go look at the history and when MS licensed the Sybase code that became the first version of the MS server... 6.5 to 7.0 was a huge change in how it worked, and then there was SQL 2000.