
Discussions

figuerres
  • So is SQL Server 2014 in memory Hekaton gonna crush nosql?

    bondsbw wrote:

    @spivonious:  It all depends on the situation (like I said earlier, not every tool is a hammer because not every fastener is a nail).

    Say you have 500 tables in your DB.  Say one of them is a root table, and the primary purpose of this database is to store some very large and complex set of data that is built off the root.  Say you also need to get that complex set of data back... how is this done for each?

    • In a relational model, this will be a massive and potentially very complex SELECT statement with all kinds of JOIN clauses.
    • In an object model, just ask for the root and you have everything.

    That is just one class of problem.  Relational excels in cases where you need to do reporting on this data.  And frankly, either mechanism will ultimately work because (again as I posted earlier) Relational and NoSQL are duals.
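
    (A quick sketch of that contrast, with invented Order/Customer types; the SQL is only a string here to show the shape of the query, nothing is wired to a real database:)

        using System;
        using System.Collections.Generic;

        class Customer { public string Name; }
        class OrderLine { public string Sku; public decimal Price; }

        // Object model: the whole graph hangs off the root.
        class Order
        {
            public Customer Customer;                          // "has-a"
            public List<OrderLine> Lines = new List<OrderLine>();
        }

        class Demo
        {
            static void Main()
            {
                var order = new Order
                {
                    Customer = new Customer { Name = "Contoso" },
                    Lines = { new OrderLine { Sku = "X-1", Price = 9.99m } }
                };

                // Object model: just ask for the root and walk it, no query needed.
                foreach (var line in order.Lines)
                    Console.WriteLine(order.Customer.Name + ": " + line.Sku + " " + line.Price);

                // Relational model: the same read is a JOIN across the tables.
                const string sql =
                    "SELECT c.Name, l.Sku, l.Price " +
                    "FROM Orders o " +
                    "JOIN Customers c ON c.Id = o.CustomerId " +
                    "JOIN OrderLines l ON l.OrderId = o.Id " +
                    "WHERE o.Id = @orderId;";
            }
        }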

    Right, if the specific case is one where relational does not work well, then for sure I would never try to force it to fit.

    In fact, MS SQL has ways to handle cases like this: keep a bit of tracking data in SQL and use file system links (FILESTREAM) to store the "blob" of non-relational data on the file system, then do what you need to with that chunk of storage.

    In such a case I would only use SQL for some basic lookup and to track the native object list, *if* doing that was helpful. If it was not, then I would not force it.
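
    Rough sketch of what I mean (table, columns, and path are all made up; SQL Server's FILESTREAM can also manage the same linkage for you):

        using System;
        using System.Data.SqlClient;
        using System.IO;

        class BlobStore
        {
            // The blob lives on the file system; SQL keeps only the lookup row.
            static void SaveBlob(string connectionString, Guid id, byte[] blob)
            {
                string path = Path.Combine(@"D:\BlobStore", id + ".bin");
                File.WriteAllBytes(path, blob);        // the non-relational chunk

                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(
                    "INSERT INTO BlobIndex (Id, BlobPath, SavedAt) VALUES (@id, @path, @at)",
                    conn))
                {
                    cmd.Parameters.AddWithValue("@id", id);
                    cmd.Parameters.AddWithValue("@path", path);
                    cmd.Parameters.AddWithValue("@at", DateTime.UtcNow);
                    conn.Open();
                    cmd.ExecuteNonQuery();             // SQL only does the basic lookup
                }
            }
        }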

    I reworked a game app one time where the original programmer used SQL when it was 100% not needed. It was dumb as heck and made installing the app a pain.

    I used an in-memory object model and wrote some CSV files to do reporting at the end of a run for the customer, and the result was a faster, smaller app that did what they needed.
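
    Roughly the shape of it, with made-up record types:

        using System.Collections.Generic;
        using System.IO;
        using System.Linq;

        class RunRecord { public string Player; public int Score; }

        class Report
        {
            // No database: a list during the run, one CSV file at the end.
            static void WriteReport(List<RunRecord> records, string path)
            {
                var lines = new[] { "Player,Score" }
                    .Concat(records.Select(r => r.Player + "," + r.Score));
                File.WriteAllLines(path, lines);
            }
        }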

    Like you said, it's not just nails and hammers. Use the right tool for the job.

  • So is SQL Server 2014 in memory Hekaton gonna crush nosql?

    spivonious wrote:

    @figuerres: Yep, nothing is going to fix bad code.

    Am I the only one who finds writing SQL very natural? The relational model makes a lot of sense for most business purposes. I also find that it ties into OOP very readily. Child tables turn into "has-a" relationships. Flags on a table turn into "is-a" relationships.

    I am with you there; I do find some cases where the SQL-to-OO mapping can be tricky, but for a *LOT* of cases I have no problem.
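
    A tiny sketch of the mapping spivonious describes, all names invented:

        using System.Collections.Generic;

        // Child table InvoiceLines -> "has-a" collection on the parent.
        class Invoice
        {
            public int Id;
            public List<InvoiceLine> Lines = new List<InvoiceLine>();
        }
        class InvoiceLine { public string Item; public decimal Amount; }

        // Flag column (say, IsPreferred on Customers) -> "is-a" subtype.
        class Customer { public string Name; }
        class PreferredCustomer : Customer { public decimal Discount; }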

    Just as, when I have a case where I do not need or cannot use a DB, I find it very easy and natural to use LINQ to Objects and a bit of code to make my own app-specific "NoSQL" system.

    And I just got done with a project where I had to pull a bunch of data from some hardware and used LINQ to manage lists of data in memory, and the data that went into those lists had a lot of good OO design in it.
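
    Along those lines, a minimal LINQ to Objects sketch (the types and fields are invented, not from the actual project):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Reading { public string Sensor; public DateTime At; public double Value; }

        class HardwareLog
        {
            static void Main()
            {
                // The in-memory "NoSQL" store: just a list of objects.
                var readings = new List<Reading>
                {
                    new Reading { Sensor = "temp-1", At = DateTime.UtcNow, Value = 21.5 },
                    new Reading { Sensor = "temp-1", At = DateTime.UtcNow, Value = 22.0 },
                    new Reading { Sensor = "temp-2", At = DateTime.UtcNow, Value = 19.8 },
                };

                // LINQ does the query work a DB would otherwise do.
                var bySensor = readings
                    .GroupBy(r => r.Sensor)
                    .Select(g => new { Sensor = g.Key, Avg = g.Average(r => r.Value) });

                foreach (var row in bySensor)
                    Console.WriteLine(row.Sensor + ": " + row.Avg);
            }
        }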

    I think that if you have the skills and know how to think it out, most of the time this stuff does not need to be that hard.

    I think we have a few cases of folks looking at the 5% or less of corner cases and making it sound like they are showing the world a huge Y2K-style problem that's going to change everything,

    while 90% of the development world marches on without a problem...

  • So is SQL Server 2014 in memory Hekaton gonna crush nosql?

    AndyC wrote:

    *snip*
    It's not just about losing data if a server crashes, it's about not even adding up the numbers consistently. Pretty sure you'd notice real soon if the multiple payments going in and out of your bank didn't add up at the start of the month.

    And there has been at least one bitcoin repository that lost millions of dollars worth of people's money because they utterly failed to implement transactional consistency correctly, something that would have been a non-issue with SQL. Which is also why all those trading floors are, in fact, pumping millions of transactions through SQL, because the result of any one of those just not adding up is financial ruin for someone.

    LOL.....

    A few years back I replaced a system that:

    1) used float data types for money,

    2) did not use transactions,

    3) did not use stored procs,

    4) had SQL strings in the client software, with many clients connecting directly to the server.

    It managed customer account balances, and at peak workload times the balance would go up and down in ways that allowed orders/withdrawals that were in fact past the balance for the account, causing losses for the business in real $$$$.

    AND it had rounding and truncation errors when computing totals, especially on fractions (think interest calculations going wrong).
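
    That float-for-money failure is easy to reproduce; a small self-contained demo, purely as illustration:

        using System;

        class MoneyDemo
        {
            static void Main()
            {
                // Add a tenth of a cent a thousand times.
                float floatTotal = 0f;
                decimal decimalTotal = 0m;
                for (int i = 0; i < 1000; i++)
                {
                    floatTotal += 0.001f;
                    decimalTotal += 0.001m;
                }
                Console.WriteLine(floatTotal.ToString("R")); // not exactly 1: binary rounding drift
                Console.WriteLine(decimalTotal);             // exactly 1.000
            }
        }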

    Yeah, and then I hear folks saying that "laziness is a virtue"... Well, if you have the skills to know where and when you can cut corners, that's one thing. But the folks who write code like that are creating work for me on one hand, and on the other they are making a lot of people not trust developers to know how to write good code, and I am not sure I want that part of it.

    So be a fool and write crap, and then I get to do the job a second time for the customer and show that not all developers are morons. Accountants and business owners like working with me because they know I do not pull that kind of crap with the numbers they need to run the business.
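
    For contrast, the boring-but-correct shape of that kind of update (all names invented; the point is proc + parameters + transaction + decimal):

        using System.Data;
        using System.Data.SqlClient;

        class AccountService
        {
            static void Withdraw(string connectionString, int accountId, decimal amount)
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    using (var tx = conn.BeginTransaction())
                    using (var cmd = new SqlCommand("dbo.WithdrawFromAccount", conn, tx))
                    {
                        cmd.CommandType = CommandType.StoredProcedure;
                        cmd.Parameters.Add("@AccountId", SqlDbType.Int).Value = accountId;
                        cmd.Parameters.Add("@Amount", SqlDbType.Decimal).Value = amount; // never float for money
                        cmd.ExecuteNonQuery();
                        tx.Commit(); // balance check and debit succeed or fail together
                    }
                }
            }
        }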


  • So is SQL Server 2014 in memory Hekaton gonna crush nosql?

    Bass wrote:

    @AndyC:

    Sorry, not true. Durability does not matter for forums, blogs, or pretty much the entire Internet sans banking, and I question that as well (small chance of a transaction going missing? If the average loss from losing a transaction is only in the thousands of dollars, is that really a big deal? Arguably, the massive performance gain and lower hardware costs write buffering gives you is still more valuable than protecting against the incredibly tiny probability of lost transactions.). As RealBoy mentioned, apparently there are lots of financial services these days using MongoDB and might not even enable journaling. Not surprising. Even more interesting: how about high-frequency trading (talking about potentially billions of dollars in transactions per second)? Do you think they wait on a sync call to SQL DB to document their ridiculous number of trades? So that their competitors can come and beat them to it? Don't think so.

    Do you know what matters though? Availability! Responsiveness! ACID guarantees reduce these things significantly.


    Well, most accountants I have ever dealt with would find random losses of transactions to be kind of a "BIG DEAL" -- like, so it's OK if we lose your paycheck? You do not mind a "small loss" every couple of weeks?

    wow....

  • So is SQL Server 2014 in memory Hekaton gonna crush nosql?

    bondsbw wrote:

    @swheaties:  Amen.  This is really what I was talking about.  SQL is one of the only services, and definitely the most popular, where the most common API is strings.


    What about JSON? That is also a pile of strings with some braces tossed in for fun, but folks love that...
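
    I mean, look at them side by side; both "APIs" are just strings on the wire (sample payloads made up):

        class WireFormats
        {
            const string Sql  = "SELECT Name, Age FROM People WHERE Id = @id;";
            const string Json = "{ \"name\": \"Ada\", \"age\": 36 }";
        }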

  • Need documentation for DNS Client settings in Windows 8.1

    Here are a couple of links that may help you; I just had to use the right Google search to find them.

    Use with care, as this is editing the registry, and if you get it wrong all kinds of bad stuff can happen.

    http://drewthaler.blogspot.com/2005/09/changing-dns-query-timeout-in-windows.html

    http://blogs.technet.com/b/stdqry/archive/2011/12/15/dns-clients-and-timeouts-part-2.aspx

    I suspect there will also be a registry key that can turn the adaptive part off or on, but you will need to check whether that is true.

    I just took a look at my system and the key/value is not there. A guess is that the adaptive behavior does not need the old-style settings key; possibly adding the key will give you the old-style behavior?

    But that is totally a guess and may be very wrong.
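
    If you want to poke at it yourself, here is a read-only check; the value name DNSQueryTimeouts is the one the linked posts talk about, so treat it as unverified on 8.1:

        using System;
        using Microsoft.Win32;

        class DnsTimeoutCheck
        {
            static void Main()
            {
                // Dnscache service parameters; this only reads, so it is safe to run.
                using (var key = Registry.LocalMachine.OpenSubKey(
                    @"SYSTEM\CurrentControlSet\Services\Dnscache\Parameters"))
                {
                    object value = key == null ? null : key.GetValue("DNSQueryTimeouts");
                    Console.WriteLine(value == null
                        ? "DNSQueryTimeouts not set (adaptive defaults presumably in effect)"
                        : "DNSQueryTimeouts is present");
                }
            }
        }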


  • So is SQL Server 2014 in memory Hekaton gonna crush nosql?

    I will have to say first that I do not know the details of the different NoSQL systems that are out there.


    That said, I read posts that talk about storing documents... I guess I will have to see what folks are doing. I know that *IF* you store a bunch of files on a file system, that creates disk IO, and you can have fragmented file systems; even if the file system is not fragmented, the locations of files can lead to the disk drive seeking all over the place to get different files.

    I am wondering what kind of reinvention is going on here.

    Way back, Novell had special code in NetWare to manage disk IO for seeking files.

    And some early email servers had scaling issues when too many mailboxes were stored in one level of folders (old sendmail/POP3 stuff from the mid-'90s).

    Are we creating a "database server" that manages IO and locking but just skips having SQL to help us with the logic?

    So far I have not seen a huge reason for not having SQL servers -- just a lot of "let's follow this trend because it's what they did."


  • So is SQL Server 2014 in memory Hekaton gonna crush nosql?

    bondsbw wrote:

    *snip*

    If you have several gigabytes or terabytes of data, then I doubt all of your tables and indexes will be in memory.

    If we're talking about some tiny database, then what's to debate?  Most anything you throw at it will be fast enough.

    At my prior job I took care of a DB with over 150 GB of live data, and the server had 32 GB of RAM.

    Sure, at some point the data is larger than the RAM. SQL Server does a good job of handling that.

    I was just pointing out that you totally ignored the fact that not every SQL query has to generate a ton of disk IO.

  • So is SQL Server 2014 in memory Hekaton gonna crush nosql?

    bondsbw wrote:

    *snip*

    Relational tables are written to disk in pages.  If I have to join 100 tables to get one or a few records from each, that means I have to read from many pages on disk, and this can be slow to very slow depending on how fragmented your pages are across the disk (and of course how well you set up indexes relative to the particular query plan you are running).

    In a document database, all the data for a document is stored together. So if a document spans N tables, in theory a document database would be around N times faster at retrieving all the data for a single document.

    And "30x" sounds like a magic number.  I'm not disputing it since I don't know where it comes from, but it's probably either for an average query plan or the average across all common query plans or something else that isn't specific enough to have any idea how it would fare for a particular type of query.

    You seem to be assuming the worst case. If the server and database are set up halfway right, then the tables and indexes will be in memory; a more normal run will have no disk IO, just some CPU time to build the result set.

  • Using Microsoft FAXComLib on a client-server configuration

    Possibly you need to check here:

    http://www.interfax.net/en/dev/faxcomexlib/operation-failed#sub1