

Bass: I need better writers.
  • So is SQL Server 2014 in memory Hekaton gonna crush nosql?

    So really, I'm going all weapons blazing against SQL DBs. I just have a lot of bad memories. I'm largely fed up with the whole idea of SQL in general: come on, this bizarre language-within-a-language that we use. And really the whole relational model, which doesn't fit the modern way of OO development.

    I know people might ask "what about complex queries?", but MapReduce is a legitimate answer for that. I also feel it's a lot easier to wrap your head around what MapReduce code does than some complicated SQL query. MapReduce scales horizontally too, and is arguably very flexible in the kinds of questions you can answer with the data. And for stuff you always need around, maybe you don't want to run a MapReduce query over and over; you can accumulate on the fly and simply store the result as another document in the DB.

    When SQL stops being this major thing, life as a developer will be a lot better! You will get more sleep and live a happier life just by avoiding SQL. I'm not even joking. :) Consider the * health benefits of not using SQL. :D

    I can probably keep going on forever on this, but I'm venting enough I think.
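The MapReduce style of querying described above can be sketched in plain Python. This is a toy single-process version (the function and field names are invented for illustration; real systems like Hadoop or MongoDB's mapReduce distribute these phases across machines), counting orders per customer — the kind of aggregate a SQL GROUP BY would normally handle:

```python
from collections import defaultdict

def map_phase(doc):
    # Emit (key, value) pairs; here: one count per customer per order.
    yield (doc["customer"], 1)

def shuffle(pairs):
    # Group all emitted values by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Collapse each key's values into a single result.
    return key, sum(values)

def map_reduce(docs):
    pairs = (pair for doc in docs for pair in map_phase(doc))
    return dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())

orders = [
    {"customer": "alice", "total": 30},
    {"customer": "bob", "total": 15},
    {"customer": "alice", "total": 5},
]
print(map_reduce(orders))  # {'alice': 2, 'bob': 1}
```

Each phase is independent, which is what lets the map and reduce steps run in parallel across shards — the "scales horizontally" part of the argument.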

  • So is SQL Server 2014 in memory Hekaton gonna crush nosql?

    Also just throwing Redis out there too. If you want something to dominate even MongoDB in performance, it's going to be really hard to top Redis. Redis could probably turn a netbook into the equivalent of a $500k database server. The difference is that Redis is literally a memory-backed K->V DB.

    It will lazily store parts of your DB on an HDD, but the DB has to fit in memory. Your dataset literally has to fit in RAM or you'll have a bad time.

    Don't expect too much relational capability (MongoDB has more), but it can support fairly arbitrary object models relatively easily. This is the thing you want if you need to store and process metric tons of data quickly and you have a lot of system memory available.
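The Redis model described above — a memory-backed K->V store with lazy persistence — can be sketched as a toy Python class (class and method names are invented for illustration, and JSON stands in for Redis's RDB snapshot format; real Redis is far more sophisticated):

```python
import json
import os
import tempfile

class ToyKV:
    """Toy memory-backed K->V store, loosely in the spirit of Redis:
    all reads and writes hit an in-memory dict; the dataset is only
    persisted when snapshot() is called (like Redis RDB saves)."""

    def __init__(self, path):
        self.path = path
        self.data = {}
        if os.path.exists(path):
            with open(path) as f:
                self.data = json.load(f)  # load last snapshot, if any

    def set(self, key, value):
        self.data[key] = value  # memory-speed write, no disk I/O

    def get(self, key, default=None):
        return self.data.get(key, default)

    def snapshot(self):
        # Lazy persistence: anything written since the last snapshot
        # would be lost in a crash -- that's the durability tradeoff.
        with open(self.path, "w") as f:
            json.dump(self.data, f)

path = os.path.join(tempfile.mkdtemp(), "toy.rdb")
db = ToyKV(path)
db.set("user:1", {"name": "Bass"})
db.snapshot()
print(ToyKV(path).get("user:1"))  # {'name': 'Bass'}
```

Every operation between snapshots runs at memory speed, which is the whole pitch — and the whole risk.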

  • 100 million downloads of Apache OpenOffice

    Yeah I use Google Docs mostly but even that is rare.

  • So is SQL Server 2014 in memory Hekaton gonna crush nosql?

    magicalclick wrote:


    Let's not talk about this forum. Seriously, this forum is the worst. It shares the same User-Agent failure as many crappy sites. The baseline should have been basic multi-browser HTML5 for all browser modes. The buggy text editor should have been disabled by default in the baseline and only enabled for supported browsers and modes. Even if the user agent is completely missing, it should serve the non-buggy text editor in basic HTML5 format, nothing weird, because a random user agent is likely a new browser. YouTube has the same fail, but I believe it is deliberate, forcing IE users to use a fake Chrome user agent.

    Anyway, I haven't seen a web developer get browser-detection logic right. They tend to write a long list of conditions that always fails when a new browser comes out.

    I know. But it's not just Channel 9, it's an endemic problem across web apps. You can say it's bad coding (and it usually is), but the fact of the matter is that ACID-compliant DBs are just slow. They are so slow that they are usually the bottleneck in complicated PHP web apps, to the point that many people say "who cares if your code is slow, your bottleneck is the DB anyway". And they are right. PHP is slow, and poorly written PHP is even slower, but it is no match for the slowness of the DB. So people use memcached (Channel 9 does, for instance) to try to make this fast, but it fails hard for write operations.

    I know someone will say the magic ACID DB they use is super fast. Well, bullshit. It has nothing to do with the software. The slowness comes entirely from the fact that ACID-compliant DBs typically have to wait for a process involving a physically moving part to complete before they can return a success code. This process is measured in milliseconds. Milliseconds. There is no voodoo that makes that fast. I'm not even talking about Google scale. Once you reach maybe 20-30 concurrent writes, you are already going to have a bad time, even if you buy some $100k DB. It's physics, not something Larry Ellison's magical expensive code can fix.

    And you know what? That's not big data, that's a moderately popular website. And that's why so many websites struggle with this. Yeah, I know, someone will say "but I only read from my DB". Well sure, but with Web 2.0 and all that, never writing anything to your DB isn't realistic.

    System memory is, of course, orders and orders of magnitude faster. If you can utilize it more for writes and not just reads, you get literally orders of magnitude more I/O throughput without having to change the underlying hardware. Sure, you trade some durability, but you know, it's rare for these DBs to crash anyway, especially now that they aren't as overwhelmed. It's a total shift in thinking, but wow, it really makes a difference. When you are talking about these memory-backed DBs, a monkey could write one in Brainf**k and it would outperform the most optimized expert-made ACID DB.

    Yeah, it's a tradeoff. Almost everything in CS involves tradeoffs. But this is one of those tradeoffs where you gain a major quantity of something for losing a minor quantity of something else. Maybe you don't want to make that tradeoff for, like, a nuclear control system, but it's rare that you want to give up orders of magnitude of write performance for a little bit of increased durability (never 100%, of course).
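The memcached problem above — caching helps reads but not writes — can be sketched as a toy cache-aside setup in Python (all names invented for illustration; `SlowStore` stands in for an ACID DB whose every operation waits on the disk):

```python
class SlowStore:
    """Stand-in for a durable DB: every read or write costs a disk op."""
    def __init__(self):
        self.rows = {}
        self.disk_ops = 0  # stand-in for millisecond-scale disk waits

    def read(self, key):
        self.disk_ops += 1
        return self.rows.get(key)

    def write(self, key, value):
        self.disk_ops += 1  # unavoidable: durability means hitting disk
        self.rows[key] = value

cache, db = {}, SlowStore()

def get(key):
    # Cache-aside read: only misses touch the slow store.
    if key not in cache:
        cache[key] = db.read(key)
    return cache[key]

def put(key, value):
    db.write(key, value)   # writes always pay the full disk cost
    cache.pop(key, None)   # invalidate so readers see fresh data

put("post:1", "hello")
for _ in range(100):
    get("post:1")          # 1 miss, then 99 memory-speed hits
print(db.disk_ops)  # 2 -> one write + one read miss; 100 reads cost almost nothing
```

A read-heavy workload collapses to a handful of disk ops, but every `put` still pays full price — which is why a cache can't save a write-heavy site.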

  • So is SQL Server 2014 in memory Hekaton gonna crush nosql?

    figuerres wrote:


    Well, most accountants I have ever dealt with would find random losses of transactions to be kind of a "BIG DEAL" -- like, so it's ok if we lose your paycheck? You do not mind a "small loss" every couple of weeks?


    It's a small chance, as in: if the server crashes, some limited amount of data written but not yet synced to the hard disk would be lost. In exchange, you gain significantly more responsiveness and availability. With Mongo and other DBs, it's up to you to decide if this tradeoff is worth it.

    But no DB can truly guarantee that you won't lose data, because that's impossible. Possible data loss is the nature of living in the Universe. It's fair to say that no DB actually has perfect durability: show me a SQL DB that can protect against natural disasters, for instance. Even when you do backups of backups, you are just reducing probabilities; you are never eliminating the possibility of data loss. So let me ask you this: are you doing everything you can to eliminate data loss? I promise you, you aren't, because however many replicas X you make, X+1 replicas will always be probabilistically better.
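The tradeoff being argued here — syncing every write versus buffering and syncing in batches — can be sketched in Python (a toy with invented names; `os.fsync` stands in for the millisecond-scale disk wait an ACID DB pays per transaction):

```python
import os
import tempfile

def acid_writes(path, records):
    """Sync after every record: each write blocks on the disk."""
    syncs = 0
    with open(path, "a") as f:
        for rec in records:
            f.write(rec + "\n")
            f.flush()
            os.fsync(f.fileno())  # wait until it's physically on disk
            syncs += 1
    return syncs

def buffered_writes(path, records, batch=50):
    """Acknowledge at memory speed, sync in batches. A crash between
    syncs loses only the unsynced tail -- the 'small chance' tradeoff."""
    syncs = 0
    with open(path, "a") as f:
        for i, rec in enumerate(records, 1):
            f.write(rec + "\n")
            if i % batch == 0:
                f.flush()
                os.fsync(f.fileno())
                syncs += 1
    return syncs

tmp = tempfile.mkdtemp()
records = ["txn %d" % i for i in range(100)]
print(acid_writes(os.path.join(tmp, "a.log"), records))      # 100 disk waits
print(buffered_writes(os.path.join(tmp, "b.log"), records))  # 2 disk waits
```

Same 100 records land on disk either way; the buffered version just pays the physical wait 2 times instead of 100, at the cost of a potentially lost tail on a crash.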

  • 100 million downloads of Apache OpenOffice


    I'm not really an OpenOffice/LibreOffice user, but apparently some people are. This doesn't count Linux distros that bundle it, or LibreOffice, which is a popular fork of it.

  • So is SQL Server 2014 in memory Hekaton gonna crush nosql?

    Server Error in '/' Application.

    Runtime Error

    Description: An exception occurred while processing your request. Additionally, another exception occurred while executing the custom error page for the first exception. The request has been terminated.


    Remember guys, SQL reliability for the win! Channel 9 totally never randomly loses your posts, because it's using SQL on the back end!

  • So is SQL Server 2014 in memory Hekaton gonna crush nosql?


    Sorry, not true. Durability does not matter for forums, blogs, or pretty much the entire Internet sans banking, and I question even that (a small chance of a transaction going missing? If the average loss from a lost transaction is only in the thousands of dollars, is that really a big deal? Arguably, the massive performance gain and lower hardware costs write buffering gives you are still more valuable than protecting against the incredibly tiny probability of lost transactions). As RealBoy mentioned, apparently there are lots of financial services these days using MongoDB, and some might not even enable journaling. Not surprising. Even more interesting: what about high-frequency trading (we're talking potentially billions of dollars in transactions per second)? Do you think they wait on a sync call to a SQL DB to document their ridiculous number of trades, so their competitors can come and beat them to it? Don't think so.

    Do you know what matters though? Availability! Responsiveness! ACID guarantees reduce these things significantly.

    And that's just durability. Many web services using SQL backends also throw away the idea of a consistent global state so they can do cheap caching. For instance Wikipedia, which is MySQL-backed: instead of doing a DB call on page request, it usually sends you an already-rendered page from a geographically local cache. This will not necessarily show you the most up-to-date version of the page, because the cost of guaranteeing that globally while still having a responsive website requires hardware that hasn't been invented yet. The DNS system is, of course, hugely important to the functioning of the whole Internet. It's also a really notable example of eventual consistency in action.

    It's funny though: even websites (like Channel 9!) with SQL backends will regularly just drop posts when their DB is overloaded. I get some random exception, and bam, my post doesn't go through, and if I hit back, it's gone. This happens on so many websites. Who gives a * if the database is "durable" when the website crashes while you are trying to use it, losing your state anyway?

    All this is moot in MongoDB anyway, because it supports durability as a configuration option. It also inherently has consistency for single requests (and can take that further than most SQL DBs) as long as you aren't sharding. It just doesn't give the illusion that any of this is free. Yes, hitting the disk on every write is pretty expensive. Often the first thing a MongoDB newb does is enable journaling, even when they don't really need it. But sure, if you have some blog with 100 views per day and your web server is only in I/O wait for 1% of its available time, why not? Just hope you never get on the front page of Slashdot or Reddit. :)
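The eventual-consistency idea mentioned above (Wikipedia's caches, DNS) can be sketched as a toy primary/replica pair in Python (all names invented for illustration; real systems replicate continuously in the background rather than on demand):

```python
from collections import deque

class ReplicatedKV:
    """Toy eventually-consistent store: writes land on a primary and are
    replicated asynchronously, so a reader hitting a replica may briefly
    see stale data until replication catches up."""

    def __init__(self):
        self.primary = {}
        self.replica = {}
        self.replication_log = deque()  # pending async updates

    def write(self, key, value):
        self.primary[key] = value       # acknowledged immediately
        self.replication_log.append((key, value))

    def read_replica(self, key):
        return self.replica.get(key)    # cheap, local, possibly stale

    def replicate(self):
        # In a real system this runs continuously in the background.
        while self.replication_log:
            key, value = self.replication_log.popleft()
            self.replica[key] = value

db = ReplicatedKV()
db.write("page:Home", "v2")
print(db.read_replica("page:Home"))  # None -> stale read, replication lagging
db.replicate()
print(db.read_replica("page:Home"))  # 'v2' -> replicas eventually converge
```

The replica answers instantly from local memory; the price is the window where it hasn't seen the write yet — exactly the bargain Wikipedia's page caches and DNS make.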

  • So is SQL Server 2014 in memory Hekaton gonna crush nosql?

    Another thing to add. Every time I go to a forum or random website and the whole thing crashes with mysql_* errors (which, hilariously, is often), I think, "well, at least they aren't losing any data ['that was accepted' :)]". It's not a bug, it's a feature! Total win on that developer's part!

    At least he can know MySQL is hard at work prioritizing against the 0.001% chance it might lose some important data, like people's last 30 seconds of blog comments. Anything else is totally like writing to /dev/null!!

  • So is SQL Server 2014 in memory Hekaton gonna crush nosql?

    I just want to add that too much of database design is about these weird corner cases. Well, what if my database server explodes? Well, holy shit man, that's a problem. Better code that condition into our database. These SQL DB developers design all these random features to keep our data safe from supernovas and *, and they forget to design the database for the simplest use case: storing and retrieving data.

    Literally, I cannot store data in a SQL database without either using some ridiculous COBOL-inspired language that cannot function well with the rest of my code base, or using some bizarre, hacky mess of a framework called an ORM that you literally have to be a masochist to enjoy working with. Just to do the most basic of operations. This is why SQL databases are garbage salad.

    I was a SQL true believer once, just a few years ago. When I was first made aware of MongoDB, which I had never heard of, I was like, let's do a YouTube search. And well, f**k, the first result was that "MongoDB is web scale" video. It was hilarious. MongoDB is like writing to /dev/null! Ha ha! Stupid hipsters! I was showing that video to everyone: lol, how stupid must people be for not using the tried and true MySQL DB with its built-in nuclear apocalypse mitigation. *puts on sunglasses*

    Until I started actually using MongoDB. It was like the best thing that ever happened. Holy shit, you can write stuff to a database simply by calling "save" on the object as-is? I don't need to design a schema or work with ORMs or SQL or all that nonsense? I just call a single method? Holy crap man, I've been missing out on this all this time? After a while of using NoSQL, it became obvious that developers who like SQL are (1) ignorant of the options, or (2) completely batshit insane. There exist no alternate possibilities.

    That's the reason why NoSQL is getting huge: developers are becoming enlightened to it. And that is totally awesome.

    The End.