3 hours ago, magicalclick wrote
Let's not talk about this forum. Seriously, this forum is the worst. It shares the same User Agent failure as many crappy sites. The baseline should have been basic multi-browser HTML5 for all browser modes. The buggy text editor should have been disabled by default and only enabled for supported browsers and modes. Even if the user agent is completely missing, it should serve the non-buggy text editor in basic HTML5 format. Nothing weird, because a random user agent is most likely just a new browser. YouTube shares the same failure, but I believe theirs is deliberate, forcing IE users to use a fake Chrome user agent.
Anyway, I haven't seen a web developer get browser logic right. They tend to write a long list of conditions that always fails when a new browser comes out.
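The "baseline by default" idea above can be sketched in a few lines. This is a hypothetical illustration, not Channel 9's actual code; the function and browser allowlist are made up. The point is the shape of the logic: unknown or missing user agents fall through to the safe basic editor instead of to some broken special case.

```python
# Hypothetical sketch of baseline-by-default user-agent handling.
# Only browsers we have actually tested get the rich editor;
# everything else (including a missing UA) gets plain HTML5.
TESTED_RICH_EDITOR_BROWSERS = {"chrome", "firefox"}

def pick_editor(user_agent):
    if not user_agent:
        return "basic-html5"   # missing UA: assume it's a new/unknown browser
    ua = user_agent.lower()
    for browser in TESTED_RICH_EDITOR_BROWSERS:
        if browser in ua:
            return "rich-editor"
    return "basic-html5"       # unknown browser falls back safely

print(pick_editor(None))                          # basic-html5
print(pick_editor("SomeNewBrowser/1.0"))          # basic-html5
print(pick_editor("Mozilla/5.0 Chrome/120.0"))    # rich-editor
```

Note the allowlist is the opposite of the usual "long list of conditions": a brand-new browser hits the default branch and still gets a working page.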
I know. But it's not just Channel 9. It's an endemic problem across web apps. You can say it's bad coding (and it usually is), but the fact of the matter is ACID-compliant DBs are just slow. They are so slow that they are usually the bottleneck in complicated PHP web apps, to the point that many people say "who cares if your code is slow, your bottleneck is the DB anyway". And they are right. PHP is slow, and poorly written PHP is even slower, but it is no match for the slowness of the DB. So people use memcached (Channel 9 does, for instance) to try to make this fast, but it fails hard for write operations.
I know someone will be like, well, the magic ACID DB I use is super fast. Well, bullshit. It has nothing to do with the software. The slowness comes entirely from the fact that an ACID-compliant DB typically has to wait for a process involving a physically moving part to complete before it can return a success code. This process is measured in milliseconds. Milliseconds. There is no voodoo you can do that makes that fast. I'm not even talking about Google scale. Once you reach maybe 20-30 concurrent writes, you are already going to have a bad time. Even if you buy some $100k DB. It's physics, not something Larry Ellison's magical expensive code can fix.
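The arithmetic behind the "it's physics" claim is worth spelling out. The numbers below are illustrative ballpark figures, not benchmarks, but the conclusion holds for any spinning disk: if every commit must wait for a flush that takes milliseconds, a single spindle serializes to a few hundred commits per second, no matter how fast the database software is.

```python
# Back-of-envelope: a durable commit on a rotational disk waits for
# seek + rotation, on the order of 10 ms. Figures are assumptions.
fsync_latency_ms = 10.0

# Commits that must each wait for their own flush serialize behind it:
serialized_commits_per_sec = 1000.0 / fsync_latency_ms
print(serialized_commits_per_sec)   # 100.0

# A RAM write is on the order of 100 ns, i.e. ~100,000x faster:
ram_write_ms = 0.0001
speedup = fsync_latency_ms / ram_write_ms
print(speedup)                      # 100000.0
```

That gap of roughly five orders of magnitude is why no amount of software tuning closes it while the disk stays in the commit path.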
And you know what? That's not big data, that's a moderately popular website. And that's why so many websites struggle with this. Yeah, I know, someone is like, but I only read from my DB. Well sure, but with Web 2.0 and all that, a site that never writes anything to its DB is no good.
System memory is, of course, orders and orders of magnitude faster. If you can utilize it more for writes and not just reads, you literally get orders of magnitude more I/O throughput without changing the underlying hardware. Sure, you trade some durability, but you know, it's rare for these DBs to crash anyway, especially now that they aren't as overwhelmed. It's a total shift in thinking, but wow, it really makes a difference. Once you are talking about memory-backed DBs, a monkey could write one in Brainf**k and it would outperform the most optimized expert-made ACID DB.
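The memory-for-writes idea is basically a write-behind buffer: acknowledge writes from RAM immediately and flush them to durable storage in batches, so many logical writes share one physical flush. Here's a minimal sketch; the class and names are invented for illustration, and a real one would also flush on a timer and handle crash recovery.

```python
# Minimal write-behind sketch: reads and writes hit memory; durable
# storage sees one batched flush per `batch_size` writes.
class WriteBehindStore:
    def __init__(self, flush_fn, batch_size=100):
        self.flush_fn = flush_fn    # called with a list of pending records
        self.batch_size = batch_size
        self.pending = []           # records not yet made durable
        self.data = {}              # in-memory view, always current

    def put(self, key, value):
        self.data[key] = value      # visible to readers immediately
        self.pending.append((key, value))
        if len(self.pending) >= self.batch_size:
            self.flush()            # one durable write covers many puts

    def get(self, key):
        return self.data.get(key)

    def flush(self):
        if self.pending:
            self.flush_fn(self.pending)  # e.g. one fsync'd batch insert
            self.pending = []

# Demo: "durable storage" is just a list here.
durable = []
store = WriteBehindStore(durable.extend, batch_size=3)
for i in range(3):
    store.put(f"k{i}", i)
print(len(durable))     # 3 -- one flush once the batch filled
print(store.get("k1"))  # 1
```

The durability tradeoff lives in `pending`: a crash loses whatever hasn't been flushed yet, which is exactly the "some durability for a lot of throughput" trade described above.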
Yeah, it's a tradeoff. Almost everything in CS involves tradeoffs. But this is one of those tradeoffs where you gain a major quantity of something for losing a minor quantity of something else. Maybe you don't want to make that tradeoff for, like, a nuclear control system, but it's rare that you'd want to give up orders of magnitude of write performance for a little bit of extra durability (never 100%, of course).