This is a pretty big deal, considering the last CEO called Linux a cancer. :)
That's a different website, coincidentally going by a similar name. I'm sorry, but this won't be as easy as repeating the Google searches I did. :)
And no, it is not impossible to sanitize user input. That's a pretty odd claim.
Yes, it actually is impossible to sanitize user input against all arbitrary vulnerabilities (Cohen, 1998). You can sanitize against some list of known vulnerabilities, but sanitizing against all possible vulnerabilities is provably beyond computability - in the "non-computable function (e.g. the halting problem)" sense. Finally, Turing machines are relevant here. :)
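The easier half of that claim doesn't even need the computability argument - blocklist-style filtering falls over on its own. A toy sketch (naive_sanitize here is a hypothetical, deliberately broken filter, not any real library):

```python
import re

def naive_sanitize(s):
    # Hypothetical blocklist filter: strip "<script>" tags in one pass
    return re.sub(r"<script>", "", s, flags=re.IGNORECASE)

# A payload crafted so that removing the inner tag reassembles the outer one
payload = "<scr<script>ipt>alert(1)</script>"
print(naive_sanitize(payload))  # → <script>alert(1)</script>
```

The filter removes the inner `<script>` and thereby splices the surrounding fragments into exactly the tag it was trying to remove.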
And it's not about human readability either - Python doesn't suffer from this, and nor do WScript files or batch files. It's about the way HTML is generated on the fly as though it were a text file, instead of recognizing that it's a DOM.
Right. It's the same underlying problem, though: if I generate dynamic Python files by cat'ing user data into them, I get the same vulnerability. But that would be silly, no? That's not the correct way to develop web applications either.
I think I get what you are trying to say. So your beef with HTML is that it is human readable: because HTML is understandable to humans, people will write code that cats user data straight into the page. But that's just a bad programming practice, and easily avoided. You can in theory serve up any kind of data this way, not just HTML. But people don't understand machine code, for instance, so they are less likely to program in this manner. At the same time, the fact that HTML is human readable is a big reason for its success and popularity, I'd think. :)
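For concreteness, here's what that anti-pattern looks like in Python next to the one-line fix of escaping at the output boundary (html.escape is stdlib; the page fragment is made up):

```python
import html

user = "<img src=x onerror=alert(1)>"  # attacker-controlled input

# The "cat user data into the page" anti-pattern:
unsafe = "<p>Hello, " + user + "</p>"  # markup is injected verbatim

# Same concatenation, but escaping at the output boundary:
safe = "<p>Hello, " + html.escape(user) + "</p>"
print(safe)  # → <p>Hello, &lt;img src=x onerror=alert(1)&gt;</p>
```

In the escaped version the browser renders the payload as inert text instead of executing it.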
It's a bit like why we even use the von Neumann architecture to begin with, the code-as-data idea leads to many problems but also simpler and more flexible computer systems. Security is important, but it's only a tangential concern. If you give security concerns veto rights over all decision making the end result always ends up being a non-functioning system. Because a non-functioning system is the only truly secure system.
If it's security FUD, surely it won't be hard for you to find a single counter-example. Come on, Bass, I'm really low-balling it for you here. Just ONE.
And yet it doesn't affect other platforms that allow a translation between code and data - like WinForms. You can reflectively load code from a string, but I can point to literally hundreds of WinForms apps that never did, and are secure - but you cannot point to a single non-trivial website that has never had an XSS or CSRF vulnerability.
No. I'm saying that XSS and CSRF are fundamental problems in the design of the web, because substantially all websites have had one or the other or both at some point in their past - even when designed by the best engineers in the industry, like the engineers at Microsoft, Google, Facebook and Twitter, and at companies with huge wallets like banks.
If you can't even find a single example of a non-trivial website that has never had XSS or CSRF, then they are not "just an implementation bug" or a problem with the fact that the web's languages are Turing-complete - it is an inherent and critical flaw in the underlying system itself.
XSS and CSRF are systemic platform vulnerabilities in the way that memory-corruption vulnerabilities are systemic platform vulnerabilities in C/C++ - and the fact that XSS and CSRF exist is not because the web is a von Neumann architecture (which it isn't), but because the web was fundamentally mis-designed in the 90s.
If HTML weren't generated by scripts but by direct manipulation of a DOM, and if HTTP were stateful instead of stateless, these vulnerabilities would simply not exist. The fact that they do is because of a foundational problem with how the web was designed.
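A sketch of that first point, using Python's stdlib ElementTree as a stand-in for a DOM: user data assigned to a text node cannot become markup, because the serializer escapes it structurally rather than relying on the programmer to remember:

```python
import xml.etree.ElementTree as ET

user = "<script>alert(1)</script>"  # attacker-controlled input

p = ET.Element("p")
p.text = "Hello, " + user  # assigned as a text node, not parsed as markup
print(ET.tostring(p, encoding="unicode"))
# → <p>Hello, &lt;script&gt;alert(1)&lt;/script&gt;</p>
```

The injection is impossible by construction here, because "element" and "text" are distinct types in the tree rather than regions of one big string.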
Hacker News maybe? I couldn't find anything on Google about it having a XSS vuln in its past, and I figure if it did someone would find one given its audience. :)
I provided evidence (see Cohen, 1998) that any system that allows user input can be compromised - and not only that, but that we will NEVER truly fix this, because it's mathematically impossible to perfectly sanitize user input against all possible vulnerabilities.
All in all, most garden-variety XSS vulnerabilities are actually fairly trivial to avoid and come from sloppy coding. Obviously, there is always someone sneakier than you think, but remember: mathematically impossible. :)
And then I listed all of the major tech companies, and several banks.
But yeah, I look forward to your counterexample.
Still not substantially all. I'm sorry if it seems like I'm being pedantic, but you made a really egregious claim that came off as security FUD.
Any system that treats data and code interchangeably requires active mitigation, and even so, it is impossible to guarantee safety, as Cohen has proven. Since von Neumann systems treat code and data equivalently, it follows that it is actually impossible to guarantee that a user input will not be malicious. So forgive me for not understanding how your beef with the web doesn't affect every other von Neumann system in existence.
But let me understand: your argument is that the web itself is flawed because popular and complex software systems that happened to interact with the web have had vulnerabilities in the past?
That's not how it works. You made the claim, you have to do the work to justify it.
But I'll get you started by ruling out Bing, Google, Yahoo, Twitter, Netflix, Gmail, Facebook, about 50 XSSes in OWA, SharePoint and WordPress, in jQuery itself, in mail.ru, hell - even Uber, banks including NatWest and Bank of America, LinkedIn, Pinterest, Hotmail, Healthcare.gov - the list goes on.
Thanks, that's only a tiny fraction of the web too. You have a lot more to go to get to substantially all.
I've security reviewed thousands of different websites, across Microsoft, Google, Twitter, Yahoo, a huge number of financial institutions, major supermarkets, gambling websites, government agencies, defense companies, startups, you name it.
And yet I have never seen a non-trivial website that didn't have at least one XSS. They always have an XSS somewhere, and if not now, at some point in their history.
Hell, just look at Hackerone. Most of those bugs you see there are XSS and CSRF bugs. And literally every single company that lists itself there gets popped because the web's foundation is made of sand.
So yes. I stand by that assertion.
But, please, be my guest and prove me wrong.
Find me a single example of a non-trivial website that has never had an XSS. Go on. I dare you.
And I'm the material being of Vishnu-Jesus-Ra, the creator of reality. Mark my holy words: you are wrong.
Listen, all I know about you right now is you are just an anonymous person on the Internet capable of conversing in the English language. But that's not even relevant. If you can't articulate your argument in a logical, scientific manner, it doesn't even matter who you actually are.
1. Because websites are not running on a simple von Neumann architecture, even though they are Turing-complete (code and data are not directly interchangeable, except via explicit translation gates, such as "eval").
2. Ask literally anybody who works in computer security.
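To illustrate point 1: in such a design, turning data into code takes an explicit, visible gate. A Python sketch (the input strings are made up; ast.literal_eval is the stdlib's narrow gate):

```python
import ast

user_data = "[1, 2, 3]"

# Treated as data, the string is inert - you can measure it, store it, log it:
n = len(user_data)

# ast.literal_eval is a narrow translation gate: it accepts only literals...
value = ast.literal_eval(user_data)  # → [1, 2, 3]

# ...and rejects anything that would actually execute code:
try:
    ast.literal_eval("__import__('os').getcwd()")
except ValueError:
    value2 = "rejected"

# eval(), by contrast, accepts arbitrary expressions - the wide-open gate.
```

The point being that the dangerous crossing from data to code is a single named construct you can grep for, not the default behavior of the system.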
Von Neumann developed a specific kind of computer architecture - not the only one that can implement computable functions, not the only one you can build, and not the only one in existence today.
Oh, and if I say substantially all redheads have a green polka-dotted dress in their closet, you must make the assumption that I have visited the private homes of substantially all redheads. A ridiculous assumption.
Likewise, I must assume you have the source code to substantially all web applications. Which is equally ridiculous.
Unless of course, you are omniscient, but your statement in (2) betrays that anyway. :)
A von Neumann machine is Turing-complete. But whatever.
You seem to be missing the point. Turing-complete systems can, by definition, have the vulnerabilities of any other Turing-complete system, for the simple reason that they can emulate each other. But we're not talking about the pathological edge case of emulation. We're talking about the default case of what most programs actually do.
This is the difference.
I can code an XSS bug into a kernel driver by making my kernel driver emulate a web browser. And yet despite the fact that XSS vulnerabilities could affect kernel drivers, you'd have to look pretty hard to find a single example of a kernel driver suffering from an XSS.
On the other hand, substantially all web applications have XSS and CSRF vulnerabilities because of the way the web was mis-designed in the 90s.
Why do you keep bringing up Turing-complete systems? A system can be Turing complete and not utilize a von Neumann architecture. You are also making impossible-to-satisfy claims (e.g. your last sentence).