And yet browser vendors resort to dirty tricks to get far smaller benefits - like pre-emptive TCP/SSL negotiations and DNS lookups, done precisely to shave milliseconds off the start of the page. Connection: keep-alive, Content-Encoding: gzip/deflate, and SPDY are other examples where modest speed gains warranted major changes to the protocol - so I don't believe the "parsing cost is negligible" argument is, or has ever been, valid for the web.
Secondly - have you ever tried to actually parse HTML? It's a really horrendous language to parse. The tokeniser takes O(n) over the full text (including every comment and every run of whitespace), and the tokens are maybe five or six times longer than their bytecode equivalent - so tokenising is, right out of the gate, around five times slower than a bytecode representation.
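To make that size gap concrete, here's a purely illustrative sketch - every opcode and table index in it is invented for the example, not any real format:

    // The same element as HTML text vs. a hypothetical compact binary form.
    const asText = '<span class="note">hi</span>';  // 28 characters, scanned one by one
    const asBytecode = new Uint8Array([
      0x01,        // OPEN_ELEMENT
      0x12,        // "span", an index into a fixed element-name table
      0x01, 0x07,  // one attribute: "class" = value-table entry 7
      0x02, 0x02,  // TEXT, length 2
      0x68, 0x69,  // "hi"
      0x03,        // CLOSE_ELEMENT
    ]);
    console.log(asText.length, asBytecode.length);  // 28 vs 9, even in this tiny case

A decoder for the binary form does a table lookup per byte or two; the HTML tokeniser has to run a state machine over every single character.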
Next, there's the huge cost of turning those bytes into a DOM. This is expensive because HTML is badly designed.
The sheer number of stupid rules in HTML parsing is one of the key reasons why sites don't work cross-browser for free. Rules like <b><p>foo<p>bar</b> turning into <b><p>foo</p></b><p><b>bar</b></p>, and <b id="_1"><p><i id="_2">foo</b></i><p>bar</b> turning into <p><b id="_1"><i id="_2">foo</i></b></p><p><b id="_1">bar</b></p> (note the duplicated id), are cases in point.
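You can watch these fixups happen for yourself - this sketch runs in any browser console, and the comment shows the serialisation current engines typically produce via the spec's "adoption agency algorithm":

    // Parse a misnested fragment and inspect what the browser actually built.
    const doc = new DOMParser().parseFromString('<b><p>foo<p>bar</b>', 'text/html');
    console.log(doc.body.innerHTML);
    // Typically: <b><p>foo</p></b><p><b>bar</b></p>
    // The original <b> keeps the first paragraph; a clone of <b> (attributes
    // and all) is created inside the second one.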
And the redundancy in HTML is appalling. If you want a red bit of text, you can use <font color="red">, <span style="color:red">, <span class="redcolor">, or <div style="display:inline-block;color:red">. Want to put in a quote? How about <blockquote>, <q>, <span style="font-style:italic">, <div style="font-style:italic;display:inline-block">, or even <div class="quotestyle_class">.
Bytecodes require special tools to generate, increasing the barrier to entry.
The argument that HTML makes writing for the web easier is a complete lie. When I write a Win32 program, I can choose to learn C# or C++ or VB, and that will take me comfortably all the way through to a pixel-perfect app that works on anyone's machine, be it Dell or Asus or whatever. It'll just work.
On the web, by contrast, I have to learn HTML, CSS, and JavaScript at a minimum - and even with all of that, it still won't work for free on Firefox and IE and Chrome! You need to do vastly more testing on a minor website than on an equivalent Win32 program.
And let's contrast what happens when you get it wrong. If I screw up a C++ app, I might accidentally leave in a heap buffer overflow - but DEP, ASLR, and heap cookies are likely to make exploiting it extremely hard. On the web, that buffer overflow won't be there, but you'll have SQL injection and code injection (via PHP include/eval, etc.) littered throughout the app. You don't get security for free here, because the unification of strings and code makes it impossible to secure for free.
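To spell out what "unification of strings and code" means, a minimal sketch - `query` here is a stand-in for whatever SQL client API the app uses, and the input is hypothetical form data:

    // Stand-in for any SQL client API.
    declare function query(sql: string, params?: string[]): void;

    const userInput = "'; DROP TABLE users; --";  // attacker-controlled form field

    // Strings and code are one: the "data" is spliced into the program text
    // of the SQL statement, so the attacker's input gets executed.
    query(`SELECT * FROM users WHERE name = '${userInput}'`);

    // Keeping data and code separate works - but it's opt-in, not the default:
    query("SELECT * FROM users WHERE name = ?", [userInput]);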
Bytecodes are harder to inspect, negating one of the things that makes the web so practical and open (Right Click -> View Source). This feature is, I think, how a lot of people learned web development in the first place, myself included.
Designing a language which goes out of its way to make programs written in it easy to plagiarise is baffling. In my mind, it is a major weakness of the web, rather than one of its strengths.
I learnt C# without much difficulty using online tutorials, books, and plain old-fashioned trial and error. I didn't need to learn by viewing the source of other people's apps. I don't see why web developers are such a special case that they can't learn like everyone else - you know, without stealing other people's IPR.
Getting agreement between browser makers on a bytecode will be impossible, especially given all of the above. This is probably the most damning point.
Getting agreement between browser manufacturers about anything is impossible, so I reject that argument outright. You want a box-shadow? You'd better be prepared to write a pile of completely vendor-specific CSS properties to get it. How about a linear gradient? Again, you're going to need -moz-, -webkit-, and -o- prefixes there. The list goes on.
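For the record, this is the kind of thing needed at the time of writing to get one gradient everywhere - applied from script here just to keep the sketch self-contained; the prefixed forms are real historical syntax:

    // Each browser drops declarations it doesn't understand, so every
    // engine picks out the one form it recognises; the last valid one wins.
    const el = document.createElement('div');
    el.style.cssText = `
      background: -webkit-gradient(linear, left top, left bottom, from(#fff), to(#ccc));
      background: -webkit-linear-gradient(top, #fff, #ccc);
      background: -moz-linear-gradient(top, #fff, #ccc);
      background: -o-linear-gradient(top, #fff, #ccc);
      background: linear-gradient(to bottom, #fff, #ccc);
    `;
    document.body.appendChild(el);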