@bondsbw:

The idea of a bytecode for the web has been proposed before, and IIRC I supported it and even proposed it here myself (perhaps independently?) a few years back. I'm now actually quite opposed to any proposal that would introduce any kind of bytecode to the web.

  • Bytecodes don't necessarily translate to better performance: people overrate the effort it takes a modern computer to parse a few kilobytes of text. IIRC most of the work goes into building the DOM and actually displaying stuff on the screen, and into manipulating and re-rendering the DOM after the fact. HTTP also supports compression, which shrinks the HTML considerably (see the gzip sketch after this list). Performance is not optimal, of course, but HTTP has bigger flaws that are lower-hanging fruit (i.e. fixing them wouldn't fundamentally change how people develop things) and that still haven't been universally addressed. Google has tried with SPDY, the basis for HTTP/2, which IIRC is supported in Firefox and Chrome but nowhere else.
  • Bytecodes require special tools to generate, raising the barrier to entry.
  • Bytecodes are harder to inspect, negating one of the things that makes the web so practical and open (Right Click -> View Source). That feature is, I think, how a lot of people learned web development in the first place, myself included.
  • Bytecodes could cause a proliferation of programming languages in the market, making web development more complicated, with more to learn unless you want to limit yourself to one specific stack. Basically, your skills might not transfer. There are already languages that treat JS itself as a kind of bytecode (Microsoft makes one called TypeScript, and I personally use CoffeeScript), but they don't try to be TOO different from JavaScript, probably because of the limitations of JS as a compile target (see the TypeScript sketch after this list).
  • Getting browser makers to agree on a bytecode would be impossible, especially given all of the above. This is probably the most damning point.
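
To put a rough number on the compression point: here's a minimal sketch (assuming Node.js with TypeScript; the page content and the resulting ratio are purely illustrative) of how much gzip, which HTTP already supports, shrinks repetitive markup:

    // Sketch: gzip typically shrinks HTML dramatically, which makes
    // "a bytecode would save bytes on the wire" weaker than it sounds.
    import { gzipSync } from "zlib";

    // Contrived, repetitive page body; real HTML compresses similarly well
    // because markup is full of repeated tag names and attributes.
    const html = "<ul>" + '<li class="item">Hello, world</li>'.repeat(1000) + "</ul>";

    const raw = Buffer.byteLength(html);   // bytes before compression
    const zipped = gzipSync(html).length;  // bytes after gzip (Content-Encoding: gzip)

    console.log(`raw: ${raw} bytes, gzipped: ${zipped} bytes`);
    console.log(`ratio: ${((zipped / raw) * 100).toFixed(1)}%`);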
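
And on compile-to-JS languages not straying far from JavaScript, a small sketch of why: the TypeScript below compiles to essentially the same text, minus the annotations, because the compile target's semantics pass straight through:

    // TypeScript input: the type annotations are the only non-JS part.
    function greet(name: string): string {
      return `Hello, ${name}`;
    }
    console.log(greet("web"));

    // What tsc emits (with an ES2015 target) is the same code without types:
    //
    // function greet(name) {
    //     return `Hello, ${name}`;
    // }
    // console.log(greet("web"));

A language with very different semantics would have to emulate them on top of JS, paying for it in output size and speed.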