Bass wrote:

*snip*

Yup. Also, the "initialization" time of switching between channels is not too bad. I've got a quad-core, and spinning up some processes until they allow interactive input can take seconds.

Which only adds to the argument: "Why does starting a process have to be different to switching to one? Why can't they always just be 'running'?" FWIW, Apple have been trying to move down this road (with varying degrees of success) ever since the introduction of OS X and the Dock.

Bass wrote:

*snip*

Channels don't have impossible-to-predict behaviors that take random amounts of the TV's resources at any given time, either. Nor are they really stateful in any important way.

Like all analogies, it only goes so far. Regardless, the OS has to do this anyway: any process running on your machine might suddenly demand random resources, and it's the job of the OS to provide what it needs without affecting anything else (and certainly without disrupting responsiveness for any interactive user). How well it does this is a measure of how good an OS is.

Look back at the classic, pre-OS X Mac OS, for example. You had to tell the system in advance how much memory each application got, which Mac OS would then dutifully allocate. If an app tried to use more than the user had allocated, it failed, even if the system had plenty of free RAM. You could have argued, at the time, that doing so prevented any one application's 'random' behaviour from impacting the rest of the system and gave the user more control. You wouldn't even think of arguing that today, though; the idea of burdening the user with such trivial resource management sounds ridiculous. Burdening them with process management is no less ridiculous; it's simply extra work that detracts from whatever it is they're really trying to accomplish.
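
To make that concrete, the old model looked something like this (a toy sketch, nothing to do with the real Mac OS Memory Manager): allocations come out of a fixed, user-chosen partition and fail once it's exhausted, no matter how much RAM the rest of the machine has free.

```
class FixedPartition:
    """Toy model of a classic Mac OS-style per-app memory partition:
    the user picks a size up front, and allocations beyond it fail
    even if the rest of the machine is idle."""

    def __init__(self, size_bytes):
        self.size = size_bytes
        self.used = 0

    def alloc(self, n):
        if self.used + n > self.size:
            raise MemoryError(f"partition exhausted ({self.used}/{self.size} bytes used)")
        self.used += n

partition = FixedPartition(2 * 1024 * 1024)  # the user guessed "2 MB" for this app
partition.alloc(1_500_000)                   # fine
try:
    partition.alloc(1_000_000)               # fails, despite gigabytes of free RAM
except MemoryError as e:
    print(e)
```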

Bass wrote:

*snip*

Also, if you let the OS randomly ("intelligently") kill processes, you can run into problems where the process was doing something important and now gets stuck in a state where, the next time it runs, it has to recover from a journal, further degrading performance. That's assuming it is well designed and can even properly recover from invalid states.

That's only an issue if the system design doesn't include a mechanism for the OS to say, "you're going to die in X seconds, get into a safe state now." That might be true of old-style systems, but it certainly doesn't have to be the case, and isn't with, for example, Metro-style apps (IIRC, recent versions of OS X include something similar too).
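
The exact contract varies by platform (Metro apps get a Suspending event; on Unix-like systems it's SIGTERM followed, after a grace period, by SIGKILL), but the shape is always the same. Here's a minimal sketch in Python, assuming POSIX signals; save_checkpoint and the file name are illustrative, not any platform's real API:

```
import signal
import sys

def save_checkpoint():
    # Hypothetical: persist whatever the next launch needs to resume
    # exactly where the user left off.
    with open("app.state", "w") as f:
        f.write("...serialized state...")

def on_terminate(signum, frame):
    # The OS (or a supervisor) says we're about to die and gives us a
    # short grace period before a forced kill: save and exit cleanly.
    save_checkpoint()
    sys.exit(0)

signal.signal(signal.SIGTERM, on_terminate)

# ... normal event loop / real work would go here ...
```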

And a system that works like this by default gets substantial advantages too. Applications have to save state as they go, so system updates can be applied whenever it suits, and every application can pick up and carry on from where it was. Continuous checkpointing also means applications are far more likely to be robust enough to cope with an unexpected outage (power loss, bluescreen, etc.) than applications written under the old (but naïve) assumption that they have a traditional start/work/quit-at-their-own-pace lifecycle.
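
"Save state as you go" only buys you that robustness if each checkpoint is written atomically, so that losing power mid-write can't corrupt the last good copy. The usual trick (file names illustrative): write the new state to a temporary file, force it to disk, then rename it over the old one, which is atomic on POSIX filesystems (and os.replace behaves the same way on Windows):

```
import json
import os

def checkpoint(state, path="app.state"):
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
        f.flush()
        os.fsync(f.fileno())  # on disk, not just in the page cache
    # Atomic swap: a crash at any point leaves either the old checkpoint
    # or the new one, never a half-written file.
    os.replace(tmp, path)

checkpoint({"document": "draft 3", "cursor": 1042})
```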