I'm just trying to have a conversation about developing new (revolutionary versus evolutionary) methodologies versus modifying old ones to exploit the advances in hardware as effectively as possible. Auto-parallelization at the machine level is pretty much science fiction without explicit support at the expressive level, way up the abstraction stack... Or is it?
Of course, throwing out everything that's been invested in for so long is unrealistic, but this is why theory is fun.
There are simply some practical problems that can't be parallelized effectively: if an algorithm depends on intermediate data, you are SOL until that intermediate data is computed. That's not science fiction, it's a logical impossibility. This whole parallel affliction is one of the worst things to ever happen to the software industry. I don't think it's something that most software developers should have to worry about.
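To make the dependency argument concrete, here's a minimal Python sketch (hypothetical, not from the original posts): one loop where each iteration needs the result of the previous one, so the iterations form a serial chain, and one where each iteration is independent and could trivially be farmed out to multiple cores.

```python
def dependent_chain(values):
    """Each output depends on the previously computed output.

    The read of `acc` in iteration i needs the write from iteration
    i-1, so a naive split of the loop across threads would break it.
    """
    out = []
    acc = 0
    for v in values:
        acc = acc * 2 + v  # needs acc from the previous iteration
        out.append(acc)
    return out


def independent_map(values):
    """Each output depends only on its own input.

    No iteration reads another iteration's result, so the work can
    be divided across workers in any order.
    """
    return [v * 2 for v in values]


print(dependent_chain([1, 2, 3]))   # [1, 4, 11]
print(independent_map([1, 2, 3]))   # [2, 4, 6]
```

Worth noting as a hedge: some recurrences like this one can still be restructured (e.g. via parallel prefix/scan techniques when the update is associative), but the general point stands: a true data dependency serializes the computation until the intermediate result exists.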