There are simply some practical problems that cannot be parallelized effectively: if an algorithm depends on intermediate data, you are SOL until that intermediate data is computed. That's not science fiction, it's a logical impossibility. This whole parallel affliction
is one of the worst things ever to happen to the software industry. I don't think it's something that most software developers should have to worry about.
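A minimal sketch of the point (names and constants are just for illustration): a recurrence where each step consumes the previous step's output is inherently sequential, while a map over independent inputs is trivially parallelizable.

```python
def iterate_logistic(x, n, r=4.0):
    # Logistic map: x_{k+1} = r * x_k * (1 - x_k).
    # Each iterate depends on the previous one, so this loop
    # cannot be split across cores no matter how many you have.
    for _ in range(n):
        x = r * x * (1 - x)
    return x

# One sequential step from x = 0.5 with r = 4.0:
# 4.0 * 0.5 * (1 - 0.5) = 1.0
result = iterate_logistic(0.5, 1)

# Contrast: each element here is independent of the others,
# so the work could be farmed out to any number of workers.
squares = [x * x for x in range(8)]
```

The difference is the dependency chain, not the total amount of work: the logistic loop and the list comprehension both do O(n) arithmetic, but only the latter can use n cores.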
Unfortunately, this "affliction" is caused by the laws of physics, which prevent us from continuing to scale up single-core CPU speeds. So unless someone comes up with a completely different way to build CPUs that doesn't have this limitation, we're stuck with it.