Actually, it's quite obvious. C#, Java, C++, and C are sugar-coated assembler, and reasoning about assembler, even sugar-coated, is a lost cause. Making those languages into something that can be reasoned about at compile time, and especially at run time, would be practically impossible because of the long, hairy legacy those languages carry around.
In order to run a program on parallel hardware, the run-time would have to reason about side effects to come up with a strategy for partitioning the computational graph into workloads that have minimal interactions with each other.
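As a toy illustration of what such a strategy could look like (everything here is hypothetical: the graph, the `independent_workloads` helper, the partitioning rule): if nodes are guaranteed side-effect-free, any two subgraphs with no data edges between them can run in parallel with zero coordination, so one naive strategy is to split the dependency graph into its connected components:

```python
from collections import defaultdict

def independent_workloads(deps):
    """deps maps node -> list of nodes it depends on.
    Returns a list of sets; each set is a workload that shares no
    data with the others and can be scheduled on a separate core."""
    # Build an undirected adjacency view of the dependency graph.
    adj = defaultdict(set)
    for node, inputs in deps.items():
        adj[node]  # make sure isolated nodes get an entry too
        for inp in inputs:
            adj[node].add(inp)
            adj[inp].add(node)
    seen, components = set(), []
    for start in adj:
        if start in seen:
            continue
        # Flood-fill one connected component.
        stack, comp = [start], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        components.append(comp)
    return components

# Two chains that never exchange data -> two independent workloads.
graph = {"b": ["a"], "c": ["b"], "y": ["x"], "z": ["y"]}
print(independent_workloads(graph))
```

A real run-time would of course go much further (cutting a single connected graph along cheap edges, balancing load, and so on), but it can only do even this much when purity is guaranteed, which is the point about imperative languages above.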
If many-core processors end up with cores of different capabilities (which seems to be the case), run-time reasoning and JIT compilation will be a necessity.
It seems like none of the existing imperative languages will survive the transition to the parallel era. Of course, run-times will still be written in something that is sugar-coated assembly, yet for general-purpose programming completely new languages will be required.
Declarative and richly typed, presumably.
Also, to the point of run-time reasoning and code generation: to provide fault tolerance, the computational graph might need to be re-evaluated whenever a computation node returns an exceptional value or goes into a non-terminating state. In theory, that would allow automatic remediation of runaway queries in databases and handling of non-responding services in the cloud (as well as of mutating hardware: failed or hot-plugged general- and special-purpose CPUs, failed or hot-plugged memory, and so on).
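A minimal sketch of that supervision loop, under stated assumptions: the `supervised_eval` wrapper and `flaky_node` below are invented for illustration, a timeout stands in for non-termination detection, and in a real system the re-evaluation would land on a different core, machine, or replica rather than in the same thread pool:

```python
import concurrent.futures

def supervised_eval(node_fn, args=(), timeout_s=0.5, retries=2):
    """Evaluate node_fn(*args); re-evaluate on exception or timeout."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=retries + 1)
    try:
        for attempt in range(retries + 1):
            future = pool.submit(node_fn, *args)
            try:
                return future.result(timeout=timeout_s)
            except concurrent.futures.TimeoutError:
                # Crude non-termination detection: abandon and retry.
                continue
            except Exception:
                # Node produced an exceptional value: retry.
                continue
        raise RuntimeError("node failed after all re-evaluations")
    finally:
        # Don't block on any still-hung attempts.
        pool.shutdown(wait=False)

# Demo: a node that returns an exceptional value on its first evaluation.
calls = {"n": 0}
def flaky_node():
    calls["n"] += 1
    if calls["n"] == 1:
        raise ValueError("bad intermediate value")
    return 42

print(supervised_eval(flaky_node))  # 42, after one re-evaluation
```

The same loop covers the runaway-query case: a database run-time that treats a query plan as such a graph could kill and re-plan a node that blows its time budget instead of letting it run away.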
It will probably take another 10 to 20 years to get right, but that looks like where things are going.