Well, I'm currently working on low-level systems programming in mixed C/C++ that involves things like async IO (the actual low-level async IO implementation itself), and one thing I can say is that it is a PITA. All of my testing tools are written in C#, so I get a 50/50 mix of the two environments.

What I find is that I spend an awful lot of time fiddling with low-level issues and gotchas in C/C++ that simply wouldn't exist in C#. If I did the same work in C#, that time would go into the actual high-level algorithms instead, which in itself results in optimizations.

An extreme example would be assembly vs C#. Yes, you can eventually come up with a faster implementation of the same solution, but by the time you get the hard-to-debug-and-maintain assembly code working in the first place, the C# implementation would long since have been done, and you could have spent the difference improving it in other areas, like adding caching, which eventually results in quite performant code anyway.
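To make the caching point concrete, here's a minimal sketch of the kind of memoization that buys performance back once the basic implementation is done. The names `expensive` and `cached` are made up for illustration, and a factorial stands in for real work; the same pattern works in C# with a `Dictionary`:

```cpp
#include <unordered_map>

// Hypothetical expensive computation; factorial is just a stand-in
// for whatever the real hot path computes.
long long expensive(int n) {
    long long r = 1;
    for (int i = 2; i <= n; ++i) r *= i;
    return r;
}

// A minimal memoization cache: compute each input once, reuse thereafter.
long long cached(int n) {
    static std::unordered_map<int, long long> memo;
    auto it = memo.find(n);
    if (it != memo.end()) return it->second;  // cache hit: no recomputation
    return memo[n] = expensive(n);            // cache miss: compute and store
}
```

The point isn't the cache itself but the time to write it: a wrapper like this takes minutes in a high-level language, which is exactly the kind of optimization you get to do when you aren't still debugging the low-level plumbing.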

Also take into consideration that the problems we need to solve are becoming exponentially more complex, and you realize we can't be futzing around in a low-level language fixing problems that shouldn't exist in the first place: obscure memory leaks and bugs, fighting with the single-pass compiler because it can't see a definition a few lines further down in the same file, struct packing issues, calling conventions, header files, nasty STL syntax and usage, confusing compiler error messages, and a whole lot more I've forgotten right now.
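As a concrete illustration of one of those gotchas — the declaration-order problem a single-pass compiler imposes — here's a minimal C++ sketch (`helper` and `use_helper` are made-up names for illustration):

```cpp
// C++ translation units are processed top to bottom: at namespace scope,
// a function must be declared before the point where it is called.
// Without this forward declaration, the call inside use_helper() would
// fail to compile ("'helper' was not declared in this scope"), even
// though the definition exists just a few lines further down.
int helper(int x);

int use_helper() {
    return helper(20);  // legal only because of the declaration above
}

// The definition can come later; the earlier declaration already told
// the compiler the signature it needed.
int helper(int x) {
    return x * 2 + 2;
}
```

In C#, by contrast, member order within a class is irrelevant — the compiler resolves the whole type before checking call sites — which is exactly the class of busywork the post is complaining about.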