If you think type inference in real programs is quick, easy, or accurate, I suggest you go work on a compilers team for a while. They will disabuse you of this notion.
Perhaps then you will understand why knowing what a type is allows you to generate faster code than trying to guess what it is.
Explain when it isn't accurate. In your example it would be trivial for the compiler to know the type: the parens give away the string, and the fact that the value is a number gives away the int. So the compiler could optimize the function call without losing the advantages of dynamic behavior.
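To make concrete the kind of trivial case I mean (this is a hypothetical sketch, not the example from the parent comment): when the arguments at a call site are literals, the types are sitting right there in the source, so a compiler could specialize the call without any hard inference.

```python
def add(a, b):
    # Dynamic '+' dispatches on the runtime types of a and b,
    # but at call sites with literal arguments those types are
    # visible statically.
    return a + b

# Both arguments are string literals: obviously string concatenation.
add("foo", "bar")

# Both arguments are int literals: obviously integer addition.
add(1, 2)
```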
I'm trying to understand the instances where you wouldn't know what a type would be ahead of time. The only cases I can think of involve metaprogramming and indirect calls (i.e. polymorphism/interfaces/virtual calls, such as an "IAddable" interface with a single method, which is how static languages get around the lack of duck typing).
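For instance, here is a minimal Python sketch (names are my own invention) of the kind of ahead-of-time-unknowable case I have in mind: the element types flow in from deserialized input, so nothing at the definition site pins them down.

```python
import json

def total(items):
    # The elements of 'items' could be int, float, or anything else
    # supporting '+', depending on what the caller deserialized.
    # Nothing visible here fixes the type ahead of time.
    s = 0
    for x in items:
        s = s + x  # '+' dispatches on the runtime type of x
    return s

# The mix of int and float only exists at runtime, after parsing.
data = json.loads('[1, 2.5, 3]')
print(total(data))
```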
I didn't say writing an optimizing compiler is easy for a dynamic language (or even for a static one), just that I don't see what static languages inherently buy in performance (i.e. that static languages are "inherently" more optimizable). If there isn't an inherent performance advantage, I want to see at least some example of a hard AI problem involved in decorating variables with types.