I don't really think you can generalize on this. For simple interactions like the examples given, a schema and a shared type are overkill. But in a large, complex transactional system with hundreds or thousands of fields, it simply doesn't make sense to avoid a schema.
If there's no machine-readable schema then each developer has to write their own serialization and deserialization code. A process reading a complex message of 400 or more fields in a structured document would have to verify the presence of each field before using it. This leads to monotonous, buggy code.
It's far easier to automate the task of verifying the message against one of the valid schemas. Providing basic rules for non-breaking minor changes is a must. The validation code can be generated - and once you have a valid message you can deserialize it and programmatically access the contents.
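To illustrate the "generated validator" idea, here is a minimal Python sketch. The schema table below is a hand-written stand-in for what a code generator would emit from a machine-readable schema; the field names and rules are hypothetical.

```python
# Hypothetical schema table - in practice this would be generated
# from a machine-readable schema definition, not written by hand.
SCHEMA = {
    "order_id": {"type": int,   "required": True},
    "customer": {"type": str,   "required": True},
    "discount": {"type": float, "required": False},  # optional field: a non-breaking minor addition
}

def validate(message: dict, schema: dict) -> list:
    """Return a list of validation errors; an empty list means the message is valid."""
    errors = []
    for field, rule in schema.items():
        if field not in message:
            if rule["required"]:
                errors.append("missing required field: " + field)
            continue  # optional field absent: fine under non-breaking change rules
        if not isinstance(message[field], rule["type"]):
            errors.append("wrong type for field: " + field)
    return errors

print(validate({"order_id": 42, "customer": "Acme"}, SCHEMA))  # [] - valid
print(validate({"order_id": "42"}, SCHEMA))  # two errors: bad type, missing field
```

The point is that none of this has to be written per-message: with 400 fields, one generator beats 400 hand-rolled presence checks.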
Resorting to manual message validation and serialization as a way of solving the issue of tight coupling is really treating the symptom rather than the cause. The real answer is a versioning strategy with a flexible, automated code generation system that is designed for loosely coupled providers and consumers.
A good practical example of this is Google Protocol Buffers' extension mechanism. With Protocol Buffers you can partially deserialize into a code-generated class and dynamically load an extension schema at runtime to access additional data. Much like C#'s hybrid of static and dynamic typing.
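As a rough sketch of that mechanism, a proto2 base message reserves a range of field numbers, and a consumer can extend it from a separate file without touching the base schema. The message and field names here are hypothetical:

```proto
syntax = "proto2";

message Order {
  required int32 id = 1;
  optional string customer = 2;
  extensions 100 to 199;  // field numbers reserved for extensions
}

// Defined in a separate .proto file, owned by a different team; a
// consumer without this file can still deserialize the base fields.
extend Order {
  optional string tracking_code = 100;
}
```

A provider that only knows the base `Order` still round-trips the extension bytes as unknown fields, which is what keeps provider and consumer loosely coupled across versions.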