I'm not sure why such an elaborate system is required to map old types to new ones. At the beginning of the binary data, simply store the struct's field names, types, and offsets. Then, when deserializing, examine the new structure via reflection (the deserializer has to be able to instantiate it, so it obviously has full access to it) and map the previous names, types, and offsets onto it. Ignore fields that are missing from the new structure, set fields that are new in the structure to their default values, and store the incoming values at the new byte offsets. If a field with the same name now has a different type (say, float to double), check whether a conversion is available, and if not, use the default value.
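A minimal sketch of that mechanism, written in Python for brevity since the idea is language-agnostic (the type codes and helper names are invented for illustration: "i" = 32-bit int, "f" = float, "d" = double):

```python
import io
import struct

def serialize(obj, schema):
    """Write a self-describing blob: a header of (field name, type code)
    pairs, followed by the packed field values."""
    buf = io.BytesIO()
    buf.write(struct.pack("<H", len(schema)))
    for name, code in schema:
        encoded = name.encode()
        buf.write(struct.pack("<H", len(encoded)))
        buf.write(encoded)
        buf.write(code.encode())
    for name, code in schema:
        buf.write(struct.pack("<" + code, getattr(obj, name)))
    return buf.getvalue()

def deserialize(blob, new_cls, new_schema):
    """Map stored fields onto new_cls by name: stored fields missing from
    the new type are ignored, fields new to the type keep the defaults set
    by its constructor, and numeric types are converted where possible."""
    buf = io.BytesIO(blob)
    (count,) = struct.unpack("<H", buf.read(2))
    old_schema = []
    for _ in range(count):
        (nlen,) = struct.unpack("<H", buf.read(2))
        name = buf.read(nlen).decode()
        code = buf.read(1).decode()
        old_schema.append((name, code))
    stored = {}
    for name, code in old_schema:
        size = struct.calcsize("<" + code)
        (stored[name],) = struct.unpack("<" + code, buf.read(size))
    obj = new_cls()                      # new fields start at their defaults
    wanted = dict(new_schema)
    for name, value in stored.items():
        if name not in wanted:
            continue                     # field was removed: ignore it
        target = float if wanted[name] in "fd" else int
        try:
            setattr(obj, name, target(value))   # e.g. the float -> double case
        except (TypeError, ValueError):
            pass                         # no conversion: keep the default
    return obj

# Old version of the type: x is an int, y exists.
class OldPoint:
    def __init__(self):
        self.x = 3
        self.y = 1.5

# New version: x widened to double, y dropped, z added with a default.
class NewPoint:
    def __init__(self):
        self.x = 0.0
        self.z = 42

blob = serialize(OldPoint(), [("x", "i"), ("y", "f")])
p = deserialize(blob, NewPoint, [("x", "d"), ("z", "i")])
```

After the round trip, `p.x` is 3.0 (the stored int converted to the new wider type), `p.z` is 42 (the constructor default, since the old data never had it), and the dropped `y` is silently discarded.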
I fail to see what information is missing from the above mechanism that would require me to provide something like Func<TNew, TOld>. Can you elaborate a bit on why it would be required?
They don't use pointers internally; they use reflection. Using pointers would run the risk of getting tripped up by field-alignment differences between 32-bit and 64-bit .NET apps, and by little-endian versus big-endian problems (since .NET is technically endian-agnostic). It also runs the risk of leaking information hidden in the padding between fields, which in sandboxed applications like Silverlight might be usable to attack the process.
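As a quick illustration of the padding and byte-order hazards described above (sketched in Python with ctypes; the struct layout shown assumes a typical x86/x64 ABI):

```python
import ctypes
import struct

class Packet(ctypes.Structure):
    # Native alignment: a compiler-style layout inserts hidden padding.
    _fields_ = [("flag", ctypes.c_uint8), ("value", ctypes.c_uint32)]

p = Packet(flag=1, value=0x01020304)

# A raw memory dump carries the padding bytes along (they may contain
# stale memory contents), and its byte order depends on the host CPU.
raw = bytes(p)

# A field-by-field writer with an explicit byte order is portable: "<"
# means little-endian and disables alignment padding entirely.
portable = struct.pack("<BI", p.flag, p.value)

# On a typical 32/64-bit ABI the ctypes struct is 8 bytes (3 padding
# bytes after 'flag'), while the explicit encoding is exactly 5.
print(ctypes.sizeof(Packet), len(portable))
```

Reflection-based serializers sidestep both problems the same way the explicit `struct.pack` call does: they read each field value and write it in a defined order, so the wire format never depends on the in-memory layout.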
Most of the data manipulation inside the .NET classes happens via native calls (use ILSpy to look at Marshal.Copy, Array.Copy, or even string.Compare, for instance), so why would this be any different? And why would I be forced to use "unsafe" in this particular case?