evildictaitor wrote:

I think, actually, that Joe has fallen into the fallacy that XML is a suitable solution for a high-performance project. If you're storing your data as XML, you had better accept the fact that your data is in a human-readable rather than a machine-efficient storage format, and that getting data out of it probably shouldn't be in a hot loop (and if it isn't in a hot loop, why do you care about its performance?)

The writer of the XmlReader probably realized this, and therefore (correctly) assumed that optimising the XmlReader for the edge case of someone loading it on a microchip who really cares about gen0 collections is optimising for the wrong case. I'll put money on the writer of the XmlReader class wanting to write code that is correct, easy to use, easy to read and easy to fix when the XML spec changes in future, rather than wanting to optimise away all of the almost-free allocs in the code.

In fact, I challenge Joe Duffy to find an implementation of an XmlReader on a native platform (like C/C++) that is complete with regards to the XML spec and contains no short-lived allocations.

Indeed. If you want to write your own XmlReader, you have to take care of the parsing difficulties introduced by the flexibility of the XML format. Otherwise you'd just be introducing a fixed text-based format that merely looks like XML.
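
To illustrate (just a minimal sketch, with made-up element names and content): even a trivial-looking document already involves entity references, CDATA sections and inter-element whitespace, all of which XmlReader handles for you and a naive "split on angle brackets" parser would get wrong.

    using System;
    using System.IO;
    using System.Xml;

    class XmlFlexibilityDemo
    {
        static void Main()
        {
            // A tiny document that already exercises entity references,
            // CDATA sections and whitespace between elements.
            const string xml =
                "<root>" +
                "  <item>Tom &amp; Jerry</item>" +
                "  <item><![CDATA[raw <text> here]]></item>" +
                "</root>";

            using (var reader = XmlReader.Create(new StringReader(xml)))
            {
                while (reader.Read())
                {
                    if (reader.NodeType == XmlNodeType.Text ||
                        reader.NodeType == XmlNodeType.CDATA)
                    {
                        // XmlReader has already expanded &amp; and unwrapped the CDATA.
                        Console.WriteLine(reader.Value);
                    }
                }
            }
        }
    }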

And because .NET strings are reference types that can be interned or shared rather than copied, unlike unmanaged C++ where one string means one buffer, I think the actual allocations that occur may not be as bad as he predicts. Of course, we'd need a test to confirm this.
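
A rough way to run that test (a sketch only; "books.xml" is a placeholder for any reasonably large document, and the numbers will vary by runtime and input) is to count gen0 collections around a parse loop:

    using System;
    using System.Xml;

    class AllocationTest
    {
        static void Main()
        {
            // Placeholder input file; substitute any large XML document.
            const string path = "books.xml";

            int gen0Before = GC.CollectionCount(0);
            long elements = 0;

            using (var reader = XmlReader.Create(path))
            {
                while (reader.Read())
                {
                    if (reader.NodeType == XmlNodeType.Element)
                        elements++;
                }
            }

            int gen0After = GC.CollectionCount(0);
            Console.WriteLine("Elements: {0}, gen0 collections during parse: {1}",
                              elements, gen0After - gen0Before);
        }
    }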