That sounds extremely inefficient to me. How does this work with, say, image or audio files? How would you implement something like ReWire, which lets different audio applications share audio and MIDI streams in real time?
I guess back when this was considered a great idea, computers weren't very powerful, so the datasets people worked on were probably simple and small (and mostly text-based anyway).
I realize that these file descriptors can now be byte streams (like a socket), but the idea that everything can be converted to/from text just doesn't hold today.
I think even streaming bytes isn't enough. Sometimes you need objects, so that you can query just the properties you're interested in. How do you get just the size of a bitmap if the only option is to stream the whole thing? For very large images that's inefficient, because the image ends up duplicated in multiple places along the pipeline.
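To make the contrast concrete, here's a minimal Python sketch of the stream-only case: pulling a bitmap's dimensions out of a raw byte stream works only because the reader hard-codes knowledge of the BMP header layout, whereas an object interface could simply expose `width`/`height` properties. The `make_bmp` helper is a toy I made up to keep the example self-contained; it builds a blank 24-bit BMP in memory.

```python
import io
import struct

def make_bmp(width, height):
    # Hypothetical helper: build a minimal blank 24-bit BMP in memory.
    row = (width * 3 + 3) & ~3            # pixel rows are padded to 4 bytes
    pixel_bytes = row * height
    file_header = b"BM" + struct.pack("<IHHI", 54 + pixel_bytes, 0, 0, 54)
    info_header = struct.pack("<IiiHHIIiiII", 40, width, height, 1, 24, 0,
                              pixel_bytes, 2835, 2835, 0, 0)
    return file_header + info_header + b"\x00" * pixel_bytes

def bmp_size_from_stream(stream):
    # With only a byte stream, the consumer must know the format:
    # in a BMP, width and height are little-endian int32s at byte
    # offsets 18 and 22 from the start of the file.
    header = stream.read(26)
    width, height = struct.unpack_from("<ii", header, 18)
    return width, height

print(bmp_size_from_stream(io.BytesIO(make_bmp(640, 480))))  # (640, 480)
```

Here the caller at least avoids reading the pixel data, but only by baking the format's byte offsets into its own code; with a different format (PNG, TIFF) every consumer needs its own parser, which is the kind of duplication an object/property interface avoids.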