Tech Off Thread


Strange Socket Problem

  • xgamer

    I am currently trying to interface one of our products with a Linux/Java based IPTV solution. To communicate certain info, my program acts as a .NET TCP socket server and accepts variable-length data prefixed and suffixed with STX and ETX characters.

    Having worked on similar interfaces before, it seemed straightforward. Normally the socket.Receive method picks up all the data sent by the client and returns the number of bytes in the buffer. However, in this case the data seems to be split across multiple buffers, i.e. even if the client sends a single stream of data, at the server it seems to arrive randomly in 3-4 pieces.

    I have tried things like socket.DontFragment = true and NoDelay = true, set the socket buffer size, and even read into a large buffer while receiving ... but it still does not seem to work...

    Ironically, an age-old code base written in VB6 with Winsock seems to work correctly....

    Have any 9'ers faced similar problems? If so, what was the solution ...

     

  • BitFlipper

    AFAIK that is the typical behavior. It is your responsibility to keep reading the data in a loop and to reconstruct the original message.
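
    Something along these lines is the usual pattern. Just a rough sketch, not tested against your setup: it assumes STX = 0x02, ETX = 0x03, ASCII payloads and an already-connected Socket, and the class/method names are made up.

        using System.Collections.Generic;
        using System.Net.Sockets;
        using System.Text;

        static class StxEtxReader
        {
            const byte STX = 0x02, ETX = 0x03;   // assumed delimiter values

            // Yields one string per complete STX...ETX frame, however the bytes happen to arrive.
            public static IEnumerable<string> ReadFrames(Socket socket)
            {
                var buffer = new byte[4096];
                var pending = new List<byte>();   // bytes received so far, across Receive calls

                while (true)
                {
                    int read = socket.Receive(buffer);
                    if (read == 0) yield break;   // peer closed the connection

                    for (int i = 0; i < read; i++) pending.Add(buffer[i]);

                    // Pull out every complete frame currently buffered.
                    int stx;
                    while ((stx = pending.IndexOf(STX)) >= 0)
                    {
                        int etx = pending.IndexOf(ETX, stx + 1);
                        if (etx < 0) break;       // no ETX yet - keep receiving

                        var payload = pending.GetRange(stx + 1, etx - stx - 1).ToArray();
                        pending.RemoveRange(0, etx + 1);   // discard the consumed frame
                        yield return Encoding.ASCII.GetString(payload);
                    }
                }
            }
        }

    The point is that one message may arrive over several Receive calls, and a single Receive call may return pieces of two messages, so the accumulation buffer has to survive across iterations.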

  • evildictaitor

    TCP can and will fragment packets, and sockets cannot tell the difference between fragmented packets and whole ones. The best thing you can do is length-prefix your packets on the wire and wait for the full data to be received before processing them.
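
    For what it's worth, a minimal sketch of that pattern over a Stream might look like this. The helper names are made up, and it assumes a 4-byte length prefix in network byte order (big-endian) and that you control both ends of the protocol:

        using System;
        using System.IO;
        using System.Net;

        static class LengthPrefix
        {
            // Writes a 4-byte big-endian (network byte order) length, then the payload.
            public static void WriteMessage(Stream stream, byte[] payload)
            {
                byte[] prefix = BitConverter.GetBytes(IPAddress.HostToNetworkOrder(payload.Length));
                stream.Write(prefix, 0, prefix.Length);
                stream.Write(payload, 0, payload.Length);
            }

            // Reads one length-prefixed message, looping until every byte has arrived.
            public static byte[] ReadMessage(Stream stream)
            {
                int length = IPAddress.NetworkToHostOrder(BitConverter.ToInt32(ReadExactly(stream, 4), 0));
                return ReadExactly(stream, length);
            }

            // Stream.Read may return fewer bytes than requested, so keep reading until done.
            static byte[] ReadExactly(Stream stream, int count)
            {
                var data = new byte[count];
                int offset = 0;
                while (offset < count)
                {
                    int read = stream.Read(data, offset, count - offset);
                    if (read == 0) throw new EndOfStreamException("Connection closed mid-message.");
                    offset += read;
                }
                return data;
            }
        }

    Network byte order is used here so that a non-.NET peer only has to agree on the prefix width and byte order, nothing platform-specific.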

  • W3bbo

    evildictaitor wrote:

    TCP can and will fragment packets, and sockets cannot tell the difference between fragmented packets and whole ones. The best thing you can do is length-prefix your packets on the wire and wait for the full data to be received before processing them.

    Out of curiosity, how do web browsers know when a resource has finished being sent when it doesn't come with a Content-Length header?

  • evildictaitor

    @W3bbo:

    Content-Length is the default way for content which has a predictable length before being sent (e.g. images), but for dynamically generated content that the server doesn't want to buffer, the server can use the "Transfer-Encoding: chunked" header.

    When this header is specified, the server chunks up the content on natural boundaries (e.g. every Console.Write or every internal buffer flush), then length-prefixes each chunk and sends it. The final chunk has length 0 and denotes that the entire stream is finished.

    Your web-browser abstracts this all away from you, so when you do a View-Source you'll see the reconstructed stream, but you can use Wireshark to see this in action if you want.
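
    If you want to see what that framing looks like without firing up Wireshark, here is a stripped-down decoder sketch. It ignores chunk extensions and trailer headers and assumes the response headers have already been consumed from the stream:

        using System;
        using System.IO;
        using System.Text;

        static class ChunkedDecoder
        {
            // Each chunk is "<hex size>\r\n<data>\r\n"; a final "0\r\n\r\n" ends the body, e.g.
            //   4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n   decodes to   "Wikipedia"
            public static byte[] Decode(Stream stream)
            {
                var body = new MemoryStream();
                while (true)
                {
                    int size = Convert.ToInt32(ReadLine(stream), 16);   // chunk size is hex
                    if (size == 0) { ReadLine(stream); break; }         // final blank line after the 0 chunk

                    for (int i = 0; i < size; i++)
                    {
                        int b = stream.ReadByte();
                        if (b == -1) throw new EndOfStreamException();
                        body.WriteByte((byte)b);
                    }
                    ReadLine(stream);                                   // CRLF that follows the chunk data
                }
                return body.ToArray();
            }

            // Reads an ASCII line terminated by CRLF.
            static string ReadLine(Stream stream)
            {
                var sb = new StringBuilder();
                int b;
                while ((b = stream.ReadByte()) != -1 && b != '\n')
                    if (b != '\r') sb.Append((char)b);
                return sb.ToString();
            }
        }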
