I was trying to send 50 bytes every 40 ms over TCP/IP, but the data is being received as 250 bytes every 200 ms. I think Windows is gathering data for some amount of time before sending it.
Is it possible to add a send-timeout pin to the TCP/IP modules, so I can configure 40 ms as the maximum send timeout?
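For what it's worth, the numbers in the question are exactly consistent with the OS coalescing five consecutive messages into one packet, which can be checked with trivial arithmetic:

```python
# Observed: 50 bytes every 40 ms sent, 250 bytes every 200 ms received.
# Both ratios point at the same batching factor of 5 messages per packet.
msg_bytes, msg_interval_ms = 50, 40
recv_bytes, recv_interval_ms = 250, 200

batch = recv_bytes // msg_bytes
print(batch, msg_interval_ms * batch)  # → 5 200
```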
Neither the sending side nor the receiving side of the TCP node has an option to set a timeout. The problem you are experiencing might also be caused by a network switch or router in between. You may be able to set the MTU size somewhere deep in your network adapter's driver settings.
Another possibility might be just sending longer strings. As long as the network is fast enough, this should not be much of a performance issue.
As Tonfilm already suggested, UDP might be a better choice for real-time applications anyway: UDP does not retransmit lost packets, so the latency is more constant.
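UDP also preserves message boundaries, which is exactly what gets lost with TCP buffering here. A minimal Python sketch over loopback (the port is chosen by the OS; in vvvv you would use the UDP nodes instead):

```python
import socket

# Receiver: bind a UDP socket to loopback on an OS-assigned free port.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
port = recv.getsockname()[1]

# Sender: each sendto() becomes exactly one datagram on the wire,
# so a 50-byte message arrives as a 50-byte message.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"x" * 50, ("127.0.0.1", port))

data, _ = recv.recvfrom(1024)
print(len(data))  # → 50

send.close()
recv.close()
```

The trade-off is that lost or reordered datagrams are simply gone, which is usually acceptable for a 25 Hz control stream.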
Indeed, I have lowered the MTU to 250, but this affects the performance of everything network-related on my PC…
A friend who is a Java programmer told me there is some kind of flush possibility in the deep-down regions of TCP. The Windows OS tries to put as much payload as possible into TCP packets by buffering data for just a little while.
Your suggestion of padding the strings out to the maximum payload length, defined by the MTU size minus the length of the packet headers, is a reasonable workaround. I am going to try to build this into my vvvv project :)
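A sketch of that workaround in Python, assuming a fixed frame size of 50 bytes and null-byte padding (both are my choices for illustration, not anything the TCP node prescribes):

```python
FRAME = 50  # assumed fixed message length in bytes

def pad(msg: bytes, frame: int = FRAME) -> bytes:
    """Pad a message with null bytes up to the fixed frame size."""
    if len(msg) > frame:
        raise ValueError("message longer than frame")
    return msg.ljust(frame, b"\x00")

def split_frames(stream: bytes, frame: int = FRAME):
    """Cut a received TCP byte stream back into fixed-size frames."""
    return [stream[i:i + frame] for i in range(0, len(stream), frame)]

# Five padded messages coalesced into one buffer still split cleanly.
buf = b"".join(pad(m) for m in [b"a", b"bb", b"ccc", b"dddd", b"eeeee"])
print(len(split_frames(buf)))  # → 5
```

Because every frame has the same length, the receiver can recover message boundaries no matter how the OS batches the sends.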