OSC Server queuing/delaying messages

hey,

i’ve got a simple patch acting as a receiver for OSC messages arriving at high frequency (around 50 fps) over a wired ethernet connection. it works totally fine, but after a few hours the server seems to queue up messages and spit them out with a very noticeable delay (increasing over time, growing from seconds to minutes).

it looks like the problem is on the VVVV side, as messages keep arriving even after a physical disconnect of the sender (until the internal queue is empty?). or might it even be the OS doing this?

interestingly, disabling/enabling the OSC Server does not help - a noticeable delay remains even after all messages have arrived, no more messages are being sent, and i restart sending. however, with a restart (F8/F5) all is good again.

i looked inside the nodes but could not really see where this queuing would happen. UDP messages keep arriving at the UDPServer even after disconnecting the sender. my understanding is that UDP messages that are not processed immediately would simply be discarded.
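for what it’s worth, at the OS level a UDP socket can only buffer a limited amount: unread datagrams pile up in the socket’s receive buffer and, once it is full, new ones are silently dropped. a minimal python sketch (the port is just an example) to illustrate where that limit lives:

```python
import socket

# a minimal sketch of OS-level UDP buffering (not vvvv code):
# unread datagrams accumulate in the socket's receive buffer until
# it is full; after that, new datagrams are silently dropped.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9000))  # 9000 is an arbitrary example port

# the buffer is finite (typically some tens of kilobytes), so it can
# only hold a fraction of a second of 50 fps traffic - a delay that
# grows to minutes would have to be queued at the application level.
print("receive buffer bytes:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
```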

what would you suggest to further debug this (which is quite time-consuming, because one has to wait a few hours until it happens)?
what are possible workarounds? putting the whole receiver logic in a ManageProcess region and periodically restarting the receiver?

thanks!

Have you tried sending (& receiving) at a higher frequency? If so, does the problem occur sooner then?

still running these kinds of tests (increasing the sent data, increasing the send framerate, decreasing the receive framerate), but this is really time-consuming.

in my last test the described behaviour appeared after about 3 hours. i noticed the following:

  • disabling/enabling the osc server resulted in not being able to receive messages any more
    • a look in sysinternals TCPView showed that the receiving port appeared to be open twice?! (it would be logical not to receive anything while another instance was still bound to this port)
    • closing vvvv did not close this port -> i therefore assume the socket somehow crashed. i had to restart the pc

i don’t think i had this behaviour before, though.
will report further findings…

some clarifications to the post above:
usually, when disabling/enabling the OSCServer, one can see the port closing/opening in TCPView. not this last time: the port stayed open even when the OSCServer was disabled, and re-enabling it opened it a second time…
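for context, two healthy sockets normally cannot bind the same UDP port - a second bind fails unless address reuse was explicitly enabled. a minimal python sketch (example port) of the expected behaviour, which is why the port showing up twice in TCPView points to a leaked socket rather than two healthy binds:

```python
import socket

# minimal sketch: binding the same UDP port twice normally fails,
# so seeing one port listed twice suggests a socket was leaked
# (e.g. not closed after a crash) rather than two working binds.
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("0.0.0.0", 9000))  # 9000 is an arbitrary example port

b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    b.bind(("0.0.0.0", 9000))
except OSError as e:
    # on Windows this raises WSAEADDRINUSE (error 10048)
    print("second bind refused:", e)
```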

could you take the OSCServer out of the equation and only use a UDPServer for testing?

next test:

  • (luckily) did not have any socket crashes like before

my findings with increased send and data rate:

  • osc device sends at 100fps, patch runs at 60fps
    • after a clean start the patch receives data instantly - all looking good
    • a while later a delay appears at the receiver. this delay appeared much earlier than in former tests (probably due to the increased send rate).
  • in this state:
    • decreasing the send rate to 20fps seems to make the patch catch up and the response is instant again
    • then: increasing the send rate to 100fps immediately causes the delay again (also ‘ondata’ of the receiver flickers, which it shouldn’t when the send rate is greater than the patch framerate).
    • going back to a 20fps send rate makes it recover
    • switching between 100 and 20fps shows the same behaviour again
  • final sanity check: restarting the patch (F8/F5) while the sending device stays at 100fps makes the patch good again, which indicates that the patch (and not the sending device) is the troublemaker

for me, this is good news, because a decreased send rate is still good enough. the situation just leaves a strange taste in my mouth, as it does not feel very safe for long-running patches.

@joreg testing a raw udp server would be a good idea. also, i did not test a setup with two patches (send/receive) to rule out 100% that it is my device causing the trouble…

To me it sounds like the framerates should be the same. Have you tried 60fps at both ends?

well, this also happens when the sending framerate is slower than the patch framerate (initially i had a 50fps send rate vs. a 60fps patch rate, but with several messages sent at once in each transmission).
my mental model of all this is that the patch is supposed to process all received messages in a frame (that’s what the reactive sampler does). if, for some reason, the receiver cannot keep up, i’d expect it to discard messages (as is common with UDP messages on the internet) and not queue them up. if queuing is indeed what happens, i’m missing some kind of flush option to clear the queue. also, there is no indication that arriving messages are “out-of-sync”.
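to make that mental model concrete, here is a minimal python sketch (nothing vvvv-specific, all names are mine) of the behaviour i would expect: drain the socket once per frame with non-blocking reads and drop the surplus instead of queuing it:

```python
import socket

def drain_frame(sock: socket.socket, max_messages: int = 64) -> list[bytes]:
    """read everything currently buffered; cap the per-frame budget
    so a backlog can never build up across frames."""
    msgs = []
    while True:
        try:
            data, _addr = sock.recvfrom(65535)
        except BlockingIOError:
            break  # buffer drained - nothing left this frame
        msgs.append(data)
    # keep only the newest messages; anything older is stale anyway
    return msgs[-max_messages:]

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9000))  # example port
sock.setblocking(False)

# called once per frame by the render loop (sketched, not a vvvv API):
# messages = drain_frame(sock)
```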

still, i’m not 100% sure where this “queuing”-like behaviour happens (network, OS or patch level) and would have to do more experiments to rule out the possibly responsible components.

well, i was too quick to call this a solution. even with a decreased send rate (20 fps) i get a delay of arriving messages after a few hours of running this thing.

checking the output of a plain UDPServer right now…

further investigation showed the following:

  • when the delay happens:
    • switching to a UDPServer and looking at the raw OSC data shows that the messages arrive immediately (i manually filtered and decoded the OSC messages to have proof)
    • when switching back to the OSCServer the delay is still there. however, i noticed the following: reducing the number of simultaneously sent OSC messages with different topics (on the sending device) to only one makes the delay disappear. increasing the number of sent topics again makes the delay reappear.

my takes from this are:

  • UDPServer works fine
  • the OSC receiving logic has some issues when it has to process several OSC messages with different topics arriving at the same time (after running for a while).

for now i will probably fall back to sending raw UDP messages and doing manual decoding on the VVVV side because of this behaviour (which is unfortunate, as using OSC would make life much easier in some scenarios…)

edit: still have to check whether sending only one topic avoids the problem…

just for documentation:
for stress-testing i send 4 topics at 100 fps

  • one with 3 ints
  • three with 39 ints
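for reference, roughly the same stress pattern outside of vvvv would look like this in python (using the python-osc package; host, port and topic names are just examples):

```python
import time
from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

client = SimpleUDPClient("127.0.0.1", 9000)  # target host/port are examples

period = 1.0 / 100  # 100 fps send rate
while True:
    client.send_message("/test/0", [0, 1, 2])     # one topic with 3 ints
    for i in range(1, 4):                         # three topics with 39 ints
        client.send_message(f"/test/{i}", list(range(39)))
    time.sleep(period)
```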

update: i now get this delay after a while even with a single topic (39 ints) sent at 100fps :(

running a test now where client and server run in the same patch (hoping to create a test where this can be reproduced even without my osc-sending device).

here is a little stress-test for sending and receiving osc messages.
watch the received messages and see how the reception quality degrades over time. on my machine, first the number of received messages in the higher topics drops; (much later) this applies to all topics and even a delay becomes apparent.

please check and also look at the test-design (whether it makes sense).
OSC_Stresstest.vl (39.5 KB)
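for anyone without the hardware, the same test idea can be sketched in plain python: a sender embeds a timestamp, the receiver measures the age of each message, so a backlog shows up as growing latency (ports and rates below are just examples):

```python
import socket, struct, threading, time

PORT = 9000  # example port

def sender():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        # embed the send time so the receiver can measure latency
        s.sendto(struct.pack(">d", time.monotonic()), ("127.0.0.1", PORT))
        time.sleep(1.0 / 100)  # 100 fps send rate

recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", PORT))
threading.Thread(target=sender, daemon=True).start()

while True:
    data, _ = recv.recvfrom(64)
    latency = time.monotonic() - struct.unpack(">d", data)[0]
    if latency > 0.1:  # flag anything older than 100 ms
        print(f"delayed message: {latency:.3f}s")
    time.sleep(1.0 / 60)  # receiver polls at ~60 fps, slower than the sender
```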

(screenshot of the stress-test patch)

After running the UDPServer with manual decoding of the OSC messages for over 6 hours now, everything still works like a charm. no delays, and data seems to be received every frame.

can you show us your “manual decoding” approach?

sure. it’s simple because i only send ints in one message:

Outside:
(screenshot)

Inside: DecodeDiffs
(screenshot)
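in textual form, the decoding boils down to the following (a python sketch of the standard OSC wire format for a single all-int message; the DecodeDiffs patch above is the vvvv equivalent):

```python
import struct

def _read_padded_string(data: bytes, offset: int) -> tuple[str, int]:
    """OSC strings are null-terminated and padded to 4-byte boundaries."""
    end = data.index(b"\x00", offset)
    text = data[offset:end].decode("ascii")
    # skip past the string plus padding to the next 4-byte boundary
    offset = (end + 4) & ~3
    return text, offset

def decode_int_message(data: bytes) -> tuple[str, list[int]]:
    """decode a single OSC message whose arguments are all int32."""
    address, offset = _read_padded_string(data, 0)
    typetags, offset = _read_padded_string(data, offset)
    assert typetags.startswith(",") and set(typetags[1:]) <= {"i"}
    count = len(typetags) - 1
    # OSC int32 arguments are big-endian
    ints = list(struct.unpack(f">{count}i", data[offset:offset + 4 * count]))
    return address, ints
```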

thanks for the extensive report. we may have found the issue. please try with the latest VL.IO.OSC 1.0.7. with it, when receiving more messages than can be handled, you should see a “dropping” rather than a “queueing” behavior (as you initially anticipated). please report whether this solves your issue.

(unrelated: also added an “Address” output to the OSCReceiver nodes, where you can see the learned address)

thanks for the quick fix, will run a longer test tomorrow and report!

regarding the address learning: for me there is no address output pin; rather, the learned topic is written into the address INPUT pin of the OSCReceiver. what’s odd: rightclick->configure->“show address” does not have an effect (the pin is not shown/hidden). maybe some kind of internal conflict?

Did not notice any further hiccups after the update. Thanks for the fix!

thanks for pointing this out. fixed for 2021.4