TUIO nodes beta35.2 problem

we tested our tuio touchscreen setup today with beta35, which was working fine in beta34.1, and after a minute or so vvvv went down to 0 fps. deleting the VL-based TUIO nodes made vvvv recover.

i tried to provide a test patch but even failed to get the sender and receiver help patches for tuio working. see screenshots.

this should be fixed in b35.1, see: TUIOCursor (Network Split) bug

can you confirm?

help patch works in 35.1
the crash needs further testing

ok, the full crash of the new tuio nodes happens in any version. we had to remove them and use the old nodes without VL.

providing a demopatch takes too much time right now, as it would mean recording the tuio data somehow, playing it back and reducing the patch.

anyway, it is a fact that the meltdown is connected either to VL or to the tuio nodes, since deleting them after the meltdown, or not using them at all, fixes the problem.

ok, can you still be a bit more specific?

  • which of the TUIO nodes did you use exactly? only a splitter? which one? or also the bundler and joins?
  • what does the crash look like? red nodes? nodes only stopping to return values? or a complete freeze or crash? does tty report anything?
  • how fast does the crash happen? simply when receiving a single object/cursor? or only after a long time with many objects?

  • the TUIO object and cursor split nodes, to just receive data
  • UDP set to concatenate
  • no red nodes, just the framerate going down to 0
  • seems like data overload, since if you don’t generate TUIO data for a while, the patch recovers
  • it is many objects that kill the nodes
  • TTY says nothing

confirmed i can reproduce this. thanks for the report…

wow, thanks for the detective work

in the meantime please try simply setting the UDP (Network Server) node’s “Queue Mode” to “Discard”. that works quite smoothly for me. still have to investigate why concatenate mode is choking vvvv…
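for readers unfamiliar with the two queue modes, the difference can be sketched roughly like this (a hypothetical re-creation in Python, not the actual vvvv/VL code):

```python
def drain(queue, mode):
    """Read a UDP receive queue once per frame.
    'discard'     -> keep only the newest datagram, drop the backlog
    'concatenate' -> hand the entire backlog downstream
    (illustrative sketch only, not the real node implementation)"""
    if not queue:
        return []
    if mode == "discard":
        latest = queue[-1]
        queue.clear()
        return [latest]
    everything = list(queue)  # concatenate: nothing is dropped
    queue.clear()
    return everything
```

with “discard” the per-frame work stays bounded, which is why it runs smoothly; with “concatenate” a slow parser lets the backlog grow frame by frame.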

we need concatenate because in discard mode the incoming data doesn’t work.

Perhaps something like a tokenizer node is needed?
Loop (async), accumulate the input and break the loop when it sees an end character such as a newline (\n). Pass that data out to the tuio interpreter and listen for the next message.

Would that be doable?
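The accumulate-and-split idea could look roughly like this (a Python sketch of the suggestion above; note that OSC over UDP is actually framed per datagram, not newline-delimited, so the delimiter here is purely illustrative):

```python
def tokenize(chunks, delimiter=b"\n"):
    """Accumulate incoming chunks in a buffer and yield each
    complete message once the end character has arrived."""
    buffer = b""
    for chunk in chunks:
        buffer += chunk
        while delimiter in buffer:
            message, buffer = buffer.split(delimiter, 1)
            yield message

# a message split across packets is reassembled:
parts = [b"/tuio/2Dcur set 1", b" 0.5 0.5\n/tuio/2Dcur ali", b"ve 1\n"]
messages = list(tokenize(parts))
# messages == [b"/tuio/2Dcur set 1 0.5 0.5", b"/tuio/2Dcur alive 1"]
```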

i’ve been analyzing this a bit and i’m afraid the problem is simply that our osc-implementation is too slow for a large number of messages. for me everything works fine, even in concatenate mode, for a low number of tuio-objects. only when the number exceeds a certain value does the processing take too long on the receiver side, which leads to more and more messages accumulating over successive frames…

the vl osc-implementation therefore needs an overhaul. i see room for improvement there and will report back here once this is improved.
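the choking described above can be illustrated with a toy model (hypothetical numbers, not measurements of the actual nodes): as long as the parser keeps up, the queue stays empty; past the tipping point the backlog grows without bound.

```python
def backlog_after(frames, msgs_per_frame, parse_ms, frame_budget_ms):
    """Count unprocessed messages left in the queue after `frames`
    frames, given a fixed per-message parse cost and a per-frame
    time budget (toy model of the concatenate-mode pile-up)."""
    queue = 0
    for _ in range(frames):
        queue += msgs_per_frame
        processed = int(frame_budget_ms / parse_ms)  # what fits in one frame
        queue = max(0, queue - processed)
    return queue

backlog_after(60, 40, 0.3, 16.0)   # below the threshold: queue stays at 0
backlog_after(60, 100, 0.3, 16.0)  # above it: 2820 messages and climbing
```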

@guest that is not the problem, in fact the tokenizing is already perfectly handled as you can see in the vl-implementation.

ok, so it’s an implementation issue and not a general VL speed issue?

i would still vote for a soft change of established nodes to VL in order to avoid such problems. we use the beta for production and rely on established, reliable nodes. something like this costs us valuable production time, which is unnecessary and feels like you are making beta users into alpha testers. please make such new nodes optional instead of just replacing nodes. these replacements are not beneficial but problematic.

btw. the device we use is an off-the-shelf touchscreen with 32 touches and tangible objects (which use touches for detection). we didn’t have to provoke the meltdown, it just happened quite easily when testing little things. what i’m trying to say is: this is not an edge case and we didn’t try to break it on purpose.

vl is a programming language that allows you to do things in different ways. i’m quite positive we’ll find a faster way…

i’d argue the change was as soft as it can be: the new tuio nodes were announced and available for testing in alpha releases nearly 5 months prior to including them in a beta-version. also the previously existing TUIO node is still there, only marked as legacy. so opening old patches should simply use the old node without you having to change anything. didn’t that work for you?

can you point me to the device, i’d like to see if i can read something about their tuio implementation. because i’m still wondering (apart from the actual slowness problem) why it wouldn’t work for you in udp-discard-mode. you write “in discard mode the incoming data doesn’t work.” can you elaborate on that? i’d understand that you’d lose some motion-data, but (depending on the implementation of the sender) you should still see all objects/cursors you put on the table.

Ok, since i was playing “Stille Post” (the telephone game) here, there was a misunderstanding. Yes, it is as soft as requested and the legacy node was automatically chosen, but the patcher here switched to the VL nodes himself. He did it because, he said, the legacy node is not identical to the previous implementation. Anyway, i need to test things myself before posting a comment like that. Sorry.

The device is actually from Berlin

latest alpha has the performance of the tuio splitters improved so that i cannot reproduce the choking anymore in concatenate mode. parsing osc-strings was the culprit, which is now sped up. still, the splitters are much slower than the legacy tuioparser, so there is room for further improvement.
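for context on why string parsing can dominate: OSC 1.0 strings are null-terminated and padded to 4-byte boundaries, so a parser can slice them out of the buffer directly instead of building them character by character. a minimal sketch of that encoding (standard OSC 1.0, not the actual vl implementation):

```python
def read_osc_string(buf, offset=0):
    """Read one null-terminated, 4-byte-padded OSC string.
    Returns (string, offset_of_next_field)."""
    end = buf.index(b"\x00", offset)    # find the terminator
    s = buf[offset:end].decode("ascii")
    length = end - offset + 1           # include the null byte
    padded = (length + 3) & ~3          # round up to a multiple of 4
    return s, offset + padded

packet = b"/tuio/2Dobj\x00set\x00"
addr, off = read_osc_string(packet)      # ("/tuio/2Dobj", 12)
cmd, off = read_osc_string(packet, off)  # ("set", 16)
```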

So maybe roll back to the proven working node until the prototype version is up to standard?
It would also be good to avoid more of these situations in the future by testing properly (both features and performance) before assuming something is production-ready.

further performance improvements in latest alphas. getting there…

as mentioned before, no need to rollback anything since the previous version is still there. there was no breaking change. but true, we could consider removing the legacy-status of the previous version so it shows up again in the nodebrowser.
