we tested our tuio touchscreen setup today with beta35, which was working fine in beta34.1, and after a minute or so vvvv went down to 0 fps. deleting the VL-based TUIO nodes made vvvv recover.
i tried to provide a test patch but even failed to get the sender and receiver help patches for tuio working. see screenshots.
ok, the full crash of the new tuio nodes happens in any version. we had to remove them and use the old nodes without VL.
providing a demopatch takes too much time right now, as it would mean recording the tuio data somehow, playing it back and reducing the patch.
anyway, it is a fact that the meltdown is connected to either VL or the tuio nodes, since deleting them after the meltdown, or not using them at all, fixes the problem.
in the meantime please try simply setting the UDP (Network Server) node's "Queue Mode" to "Discard". that works quite smoothly for me. still have to investigate why concatenate mode is choking vvvv…
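to illustrate what i mean by the two modes, here is a rough python sketch of the behaviour as i understand it (the class names are made up for illustration, this is not the actual vvvv/VL code): "concatenate" keeps every datagram that arrived since the last frame, "discard" only keeps the newest one.

```python
from collections import deque

# rough sketch only: "concatenate" hands every datagram received since the
# last frame downstream, "discard" keeps only the newest one. names are
# illustrative, not the actual vvvv implementation.
class ConcatenateQueue:
    def __init__(self):
        self._packets = deque()

    def push(self, packet: bytes) -> None:
        self._packets.append(packet)              # everything is kept until read

    def read_frame(self) -> list:
        packets, self._packets = list(self._packets), deque()
        return packets                            # backlog grows if parsing is too slow


class DiscardQueue:
    def __init__(self):
        self._latest = None

    def push(self, packet: bytes) -> None:
        self._latest = packet                     # older packets are simply dropped

    def read_frame(self) -> list:
        packet, self._latest = self._latest, None
        return [] if packet is None else [packet]
```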
Perhaps something like the tokenizer node is needed?
Loop (async), accumulate the input and break the loop when it sees an end character such as \n (a newline). Pass that data out to the tuio interpreter and listen for the next message.
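Something like this, as a rough Python sketch (just illustrating the idea of accumulating until a delimiter, not actual VL code; the newline delimiter is an assumption):

```python
# Rough sketch of the idea above: accumulate incoming bytes and emit a complete
# message whenever a delimiter (here '\n') is seen. Illustrative only.
class LineTokenizer:
    def __init__(self, delimiter: bytes = b"\n"):
        self._buffer = b""
        self._delimiter = delimiter

    def feed(self, chunk: bytes) -> list:
        """Append received data and return any complete messages."""
        self._buffer += chunk
        *messages, self._buffer = self._buffer.split(self._delimiter)
        return messages
```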
i’ve been analyzing this a bit and i’m afraid the problem is simply that our osc-implementation is too slow for a large number of messages. for me everything works fine even in concatenate-mode for a low number of tuio-objects. only when the number exceeds a certain value does the processing take too long on the receiver side, which leads to more and more messages accumulating over successive frames…
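to put rough numbers on that accumulation effect (made-up figures, just to illustrate): if more messages arrive per frame than the receiver can parse within its frame budget, the backlog grows every single frame.

```python
# made-up numbers, just to illustrate the accumulation effect described above
def backlog_after(frames: int, arriving_per_frame: int, parsed_per_frame: int) -> int:
    backlog = 0
    for _ in range(frames):
        backlog += arriving_per_frame                 # new tuio messages this frame
        backlog -= min(backlog, parsed_per_frame)     # what the parser manages in time
    return backlog

# e.g. 40 messages/frame arriving but only 30 parsed within the frame budget:
# after 600 frames (~10s at 60fps) there are 6000 unprocessed messages queued.
print(backlog_after(600, 40, 30))
```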
the vl osc-implementation therefore needs an overhaul. i see room for improvement there and will report back here once this is improved.
@guest that is not the problem; the tokenizing is in fact already handled properly, as you can see in the vl-implementation.
@joreg
ok, so it’s an implementation issue and not a general VL speed issue?
i would still vote for a soft change of established nodes to VL in order to avoid such problems. we use the beta for production and rely on established, reliable nodes. something like this costs us valuable production time, which is unnecessary and feels like you are turning beta users into alpha testers. please make such new nodes optional instead of just replacing the old ones. these replacements are not beneficial but problematic.
btw. the device we use is an off-the-shelf touchscreen with 32 touches and tangible objects (which use touches for detection). we didn’t have to provoke the meltdown, it just happened quite easily when testing little things. what i’m trying to say is: this is not an edge case and we didn’t try to break it like that on purpose.
vl is a programming language that allows you to do things in different ways. i’m quite positive we’ll find a faster way…
i’d argue the change was as soft as it can be: the new tuio nodes were announced and available for testing in alpha releases nearly 5 months prior to including them in a beta-version. also the previously existing TUIO node is still there, only marked as legacy. so opening old patches should simply use the old node without you having to change anything. didn’t that work for you?
can you point me to the device? i’d like to see if i can read something about their tuio implementation, because i’m still wondering (apart from the actual slowness problem) why it wouldn’t work for you in udp-discard-mode. you write “in discard mode the incoming data doesn’t work.” can you elaborate on that? i’d understand that you’d lose some motion-data, but (depending on the implementation of the sender) you should still see all objects/cursors you put on the table.
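for reference, here is why i’d expect discard mode to still show all objects, assuming the sender follows the usual tuio 1.1 pattern of including an "alive" message with all current session ids in every bundle (the data below is made up):

```python
# made-up example bundle; assuming the usual tuio 1.1 pattern, every bundle
# carries an "alive" message listing ALL current session ids, so even the
# newest bundle alone tells you which cursors/objects are on the table.
# what you lose in discard mode are the intermediate "set" (motion) updates.
latest_bundle = [
    ("/tuio/2Dcur", "alive", [12, 13, 17]),                    # all cursors currently down
    ("/tuio/2Dcur", "set", [13, 0.42, 0.61, 0.0, 0.0, 0.0]),   # only cursor 13 moved this frame
    ("/tuio/2Dcur", "fseq", [1045]),                           # frame sequence number
]

alive_ids = next(args for addr, cmd, args in latest_bundle if cmd == "alive")
print(alive_ids)   # [12, 13, 17] -> complete object list even if earlier bundles were dropped
```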
Ok, since i was playing telephone here (“stille post”), there was a misunderstanding. Yes, it is as soft as requested and the legacy node was chosen automatically, but the patcher here switched to the VL nodes himself. He said he did it because the legacy node is not identical to the previous implementation. Anyway, i need to test things myself before posting a comment like that. Sorry.
the latest alpha has improved performance of the tuio splitters, so i cannot reproduce the choking anymore in concatenate mode. parsing osc-strings was the culprit, and that is now sped up. still, the splitters are much slower than the legacy tuioparser, so there is still room for improvement.
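for the curious, a hedged python sketch of the kind of hot path involved (not the actual VL code): osc strings are null-terminated and padded to a 4-byte boundary, and slicing straight to the terminator is much cheaper than building the string character by character.

```python
# sketch only, not the actual VL code: osc strings are null-terminated and
# padded to a 4-byte boundary. finding the terminator once and slicing is much
# cheaper than concatenating character by character.
def read_osc_string(data: bytes, offset: int):
    end = data.index(b"\x00", offset)             # find the terminator once
    value = data[offset:end].decode("ascii")      # single slice, no per-char copies
    next_offset = (end + 4) & ~3                  # skip padding to 4-byte alignment
    return value, next_offset

# e.g. the address pattern of a tuio message:
packet = b"/tuio/2Dobj\x00,siiffffffff\x00\x00\x00\x00"
addr, off = read_osc_string(packet, 0)
print(addr, off)   # /tuio/2Dobj 12
```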
So maybe roll back to the proven working node until the prototype version is up to standard?
And it would also be good to avoid more of these in the future: please test properly (both features and performance) before assuming something is production ready.
further performance improvements in latest alphas. getting there…
as mentioned before, there is no need to roll back anything since the previous version is still there; there was no breaking change. but true, we could consider removing the legacy status of the previous version so it shows up again in the nodebrowser.