first of all thanks to the developers of vvvv for distributing this fantastic “multipurpose toolkit” for free to the community.
i am a student, relatively new to vvvv, and currently working on a project i am still struggling to get a handle on.
in short my concept is:
i have two MacBookPro Laptops facing each other. each iSight webcam is “watching” the other computer's monitor.
Based on this setup i intend to create a “visual chat” between the 2 computers.
I imagine a visual conversation between two computers “ping-ponging” graphical elements.
The aim is to also document my experiments in form of a book.
i already managed to put together some elements in vvvv
and made some first experiments with various tracking tools (Contour, Trautner, color, …)
but there are still a couple of details and howtos that trouble me (although i am pretty sure it is technically not very difficult):
how to “synchronise” the two vvvv files running on the two separate computers
how to define the “rhythm” of the conversation: Computer A scans the monitor of Computer B for a certain time, then displays “what it sees”, then Computer B scans the monitor of Computer A while Computer A…
how to best save the images produced (every step of the conversation)
I would greatly appreciate talking some things over with advanced users,
and any recommendations or further suggestions are very welcome as well!
-Best way to synchronize is to put the PCs into some sort of loops/intervals. So, PC1 keeps doing a certain action until a break signal is sent/shown by PC2. Then PC1 enters the next loop/interval. That way you keep it simple and they communicate step by step.
I recommend a time frame for each loop, in order to avoid one of the PCs hanging up. After this time, the next loop is entered as a precaution. The story is pushed forward even if some technical problem occurs.
-To achieve this, your starting point should be the Automata node, which can be programmed with a specific logic. You can define your loops there and jump to the next loop on a certain event.
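The loop-with-timeout logic described above can be sketched in plain Python (this is only an illustration of the idea, not vvvv code; `break_signal_seen` and `do_step` are hypothetical placeholders for whatever the actual patch does in each state):

```python
import time

LOOP_TIMEOUT = 10.0  # seconds per loop; tune per installation

def run_loop(states, break_signal_seen, do_step):
    """Cycle through the states of the conversation.

    Stay in each state until the other PC signals a break,
    or until the timeout pushes the story forward as a precaution.
    """
    for state in states:
        started = time.time()
        while True:
            do_step(state)                        # e.g. scan or display
            if break_signal_seen():               # other PC says "done"
                break
            if time.time() - started > LOOP_TIMEOUT:
                break                             # precaution: move on anyway
```

The timeout branch is what keeps the piece running even when one machine misses a signal, exactly as suggested above.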
since MacBookPro Laptops are capable of connecting via Wireless LAN, a more reliable setup would involve the two Laptops actually communicating over the Network. that way, each machine knows when the other one is finished ‘scanning’ and displaying a new image. that circumvents all the problems you might get from two independent programs, such as getting them to start in the right interval, preventing them from phasing out of sync etc.
(on the other hand, those could produce some very interesting results.)
for saving images, there are handy nodes like Writer and Screenshot.
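Since every step of the conversation should end up in the book, sortable, numbered filenames make the sequence easy to reconstruct later. A tiny sketch of such a naming scheme (plain Python; the folder and naming pattern are just suggestions):

```python
import os

def step_filename(folder, pc_name, step, ext="png"):
    """Build a sortable filename, e.g. shots/pcA_step_0042.png,
    so the saved frames line up in conversation order."""
    return os.path.join(folder, f"{pc_name}_step_{step:04d}.{ext}")
```

Feeding a pattern like this to whatever does the actual capture (e.g. the Writer or Screenshot node's filename input) keeps both machines' output interleavable by step number.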
okay, I was thinking of synchronizing them in some other way (more like the independent approach described in that IBM paper).
Anyway, you can connect them by LAN. The easiest way is probably to boygroup them. The easiest way in terms of learning is to use the TCP node, send some strings, check them (string equal node) and trigger your logic/Automata node.
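The send-a-string-and-check-it handshake can be illustrated with raw sockets in Python (a minimal sketch of the protocol idea, not of the TCP node itself; the port and the "DONE" string are arbitrary choices):

```python
import socket

PORT = 9000  # arbitrary; any free port works

def wait_for_done(port=PORT):
    """Listen for one connection and return the string received,
    i.e. the receiving side of the handshake."""
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            return conn.recv(64).decode().strip()

def send_done(port=PORT):
    """Tell the other PC that this sequence is finished."""
    with socket.socket() as c:
        c.connect(("127.0.0.1", port))
        c.sendall(b"DONE\n")
```

Comparing the received string against "DONE" is the string-equal check that would then trigger the next Automata state.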
The independent way is not to connect the PCs by LAN but by visual feedback. PC1 analyzes the output of PC2. You can show a specific color like red for 1 sec on screen to mark the end of a sequence done by this PC (read it out with the Pipet node). The other PC recognizes it and can start its answering sequence.
This is more like real communication, I guess. Some disturbances or obstacles may show up - but hey, communication has never been free of misunderstandings. As long as the PCs don't hang up, this way seems more interesting to me.
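The "red frame marks the end of a sequence" idea boils down to sampling a small screen region (which is what Pipet does for pixels) and deciding whether it is "red enough". A rough Python sketch; the thresholds are guesses that would need tuning against the real camera/screen pair:

```python
def region_mean(pixels):
    """Average (r, g, b) of a list of (r, g, b) tuples, 0..255."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n
    g = sum(p[1] for p in pixels) / n
    b = sum(p[2] for p in pixels) / n
    return r, g, b

def is_end_marker(pixels, min_red=180, max_other=80):
    """True if the sampled region is predominantly red,
    i.e. the other PC is showing its end-of-sequence marker."""
    r, g, b = region_mean(pixels)
    return r >= min_red and g <= max_other and b <= max_other
```

Requiring the marker to hold for several consecutive frames (the "1 sec" above) would make it robust against single noisy camera frames.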
LAN is a good point, but i think conceptually for my project it is more interesting to use the “visual feedback” method suggested by frank.
That highlights the idea of visual communication between the 2 PCs, which are connected by nothing other than their cameras and screens.
A further issue for me is how best to start the whole system. i have been thinking about different possibilities.
Q: Is it possible to do something like uploading an image to a webpage which one PC then grabs - or to send an email with an image attached?
something like a remote access would be nice…
thanks for the hints kalle; gave me a good idea of what's possible.
will get back to that in due time.
after experimenting with several of the tracking tools available for vvvv, i found that Contour works best for my purposes so far.
The problem with the color-tracker and the others is that they need a predefined setup with defined objects and colors.
As it is important for my project to keep the setup flexible and open to various changing inputs, they don't really suit.
But then Contour only lets me work in black & white - is there any workaround imaginable to also get the colors of the tracked objects, even if i don't know the colors beforehand?
Yes, you can specify the position where the colortracker node sets up/learns a color. So you can define a small region on the opposite PC's screen, observed by your colortracker, where a new image will always start.
To make it more flexible, you could put the contour node first and check which area of the screen has changed. Take the center position of that blob and sample it with the colortracker for the init color. As a result, your colortracker has learned a new image to follow/recognize.
If that doesn't suit your needs, you can also (not too sure) define up to 8 colors in advance and give them a broad recognition range by default. Thus your PC can recognize 8 differently colored blob regions. Just make sure the opposite PC is limited to this amount/range of colors.
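The contour-then-colortracker trick above can be reduced to two small steps: take the color at the blob's center as the "learned" color, then recognize it later within a tolerance. A pure-Python stand-in for the node logic (the frame representation and the tolerance value are just assumptions for the sketch):

```python
def learn_color(frame, blob_center):
    """frame: dict mapping (x, y) -> (r, g, b).
    Returns the color at the blob center, i.e. the init color
    the tracker would 'learn'."""
    return frame[blob_center]

def matches(color, learned, tol=40):
    """True if each channel is within tol of the learned color,
    the 'broad recognition range' mentioned above."""
    return all(abs(c - l) <= tol for c, l in zip(color, learned))
```

The wider `tol` is, the more robust tracking becomes against camera noise - but also the fewer clearly distinguishable colors the opposite PC may use.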