I’m new to v4. For my thesis on non-linear visualisation I have learned to work with vvvv, and I have several questions about using the Kinect and Arduino. I searched the forum but couldn’t find anything that helps me. At the moment I’m trying to get input from 5 push-buttons via Arduino to change settings like the shape, rotation or colour of objects.
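In case it helps to see the data flow outside vvvv: a common pattern is to have the Arduino print the five button states as one comma-separated line per loop, and split that string on the receiving side (in vvvv you would do this with the RS232 (Devices) node plus string nodes). A minimal Python sketch of just the parsing step, assuming a hypothetical `"1,0,0,1,0\n"` line format from the Arduino:

```python
# Sketch of the parsing logic only -- the serial line format ("1,0,0,1,0\n")
# is an assumed Arduino output format, not anything vvvv-specific.

def parse_buttons(line: str) -> list:
    """Turn one serial line like '1,0,0,1,0' into five button states."""
    parts = line.strip().split(",")
    if len(parts) != 5:
        raise ValueError("expected 5 button values, got %d" % len(parts))
    return [p == "1" for p in parts]

# Example: buttons 1 and 4 pressed
states = parse_buttons("1,0,0,1,0\n")
```

Each boolean can then drive one setting (shape, rotation, colour), the same way you would route the separated slices in the patch.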
What is the best SDK for the Kinect, OpenNI or the Microsoft SDK? Or rather, which works better for what?
How can I get outlines of tracked persons? (Using Prototyping Interfaces I get the skeleton back, but no outlines.) This must be possible with the depth texture, I think, but I can’t get it to work right now.
*How can I change the number of persons tracked by the Kinect? (With the MS SDK and the patch from Prototyping Interfaces the Kinect only tracks one skeleton. I read about the Kinect depth texture and some kind of cut-off for the depth node, but can’t get it to work.)
*Is it possible to get the outlines/shapes of several persons and tell v4 to overlay them with triangles or some kind of nice particle effect, in different colours depending on the distance?
*I’ve seen some nice videos of people standing in front of the Kinect being tracked. How can I tell v4 not to show the room, but only render the persons in front of the Kinect?
*I would like to get some colour-changing effects from an audio input; that works for me right now, but I want to use only low- and high-frequency sounds. (At the moment I’m trying to find a filter/node that makes this possible.)
*Is it possible to use line-in for external sounds? v4 only offers me PC microphone, Kinect microphone and soundcard output. Do I need external hardware for this? I tried to connect my iPod to the line-in, but nothing happens. I think that’s a settings problem and can be fixed very quickly; I’ll try again tonight.
That’s a lot of questions; I hope you can help me with some of them and that they are understandable. I’ll stay tuned and try to solve some problems myself, but for now I’m playing with the Arduino input.
i recommend the Microsoft Kinect drivers (i struggled with the others, then got a Windows 8 laptop, and everything just works now. …but yeah, you need Windows 8!)
look at Pipet and Trautner. (the Player (Kinect Microsoft) node will give you an image cutout?)
pretty sure the Kinect has a 2-person skeleton limit. combine with id and depth to get the tracking area.
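The depth cut-off mentioned in the thread boils down to this: keep only pixels whose depth value falls inside a near/far window, so the room and walls drop out and only people inside the window remain (in vvvv you would apply this to the depth texture, e.g. in a pixel shader, rather than in code). A toy Python illustration on a 1-D “depth row”, with the 800–2500 mm window being an assumed example range:

```python
# Depth cut-off idea: True where the depth value (in mm) lies inside the
# [near, far] window -- i.e. where a person could be standing.
# The 800/2500 mm defaults are example values, not Kinect constants.

def depth_mask(depth_row, near=800, far=2500):
    """Return True for each depth sample inside [near, far] millimetres."""
    return [near <= d <= far for d in depth_row]

row = [4000, 4000, 1200, 1100, 4000]   # wall, wall, person, person, wall
mask = depth_mask(row)                  # only the two "person" pixels pass
```

The same mask, applied per pixel to the depth image, gives the person cutout; colouring by the raw depth value inside the window gives the distance-dependent colours asked about above.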
Pipet, Trautner and Player give you points/areas you can then attach things to.
look into FFT (Fast Fourier Transform). (…watch your processor go through the roof)
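For the low-/high-frequency question: an FFT turns the audio into per-frequency-bin magnitudes, and picking “low” or “high” just means summing the bins whose frequencies fall in the band you care about (in vvvv that would be slicing the FFT node’s output spread). A self-contained Python sketch of that idea, using a deliberately naive DFT:

```python
import cmath
import math

def dft(samples):
    """Naive DFT -- fine for illustration, far too slow for real-time audio."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

def band_energy(samples, sample_rate, lo_hz, hi_hz):
    """Sum |X[k]|^2 over bins whose centre frequency lies in [lo_hz, hi_hz]."""
    spectrum = dft(samples)
    n = len(samples)
    return sum(abs(spectrum[k]) ** 2
               for k in range(n // 2 + 1)
               if lo_hz <= k * sample_rate / n <= hi_hz)

# 64 samples of a pure 1 kHz tone at an 8 kHz sample rate
rate = 8000
sig = [math.sin(2 * math.pi * 1000 * t / rate) for t in range(64)]
low = band_energy(sig, rate, 0, 500)       # almost no energy below 500 Hz
high = band_energy(sig, rate, 800, 1200)   # the tone lands in this band
```

Mapping `low` and `high` to two colour channels gives exactly the low/high colour-changing effect asked about.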
you probably can’t get at your line-in because of the drivers. (e.g. i cannot get line-in on my laptop, but i can on my tower. i can, however, get the line-in(s) from my Tascam external soundcard on both computers.)
all the information you require is somewhere on the website. i suspect you have the same problem as me: you have to know the name of something before you can look it up! your questions also make me think that you have jumped straight in at the deep end.
the best thing to do with any vvvv problem seems to be to make up a patch and post it, so people can have a look and suggest things for the gaps.
I can’t help with the Kinect, but for the audio: in the vvvv girlpower folder there is a folder called “next generation”, and in there “audio”. Check them all out; the last ones are cool GFX. I love nr 21 :)
I will try to get some new stuff done tonight and post the patch with my questions here.
A new question: I’ve found a nice particle effect that I want to use, some kind of spinning sphere. The problem is that the rotation comes from the camera and not from the sphere itself. Is it possible to make the sphere rotate and fix the camera in place? I played around with the Transform node, but nothing works for me. I’m at work at the moment, so I will post the patch later.
Not knowing how each node works or what it is called makes it hard to find answers. At the moment I work with Prototyping Interfaces and try to build patches to figure out what each node does, to get an understanding of how it works and what it changes.
So I’m already using a Win 8 machine and had a lot of (self-inflicted) trouble getting it to work.
The Arduino problem comes from Prototyping Interfaces; I wrote them an e-mail and got the answer that it won’t work with the latest vvvv installer. I have to try it now with vvvv 29.2 and send them feedback.
Thanks a lot. I will see what I can get done in the next days and give feedback.
Why would you need Windows 8? I have a Win7 machine with the Kinect SDK and everything works fine. But: if you want to connect more than one Kinect to one PC, every Kinect has to go to a different USB controller; different ports on one controller don’t work… and USB 3.0 ports won’t work either…
I got the particle system added to the Kinect patch, but I have the same problem as before: the rotation of the system comes from an LFO on the camera itself. I can’t find any node to rotate the particle system or change its coordinates. Is it only possible to change it via the camera?
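One way to read this problem: rotating the camera and rotating the object are two different transforms, and the LFO is currently driving the camera one. In vvvv the usual fix is to connect the LFO to a Rotate (Transform) feeding the particle system’s transform input instead of the camera. As a sketch of what that object-space rotation does (axis choice is an assumption for illustration):

```python
import math

def rotate_y(points, angle):
    """Rotate 3-D points about the Y axis -- an object-space rotation,
    i.e. the object spins while the camera stays fixed."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x + s * z, y, -s * x + c * z) for x, y, z in points]

# Spin the object, keep the camera fixed: only the object's angle advances.
sphere_pts = [(1.0, 0.0, 0.0)]
quarter = rotate_y(sphere_pts, math.pi / 2)   # (1,0,0) ends up on the Z axis
```

Feeding a steadily increasing angle (the LFO’s job) into such a rotation on the geometry side gives the spinning sphere with a stationary camera.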
I tried to insert some of the nodes described above (Pipet and Trautner). Standalone, the help patch works, but how can I place it into the Kinect patch? The only result was a black screen, and I think that’s not what I was looking for. I want to get something like the attached picture, but have no idea which node I have to use. It’s really hard to find the correct nodes if you don’t know their names. If I use the Depth node, the only result is a black screen, in my patch and in the help patch too.
There is no reason to get a colour-mapped depth. I want a clean background and only the tracked persons in the render window. At the moment I get the skeleton of one tracked person. What I’m trying to get now is more than one skeleton (in the second post ksp wrote that the number of tracked persons is 2), but I’ve seen several videos with more than 2 persons tracked (I know they were created with vvvv), so I think it must be possible. Next, I would like to create a beautiful effect with sound and some kind of changing cubes, quads or similar, depending on the position and movement of the people.
Which node changes the colour of the joints according to the distance, and why does the distance effect not start on patch startup but only after a certain time, or sometimes not at all? Could there be a bug in it?
Sorry, I can’t find the node that changes this for me.
I would like to get something like this (sorry for the bad quality, it’s a snippet from a video). It was made outdoors; I have no idea whether it still works in a room, as I intend. Is it possible to create something like this with Trautner, or is there another patch that can handle it? I think it’s a combination of Trautner and some kind of particle effect. Furthermore, I think it was not made with the skeleton tracking node. At the moment I’m trying to get some non-RGB information from the Kinect, but it’s still a black screen for me. I played around with the Kinect (Devices Microsoft) help patch, but there’s no output; only the default camera gives a video signal. If I change nodes, e.g. replacing the RGB with the Depth node, nothing happens.
there is no colour change according to distance that i can see. (the bones are white, the joints are red)
if you are not getting any other data from the Kinect nodes, make sure that you have enabled colour and/or depth on the main Kinect node. e.g. the Player node won’t give anything out without depth enabled.
…also, you have a ‘toggle’ attached to the Update ‘bang’ of the sprites.