I am working on a live performance with a friend, based on controlling samplers in Ableton from vvvv with VR. Honestly I hate hacking Ableton like this because it's much easier to make something in Reaktor and VVVV.Audio, but my friend works in Ableton, so this is where we are. I have to mess around with Python control surfaces and calculate the sample position from the MIDI clock, which is kind of meh. This is where I'm at:
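For context, the sample-position calculation is just tick counting. This is a minimal sketch of the idea, assuming a standard 24 PPQN MIDI clock and a known, stable BPM (the function name and parameters are mine, not from any Ableton API):

```python
PPQN = 24  # MIDI clock sends 24 timing ticks per quarter note

def sample_position(tick_count, bpm, sample_rate=44100):
    """Convert a running MIDI clock tick count to a sample offset."""
    beats = tick_count / PPQN          # quarter notes elapsed
    seconds = beats * 60.0 / bpm       # wall-clock time elapsed
    return int(seconds * sample_rate)  # position in samples

# e.g. 96 ticks at 120 BPM = 4 beats = 2 seconds of audio
print(sample_position(96, 120))  # 88200
```

In practice you would also have to handle tempo changes and clock jitter, which is part of why this approach feels meh.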
The basic idea is that there are activatable objects in the environment, arranged in groups. A group is one track in Ableton, and when you activate an object, it plays the sample and gives you control over it.
I have a question about architecture concerning vvvv and VL. VL does the sampler control part. I feed values into it with bin sizes so I can easily make multiple samplers: feed in 10 positions with bin size set to 5 and I get two working samplers.
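For anyone unfamiliar with the bin-size pattern: a flat spread plus a bin size implicitly defines one sub-spread per sampler instance. A rough sketch of the splitting logic (hypothetical helper, just to illustrate the convention):

```python
def split_bins(values, bin_size):
    """Split a flat spread into per-instance bins, vvvv BinSize style."""
    return [values[i:i + bin_size] for i in range(0, len(values), bin_size)]

positions = [0.0, 0.1, 0.2, 0.3, 0.4,   # bin 1 -> sampler 1
             0.5, 0.6, 0.7, 0.8, 0.9]   # bin 2 -> sampler 2
bins = split_bins(positions, 5)
print(len(bins))  # 2 samplers
```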
The problem, however, is raycasting. I want to always raycast only against the closest object and turn raycasting off for the rest while the ray is colliding with it. For example:
You want to grab the ground and move the whole world, Google Earth VR style, for locomotion. While doing this, the ray intersecting some other object would activate it.
Or you are controlling a sampler in one group of samples, and there is another object from a different group (track) somewhere behind it; you could accidentally trigger that one while doing this.
I was thinking of feeding the raycast info back from vvvv into VL with a FrameDelay, but this seems very messy, and I already have one feedback loop (order objects by distance, choose the closest one for raycasting). Can bin-sized instances of a VL plugin talk to each other inside VL, or is it better to build some bin-sizing mechanism inside VL itself?
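To make the intended behaviour concrete, here is a minimal sketch of the "closest hit wins, and stays latched while you interact" logic I'm after (plain Python, my own names, not VL syntax):

```python
def closest_hit(hits):
    """hits: list of (object_id, distance) pairs the ray intersects.
    Returns the id of the nearest hit, or None if nothing is hit."""
    return min(hits, key=lambda h: h[1])[0] if hits else None

class RayLatch:
    """Latch onto the closest hit while interacting; ignore all others
    until the interaction (grab/trigger) ends."""
    def __init__(self):
        self.active = None

    def update(self, hits, interacting):
        if self.active is None:
            # nothing latched yet: grab the closest hit, if any
            self.active = closest_hit(hits) if interacting else None
        elif not interacting:
            # release the latch when the interaction ends
            self.active = None
        return self.active

latch = RayLatch()
print(latch.update([("ground", 2.0), ("sampler_b", 1.0)], True))  # sampler_b
print(latch.update([("ground", 0.5), ("sampler_b", 1.0)], True))  # sampler_b (still latched)
```

The key point is that this needs global state across all objects (one latch), which is exactly what the per-bin instances can't easily share, hence the question.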
I think so far this is the only thing that isn't working out with the vvvv bin-size approach. I can work around it, but I would love to know how to do this properly, as I would also like to release it as a contribution.