Hello vvvvorum,
I’d like to have a video processed with what I believe is called ‘spatial mapping’ - each video frame would be read and remapped according to another (map) image. For example, if the first pixel of the MAP image had R=32 and G=0, it would take its color from the pixel at x=32, y=0 of the INPUT video frame. I want to do this to create a kind of warp, where the video ‘rectangle’ is bent into a circular pattern, like a donut (check attached image).
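To make the idea concrete, here’s a minimal sketch of the lookup in Python/NumPy terms (this is just an illustration of the mapping, not what I actually run in Jitter; the function and variable names are made up):

```python
import numpy as np

def remap(frame, map_img):
    """For each output pixel, read the map's R channel as the source x
    and the G channel as the source y in the input frame.
    Since the channels are 8-bit, coordinates are limited to 0..255."""
    xs = map_img[..., 0].astype(int)  # R channel -> source x
    ys = map_img[..., 1].astype(int)  # G channel -> source y
    return frame[ys, xs]              # fancy indexing does the lookup

# Tiny demo: a 2x2 map sampling from a 256x256 frame.
frame = np.zeros((256, 256, 3), dtype=np.uint8)
frame[0, 32] = (255, 0, 0)           # pixel at x=32, y=0 is red

map_img = np.zeros((2, 2, 3), dtype=np.uint8)
map_img[0, 0] = (32, 0, 0)           # R=32, G=0 -> sample x=32, y=0

out = remap(frame, map_img)
print(out[0, 0])                     # the red pixel fetched from x=32, y=0
```

Since the map only changes when the warp changes, the xs/ys lookup arrays could be computed once and reused for every frame.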
This isn’t regular displacement mapping or convolution filtering, as the mapping is a bit more arbitrary.
I’ve built this with Max/MSP/Jitter using one of their own examples (“SpatialMapping”) and it worked pretty well. Now, even though the resulting image is limited to 256x256 pixels of precision (it would be, from the way I’m doing it), that’s good enough for me as it’s blurred afterwards.
Anyhow, I tried building it with FreeFrame, but apparently you can only have one frame input, even though there’s a NUM_INPUTS constant in the code and a few functions to get the number of inputs. I didn’t want to recalculate my map on every frame (for speed), nor use a fixed image, which is why I’d like to use an image coming from vvvv itself as the map input.
So, any suggestions on how this can be done - with FreeFrame (how to use two inputs) or otherwise, “natively” inside vvvv?
- Zeh
PS. Sorry for posting as a guest for the third time, but apparently the vvvv forum is having problems sending me emails. I’ll fix it later.
_example_spatial.png (10.6 kB)