Use texture as input for OpenCV node

Hey there!

I am quite new to VVVV, and I am wondering whether (and how) one can connect a node that outputs textures to a node that requires a CVImageLink as input?

Does anybody know a handy solution?

I have a webcam video stream that I want to crop, so that only a certain region of the image is processed later.
So far I have used: VideoIn (OpenCV DirectShow) -> ImageQuad (OpenCV DX9) -> Crop (Ex9.Texture)
Now I want to process the stream with an OpenCV node again…
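(For comparison outside vvvv: in OpenCV's Python API the crop itself is just an array slice on the frame, since frames arrive as numpy arrays. The region values below are made-up placeholders, and the zero array stands in for a grabbed webcam frame.)

```python
import numpy as np

# Stand-in for a grabbed webcam frame; OpenCV delivers frames as
# numpy arrays of shape (height, width, channels).
frame = np.zeros((480, 640, 3), dtype=np.uint8)

# Region of interest as (x, y, width, height) -- placeholder values.
x, y, w, h = 100, 50, 320, 240

# Cropping is a plain slice: rows first (y), then columns (x).
roi = frame[y:y + h, x:x + w]

print(roi.shape)  # (240, 320, 3)
```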

Best regards

With the latest VVVV.Packs.Image and DX11
you can convert a Texture to an Image (CVImageLink).

Hmm, I never knew this from the contributions…
Does the image pack have something specific to do with DX11?
Wouldn't the texture-to-CVImageLink conversion be better off in the image pack?
Is there a DX11-to-DX9 texture converter node?

It's not (yet) in the contribution; you have to compile the source
to get these new features.
Elliot also added some OpenCV nodes that use DX11 for the calibration process,
so for the Kinect you can in theory stick with the Microsoft driver,
using AsImage (DX11)…
Right now, texture sharing only works from DX9 to DX11,
using FromSharedTexture (DX11.Texture 2d) with the /dx9ex switch;
not sure it'll be possible the other way.

@circuitb thanks for the info
@nickinzon you could get away with using CaptureProperty (OpenCV) to pan/zoom the VideoIn (OpenCV DirectShow), if your cam supports it.

Thanks a lot for this hint! This works fine for me!

By the way, I was wrong: DX11 -> DX9 is possible:
AsImage (OpenCV DX11) -> AsTexture (OpenCV DX9)

I had a word with Elliot two days ago in Moscow; he told me that a new release of the image pack will be available in a week or so.

Sorry for interrupting and bringing this thread back to life.
But: is there any news about the image pack update? I know there is a current version available on Elliot's GitHub account.
But when will a ready-compiled one be available on the contribution page? vvvv.packs.image

From what I know, if things go as planned, end of November. But you know how it is… :)