
How to detect some defined images in video stream

For uni I want to create an educational game. People can put cards (showing a picture of a family, a dog, etc.) on a table, and an overhead camera looks down at the table.

I would like to get a spread of all IDs (cards on the table) in vvvv. After this, no problem.

The cards are pre-defined (10 pcs) and I have them as png images.

I tested VL.OpenCV's MarkerDetector, which works well, BUT aruco/fiducial markers are not an option here. Could the images themselves be used as markers?

DetectObject + training a custom haarcascade for 10 images seems overkill. Would you do it like this?

appreciate the help
silas

Hi @otr789, what you are looking for I think is called “feature detection” and would allow you to “recognize” a set of features in an original image within a second image. This exists in OpenCV but has not been implemented in VL.OpenCV as far as I recall.

I don't imagine bringing the functionality in would be a huge undertaking, but I have no time to get into it right now. If you are adventurous, you are welcome to look under the hood at how some other nodes are implemented and have a go at adding this yourself.

I can assist you in the process to a degree but am quite busy with work so mileage may vary.

Another option would be to use (invisible) aruco or similar markers made from an IR reflective material together with an IR camera. This would make the markers invisible to the user but visible to the CV system.


VL.Yolo.GPU: you could try Yolo and see if that works for you?


Adding my two (VL) cents here: Lobe claim their app will be able to do object detection “soon” (scroll down to Project Templates here). I guess it’ll work as easily as what they have right now for image classification: give it a set of images, label them, wait for the model to train, and run. When this is released, I could update the VL.Lobe nuget to support this feature. I believe this should be usable in beta as a VL plugin.

It also seems you can train your own Recognition model in RunwayML. I’ve never tried this feature so far, but I believe you would then be able to use your trained model like a normal one in vvvv (either remotely or locally). I can try to give it a spin in the next few days. Also, I’ve never tried the VL.RunwayML nuget in beta, so I can’t confirm it’ll work for you :)


Thank you for the good insights, everyone! Very helpful.

I’d love to have a go at the implementation of “feature detection” myself but I’m more or less still a beginner.

I will check out VL.Yolo, and I’ve also been looking for a reason to dive into RunwayML for some time now. Hopefully my old-ish computer will be able to handle the required processing power.

The process to train a model using images looks straightforward!

Going a “hidden route” with an IR camera and IR reflective material also seems like a good solution; I’ll have to do some research on this. Thank you for pointing me in this direction.

Back again! I’ve had some success with training my own recognition models. RunwayML looks very promising, but not being able to run models locally on a Windows PC is a dealbreaker for now.

@sebescudie: Thank you for the great Lobe.ai contribution :) If object detection were ready and it were possible to use a model without having the app itself open, this would 100% be the perfect fit. Very easy to train and use.

I’m a bit stuck right now, also with limited non-cuda hardware.

My next step would be learning about custom Haar cascade files (which don’t feel so overkill anymore).

If anybody else has another idea, I’d greatly appreciate it!


I haven’t tried it and didn’t find anything concrete, but maybe it’s possible to run RunwayML inside WSL2?