OpenCV: Estimate camera position from QR / ArUco / Fiducial

Hello, I am evaluating a project that involves something like a plotter with X/Y/Z movement (no rotation) and a camera pointing at a surface on which I will be able to place QR codes, ArUco markers, or fiducial trackers, but I am not sure which is best for this job. The idea is to estimate the camera position relative to the viewing surface, on which I can place as many trackers as I want.

I can see that fiducials don't provide any size/distance data, just position and rotation… maybe having at least two of them on screen at the same time and estimating the distance between them?

ArUco seems the best way so far, because it provides all the data I need, but I was not able to track two at the same time.

Same with QR…

Any advice on how to approach this problem?

Hello @andresc4!

It’s a shame that you cannot detect multiple markers at once; I thought that worked, and I will try to have a look.

However, why do you need to detect multiple markers if all you need in the end is the camera position? I would have a single dedicated marker in a known position and use that as the reference to estimate the camera position/rotation. You shouldn’t need more, unless I am not understanding your use case properly.

Hey, great if you can give it a try; maybe I did something wrong…
Using one marker could be a solution.

The project itself would be “like” a robot moving across a huge wall: one motor will move it on X, another motor on Y, and if this works, a third motor will bring a head closer to the wall.
Imagine a huge wall with a lot of lockers on a grid. Say the wall is 10 meters wide with 20 cm lockers, so 50 lockers on X; on the Y axis you have 2 meters, 10 more, so the final grid is 50x10.
The idea would be to give the robot the instruction to go to a specific box.

I would use an encoder on each motor axis, and an end stop on each side to know whether I am at 0x or 99999x ( whatever the step count of the encoder is at the end position ).

I should be able to solve the whole system using just encoders, but I would like to evaluate the possibility of not using them and relying on trackers alone… the good thing about this is that I can simulate the whole process without building anything.

But in this simulation, the camera looking at the lockers, depending on the lens used and on how far away the robot is, may see 1, 2, or maybe 4 markers at the same time. Or, if I am close to the wall, it may momentarily see none, and I will have to “estimate” my movement for the period without any tracker present… that is something that, without encoders, will be hard to achieve, and it will also be erratic if I am not able to see 2 at the same time.

Hey Andres,
I hope I’m getting it right!

This patch gets the distance between two ArUco markers.
I used it very recently to calibrate a robot’s workspace dynamically.
Make sure you calibrate your camera and load the relevant intrinsics file.

GetThe distance between 2 Aruco markers.vl (123.9 KB)
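For anyone reading along who wants the OpenCV equivalent: once pose estimation has produced a translation vector (tvec) for each marker in camera coordinates, the marker-to-marker distance is just the Euclidean distance between the two tvecs. A minimal sketch, with invented tvec values standing in for real pose-estimation output:

```python
import numpy as np

# Hypothetical tvecs (metres, camera coordinates) as returned by
# per-marker pose estimation for two detected markers.
tvec_a = np.array([0.10, 0.00, 0.50])
tvec_b = np.array([-0.10, 0.00, 0.50])

# Both poses live in the same camera frame, so their separation
# is the norm of the difference.
distance = np.linalg.norm(tvec_a - tvec_b)
print(distance)  # → 0.2 (the markers are 20 cm apart)
```

This only gives metric distances if the camera intrinsics and the physical marker side length were correct during pose estimation, which is why calibrating and loading the intrinsics file matters.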

Hello Amir, everything is great here, nice to hear from you! I have seen many amazing projects that you are working on, great!

I am surprised to see that your example works with many markers. I will try to do the same as I did before to figure out where I messed up. But your patch is just what I was looking for, so I will go with this option, as I can get the exact position of each marker and estimate the camera distance from 2 or more! Thanks
