...

Reading ArUco Markers and AprilTags

Learn how to read fiducial markers with OpenCV and Python using a camera.

The Python code we’ll examine is designed to load an image containing ArUco markers, detect these markers, estimate their pose, and then visualize the detected markers and their pose in 3D.

We’ll discuss two versions of the same code. In the first version, we’ll read markers without using a calibrated camera. In the second version, we’ll use a calibrated camera.

Whether we need to use a calibrated camera or not depends on our specific use case.
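As a preview of the calibrated version, the synthetic intrinsics used below can simply be swapped for values measured during a real calibration. The following is a minimal sketch, assuming the calibration results were saved earlier with np.savez; the file name and array keys are hypothetical and are not part of the lesson's code.

import numpy as np
def loadCalibrationData(path="camera_calibration.npz"):  # hypothetical file name
    data = np.load(path)
    camera_matrix = data["camera_matrix"]            # 3x3 intrinsic matrix (hypothetical key)
    distortion_coeffs = data["distortion_coeffs"]    # lens distortion terms (hypothetical key)
    return camera_matrix, distortion_coeffs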


Reading fiducial markers with an uncalibrated camera

In the first example, we’ll discuss how to read markers using an uncalibrated camera.

import cv2
import numpy as np
def loadSyntheticCalibrationData():
    width = 640
    height = 480
    focal_length = 50 // 0.003  # synthetic focal length in pixels
    camera_matrix = np.array([[focal_length, 0, width // 2], [0, focal_length, height // 2], [0, 0, 1]])
    distortion_coeffs = np.zeros([1, 5])  # assume no lens distortion
    return camera_matrix, distortion_coeffs
def preprocessImage(image):
    processed_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # detection works on grayscale
    return processed_image
def detectMarkers(gray_image, marker_dict, detector_params):
    corners, ids, _ = cv2.aruco.detectMarkers(gray_image, marker_dict, parameters=detector_params)
    return corners, ids
def poseEstimation(corners, marker_length, camera_matrix, distortion_coeffs):
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(corners, marker_length, camera_matrix, distortion_coeffs)
    return rvecs, tvecs
def visualization(image, corners, ids, camera_matrix, distortion_coeffs, rvecs, tvecs, marker_length):
    image_markers = cv2.aruco.drawDetectedMarkers(image.copy(), corners, ids)  # outline markers and label IDs
    if ids is not None:
        for i in range(len(ids)):
            image_markers = cv2.drawFrameAxes(image_markers, camera_matrix, distortion_coeffs,
                                              rvecs[i], tvecs[i], marker_length)  # draw each marker's 3D axes
    return image_markers
image_file = "./resources/marker_images/calibrate_20230614_155453.png"
image = cv2.imread(image_file)
camera_matrix, distortion_coeffs = loadSyntheticCalibrationData()
processed_image = preprocessImage(image)
marker_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_5X5_250)  # legacy aruco API (pre-4.7)
detector_params = cv2.aruco.DetectorParameters_create()
detections, ids = detectMarkers(processed_image, marker_dict, detector_params)
marker_length = 0.048  # marker side length; sets the units of the translation vectors
rvecs, tvecs = poseEstimation(detections, marker_length, camera_matrix, distortion_coeffs)
result = visualization(image, detections, ids, camera_matrix, distortion_coeffs, rvecs, tvecs, marker_length)
cv2.imwrite("./output/marker_detection.png", result)
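One caveat about versions: the listing above relies on the legacy aruco interface (cv2.aruco.Dictionary_get, cv2.aruco.DetectorParameters_create, and cv2.aruco.estimatePoseSingleMarkers), which ships with older opencv-contrib-python releases. OpenCV 4.7 reorganized the module, so on newer builds the detection and pose steps look roughly like the sketch below. It reuses processed_image, camera_matrix, distortion_coeffs, and marker_length from the listing above and is only an illustration for newer versions, not the lesson's code.

import cv2
import numpy as np
# Newer-style detection (OpenCV 4.7+): a detector object instead of free functions.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_250)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
corners, ids, _ = detector.detectMarkers(processed_image)
# estimatePoseSingleMarkers may be unavailable on newer builds; cv2.solvePnP with
# the marker's corner coordinates in its own frame recovers the same pose.
half = marker_length / 2
object_points = np.array([[-half,  half, 0],
                          [ half,  half, 0],
                          [ half, -half, 0],
                          [-half, -half, 0]], dtype=np.float32)  # top-left, top-right, bottom-right, bottom-left
if ids is not None:
    for marker_corners in corners:
        _, rvec, tvec = cv2.solvePnP(object_points,
                                     marker_corners.reshape(-1, 2).astype(np.float32),
                                     camera_matrix.astype(np.float64),
                                     distortion_coeffs)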

Lines 1–2: First, we import the ...