Reading ArUco Markers and AprilTags
Learn how to read fiducial markers with OpenCV and Python using a camera.
The Python code we’ll examine loads an image containing ArUco markers, detects the markers, estimates their pose, and then draws the detected markers and their 3D pose axes on the image.
We’ll discuss two versions of the same code. In the first version, we’ll read markers without using a calibrated camera. In the second version, we’ll use a calibrated camera.
Whether we need a calibrated camera depends on our specific use case: detection alone works with approximate intrinsics, but accurate pose estimation requires real calibration data.
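For the calibrated version, the intrinsics come from an actual calibration step rather than being synthesized. As a point of contrast with the synthetic intrinsics used below, here is a minimal sketch of how previously saved calibration results might be loaded; the file path and key names are assumptions for illustration, not part of the course code.

import numpy as np

# Hypothetical file saved after an earlier cv2.calibrateCamera run;
# the path and key names are assumed for illustration.
calibration = np.load("./resources/calibration.npz")
camera_matrix = calibration["camera_matrix"]
distortion_coeffs = calibration["distortion_coeffs"]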
Reading fiducial markers with an uncalibrated camera
In the first example, we’ll discuss how to read markers using an uncalibrated camera.
import cv2
import numpy as np

def loadSyntheticCalibrationData():
    # Approximate intrinsics for an uncalibrated 640x480 camera,
    # with an assumed focal length expressed in pixels.
    width = 640
    height = 480
    focal_length = 50 // 0.003
    camera_matrix = np.array([[focal_length, 0, width // 2],
                              [0, focal_length, height // 2],
                              [0, 0, 1]])
    distortion_coeffs = np.zeros([1, 5])
    return camera_matrix, distortion_coeffs

def preprocessImage(image):
    # Marker detection operates on a single-channel grayscale image.
    processed_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return processed_image

def detectMarkers(gray_image, marker_dict, detector_params):
    # Find marker corners and their IDs in the grayscale image.
    corners, ids, _ = cv2.aruco.detectMarkers(gray_image, marker_dict,
                                              parameters=detector_params)
    return corners, ids

def poseEstimation(corners, marker_length, camera_matrix, distortion_coeffs):
    # Estimate a rotation and translation vector for each detected marker.
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(corners, marker_length,
                                                          camera_matrix,
                                                          distortion_coeffs)
    return rvecs, tvecs

def visualization(image, corners, ids, camera_matrix, distortion_coeffs,
                  rvecs, tvecs, marker_length):
    # Draw the marker outlines and IDs, then overlay a 3D axis for each pose.
    image_markers = cv2.aruco.drawDetectedMarkers(image.copy(), corners, ids)
    if ids is not None:
        for i in range(len(ids)):
            image_markers = cv2.drawFrameAxes(image_markers, camera_matrix,
                                              distortion_coeffs, rvecs[i],
                                              tvecs[i], marker_length)
    return image_markers

image_file = "./resources/marker_images/calibrate_20230614_155453.png"
image = cv2.imread(image_file)
camera_matrix, distortion_coeffs = loadSyntheticCalibrationData()
processed_image = preprocessImage(image)
marker_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_5X5_250)
detector_params = cv2.aruco.DetectorParameters_create()
detections, ids = detectMarkers(processed_image, marker_dict, detector_params)
marker_length = 0.048
rvecs, tvecs = poseEstimation(detections, marker_length, camera_matrix,
                              distortion_coeffs)
result = visualization(image, detections, ids, camera_matrix, distortion_coeffs,
                       rvecs, tvecs, marker_length)
cv2.imwrite("./output/marker_detection.png", result)
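Note that OpenCV 4.7 reorganized the cv2.aruco module: cv2.aruco.Dictionary_get and cv2.aruco.DetectorParameters_create were replaced by cv2.aruco.getPredefinedDictionary and the ArucoDetector class, and pose estimation is typically done with cv2.solvePnP instead of estimatePoseSingleMarkers. The following is a minimal sketch of the detection and pose steps under that newer API, reusing loadSyntheticCalibrationData() from the listing above; it is an illustration under those assumptions, not a drop-in replacement tested against the course environment.

import cv2
import numpy as np

# Detection with the OpenCV >= 4.7 ArucoDetector API.
image = cv2.imread("./resources/marker_images/calibrate_20230614_155453.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_250)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
corners, ids, _ = detector.detectMarkers(gray)

# Pose per marker via solvePnP: the marker is modeled as a square of side
# marker_length lying in the z = 0 plane, centered at the origin.
marker_length = 0.048
half = marker_length / 2
object_points = np.array([[-half,  half, 0],
                          [ half,  half, 0],
                          [ half, -half, 0],
                          [-half, -half, 0]], dtype=np.float32)

camera_matrix, distortion_coeffs = loadSyntheticCalibrationData()
result = cv2.aruco.drawDetectedMarkers(image.copy(), corners, ids)
if ids is not None:
    for marker_corners in corners:
        ok, rvec, tvec = cv2.solvePnP(object_points,
                                      marker_corners.reshape(4, 2),
                                      camera_matrix, distortion_coeffs)
        result = cv2.drawFrameAxes(result, camera_matrix, distortion_coeffs,
                                   rvec, tvec, marker_length)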
Lines 1–2: First, we import the OpenCV (cv2) and NumPy libraries.