April 30, 2024

A new paper by Dr. Geovanni Martinez, coordinator of the IPCV-LAB, on Real-Time Robot 3D Pose Computation from NIR Imagery and ToF Depth Maps for Space Applications, has been published in the Springer book series Lecture Notes in Networks and Systems (LNNS), volume 965:

Geovanni Martinez, "Real-time robot 3D pose computation from NIR imagery and ToF depth maps for space applications," in Trends and Challenges in Multidisciplinary Research for Global Sustainable Development, Lecture Notes in Networks and Systems, vol. 965, Springer, Cham, 2024, pp. 15–27.

This paper presents an algorithm that determines the three-dimensional position and orientation (3D pose) of an exploration robot by processing two multidimensional signals: a monocular near-infrared (NIR) video signal and a time-of-flight (ToF) depth signal, both provided by a monocular NIR ToF camera rigidly attached to the side of the robot facing the terrain. It is shown that when the depth signal is also taken into account during processing, the 3D pose of the robot can be determined accurately even on irregular terrain. The 3D pose is computed by integrating the frame-to-frame robot 3D motion over time using composition rules, where the frame-to-frame 3D motion is estimated by minimizing a linearized photometric error with an iterative maximum likelihood estimator. Hundreds of experiments conducted over rough terrain yielded absolute position and orientation errors below 1% of the distance and angle traveled, respectively. This performance is mainly due to the accurate depth knowledge provided by the monocular NIR ToF camera. The algorithm runs in real time, processing up to 50 fps at VGA resolution on a conventional laptop computer.
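The pose-integration step described above can be illustrated by chaining homogeneous transforms: each frame-to-frame motion estimate is right-multiplied onto the accumulated pose. This is a minimal sketch of the general composition rule for rigid motions, not the paper's actual implementation (the helper functions and the square-path example are illustrative assumptions):

```python
import numpy as np

def rot_z(theta):
    """Hypothetical helper: 4x4 homogeneous rotation about z by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def translate(x, y, z):
    """Hypothetical helper: 4x4 homogeneous translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# Accumulated robot pose, starting at the world origin.
pose = np.eye(4)

# Illustrative frame-to-frame motion estimates: move 1 m forward,
# then turn 90 degrees, repeated four times (a closed square path).
frame_motions = [translate(1.0, 0.0, 0.0) @ rot_z(np.pi / 2) for _ in range(4)]

for T in frame_motions:
    # Composition rule: the incremental motion is expressed in the
    # current robot frame, so it is right-multiplied onto the pose.
    pose = pose @ T

# After four such steps the robot returns to its starting pose,
# so `pose` is (numerically) the identity again.
```

In a real visual-odometry pipeline each `T` would come from the per-frame motion estimator; the point here is only that small relative motions compose multiplicatively into an absolute pose, so per-frame estimation errors also accumulate over the traveled path.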


Keywords: Ego-Motion Estimation, Visual Odometry, Vision-Based Navigation, Planetary Robots, Space Exploration