News

A new paper by Dr. Geovanni Martinez, coordinator of the IPCV-LAB, entitled "Real-Time Robot 3D Pose Computation from NIR Imagery and ToF Depth Maps for Space Applications", has been published in the Springer book series Lecture Notes in Networks and Systems (LNNS), volume 965:


Geovanni Martinez, "Real-time robot 3D pose computation from NIR imagery and ToF depth maps for space applications", Trends and Challenges in Multidisciplinary Research for Global Sustainable Development, Book series: Lecture Notes in Networks and Systems, vol. 965, Springer, Cham, pp. 15–27, 2024.


Abstract


This paper presents an algorithm capable of determining the three-dimensional position and orientation (3D pose) of an exploration robot from the processing of two multidimensional signals, a monocular near-infrared (NIR) video signal and a time-of-flight (ToF) depth signal, both provided by a monocular NIR ToF camera rigidly attached to the side of a robot facing the terrain. It is shown that if the depth signal is also considered during processing, it is possible to accurately determine the 3D pose of the robot, even on irregular terrain. The 3D pose is calculated by integrating the frame-to-frame robot 3D motion over time using composition rules, where the frame-to-frame robot 3D motion is estimated by minimizing a linear photometric error by applying an iterative maximum likelihood estimator. Hundreds of experiments have been conducted over rough terrain, obtaining excellent absolute position and orientation errors of less than 1 percent of the distance and angle traveled, respectively. This good performance is mainly due to the algorithm’s more accurate knowledge of the depth provided by the monocular NIR ToF camera. The algorithm runs in real time and can process up to 50 fps at VGA resolution on a conventional laptop computer.
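As a rough illustration of the pose-integration step described above, the composition rule can be sketched in plain Python using 4x4 homogeneous transforms. The Euler-angle convention and the per-frame motion values below are assumptions chosen for illustration only; they are not taken from the paper, which estimates the frame-to-frame motion from the NIR and depth signals.

```python
import math

def mat4_mul(A, B):
    """Multiply two 4x4 homogeneous transforms given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def frame_to_frame_transform(rx, ry, rz, tx, ty, tz):
    """Build a 4x4 homogeneous transform from rotation angles (rad) and a
    translation, using a Z-Y-X Euler composition (an assumed convention)."""
    cx, sx = math.cos(rx), math.sin(rx)
    cy, sy = math.cos(ry), math.sin(ry)
    cz, sz = math.cos(rz), math.sin(rz)
    # R = Rz * Ry * Rx
    R = [
        [cz*cy, cz*sy*sx - sz*cx, cz*sy*cx + sz*sx],
        [sz*cy, sz*sy*sx + cz*cx, sz*sy*cx - cz*sx],
        [-sy,   cy*sx,            cy*cx],
    ]
    return [R[0] + [tx], R[1] + [ty], R[2] + [tz], [0.0, 0.0, 0.0, 1.0]]

# Integrate the pose over time with the composition rule P_k = P_{k-1} * T_k,
# here with 10 identical illustrative frame-to-frame motions: a small yaw of
# 0.01 rad and a 0.05 m advance along the body x axis per frame.
pose = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]  # identity
for T in [frame_to_frame_transform(0.0, 0.0, 0.01, 0.05, 0.0, 0.0)] * 10:
    pose = mat4_mul(pose, T)

position = [pose[0][3], pose[1][3], pose[2][3]]  # accumulated 3D position
```

Because each per-frame translation is rotated by the pose accumulated so far, the integrated trajectory curves gently to the left, as expected for a constant yaw rate.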


Keywords: Ego-Motion Estimation, Visual Odometry, Visual Based Navigation, Planetary Robots, Space Exploration


Juan Ignacio Montealegre received his bachelor's degree in electrical engineering with honors today. He carried out his final project at the IPCV-LAB, where he developed an algorithm for plane segmentation from point clouds using functions of the Point Cloud Library (PCL) under Ubuntu with ROS, with the point clouds generated by a Microsoft Xbox Kinect V2. Congratulations!
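The idea behind plane segmentation from point clouds can be sketched with a minimal RANSAC scheme in plain Python. This is only an illustrative sketch of the general technique, not the PCL functions used in the project; the parameter values (iteration count, inlier tolerance) are assumptions.

```python
import random

def fit_plane(p, q, r):
    """Plane through three points: returns (a, b, c, d) with unit normal
    (a, b, c) and a*x + b*y + c*z + d = 0, or None if nearly collinear."""
    ux, uy, uz = (q[i] - p[i] for i in range(3))
    vx, vy, vz = (r[i] - p[i] for i in range(3))
    a, b, c = uy*vz - uz*vy, uz*vx - ux*vz, ux*vy - uy*vx  # cross product
    norm = (a*a + b*b + c*c) ** 0.5
    if norm < 1e-12:
        return None
    a, b, c = a / norm, b / norm, c / norm
    d = -(a*p[0] + b*p[1] + c*p[2])
    return a, b, c, d

def ransac_plane(points, iters=200, tol=0.02, seed=0):
    """Find the dominant plane: repeatedly fit a plane to 3 random points
    and keep the candidate supported by the most inliers."""
    rng = random.Random(seed)
    best_plane, best_inliers = None, []
    for _ in range(iters):
        plane = fit_plane(*rng.sample(points, 3))
        if plane is None:
            continue
        a, b, c, d = plane
        inliers = [i for i, (x, y, z) in enumerate(points)
                   if abs(a*x + b*y + c*z + d) < tol]
        if len(inliers) > len(best_inliers):
            best_plane, best_inliers = plane, inliers
    return best_plane, best_inliers
```

On a synthetic cloud of noisy points near z = 0 plus scattered outliers, the recovered normal is close to (0, 0, ±1) and the inlier set covers the planar points; PCL's `SACSegmentation` follows the same sample-score-refine pattern with optimized data structures.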

Dr. Geovanni Martinez's paper, accepted for oral presentation at ICASAT-2023, received the second-best-paper award of the conference.

Title: Real-Time Robot 3D Pose Computation from NIR Imagery and ToF Depth Maps for Space Applications

Abstract: This paper presents an algorithm capable of determining the three-dimensional position and orientation (3D pose) of an exploration robot from the processing of two multidimensional signals, a monocular near-infrared (NIR) video signal and a time-of-flight (ToF) depth signal, both provided by a monocular NIR ToF camera rigidly attached to the side of a robot facing the terrain. It is shown that if the depth signal is also considered during processing, it is possible to accurately determine the 3D pose of the robot, even on irregular terrain. The 3D pose is calculated by integrating the frame-to-frame robot 3D motion over time using composition rules, where the frame-to-frame robot 3D motion is estimated by minimizing a linear photometric error by applying an iterative maximum likelihood estimator. Hundreds of experiments have been conducted over rough terrain, obtaining excellent absolute position and orientation errors of less than 1 percent of the distance and angle traveled, respectively. This good performance is mainly due to the algorithm's more accurate knowledge of the depth provided by the monocular NIR ToF camera. The algorithm runs in real time and can process up to 50 fps at VGA resolution on a conventional laptop computer.

Dr. Geovanni Martinez gave a talk at ICASAT-2023 entitled "Real-Time Robot 3D Pose Computation from NIR Imagery and ToF Depth Maps for Space Applications". In the talk he described an algorithm developed at the IPCV-LAB capable of determining the three-dimensional position and orientation (3D pose) of an exploration robot from the processing of two multidimensional signals, a monocular near-infrared (NIR) video signal and a time-of-flight (ToF) depth signal, both provided by a monocular NIR ToF camera rigidly attached to the side of a robot facing the terrain. The 3D pose is calculated by integrating the frame-to-frame robot 3D motion over time using composition rules, where the frame-to-frame robot 3D motion is estimated by minimizing a linear photometric error by applying an iterative maximum likelihood estimator. Hundreds of experiments have been conducted over rough terrain, obtaining excellent absolute position and orientation errors of less than 1 percent of the distance and angle traveled, respectively. This good performance is mainly due to the algorithm's more accurate knowledge of the depth provided by the monocular NIR ToF camera. The algorithm runs in real time and can process up to 50 fps at VGA resolution on a conventional laptop computer. 


Andres Brenes received his bachelor's degree in electrical engineering today. He carried out his final project at the IPCV-LAB, where he developed an algorithm for the detection, description, and tracking of ORB feature points in image sequences captured by a Microsoft Xbox Kinect V2 under Ubuntu with ROS. Congratulations!