News

The paper by Dr. Geovanni Martínez entitled "Experimental results of testing a direct monocular visual odometry algorithm outdoors on flat terrain under severe global illumination changes for Planetary Exploration Rovers" was accepted for publication in Computación y Sistemas, an International Journal of Computing Science and Applications. Computación y Sistemas is a peer-reviewed, open-access scientific journal of computer science and engineering. The journal is indexed in the CONACYT Index of Excellence of Mexican Journals, Scopus, Web of Science (Core Collection, Emerging Sources), Redalyc, E-Journal, REDIB, Latindex, Biblat, Periodica, DBLP, and SciELO (part of Web of Science).

IPCV-LAB acquired 2 TurtleBots to be used in the IE-0449 Computer Vision course.

These two TurtleBots will be used in the IE-0449 Computer Vision course, which will be taught by Dr. Geovanni Martínez during the first semester of 2018, so that the autonomous-robotics applications developed by the students in the course can also be tested on real robotic platforms (not just in simulations!). Likewise, they will help students who collaborate in the Image Processing and Computer Vision Research Laboratory to first familiarize themselves with low-cost autonomous robots before venturing to use the more expensive and sophisticated robots that the IPCV-LAB has for research, such as the Husky A200 and the Seekur Jr.

A new calibration algorithm is being investigated by Dr. Geovanni Martinez to facilitate the comparison of the experimental results obtained by the monocular visual odometry algorithm developed in the IPCV-LAB with the ground-truth positioning data obtained using a robotic theodolite with a laser range sensor.
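
Although the calibration algorithm itself has not been published, a typical first step when comparing a visual odometry trajectory against total-station measurements is to align the two coordinate frames with a least-squares rigid transform. The Python sketch below illustrates that alignment step (the Kabsch method) under the assumption that corresponding 3D position samples are available from both sensors; it is an illustration of the comparison problem, not the algorithm under investigation.

    # Illustrative sketch only: least-squares rigid alignment (Kabsch method)
    # between visual odometry positions and total-station ground truth.
    # Assumes matched (N, 3) arrays of 3D positions; not the IPCV-LAB algorithm.
    import numpy as np

    def align_rigid(vo_points, gt_points):
        """Return rotation R and translation t minimizing ||R @ vo + t - gt||^2."""
        mu_vo, mu_gt = vo_points.mean(axis=0), gt_points.mean(axis=0)
        H = (vo_points - mu_vo).T @ (gt_points - mu_gt)   # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = mu_gt - R @ mu_vo
        return R, t

Once the frames are aligned, the transformed odometry positions R @ vo + t can be compared point by point with the ground truth.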

The photo shows Dr. Geovanni Martinez testing part of the calibration algorithm in the IPCV-LAB with the help of a three-dimensional calibration pattern, the rover Seekur Jr., and a Trimble S3 robotic total station.

The Image Processing and Computer Vision Research Laboratory (IPCV-LAB) was present at the Max Planck Science Tunnel, which is currently being exhibited in Costa Rica. Dr. Geovanni Martinez, coordinator of the IPCV-LAB, explained to visitors the importance of computer vision in autonomous robotics and carried out demonstrations of the Seekur Jr. robot, which he uses outdoors to test the new vision-based navigation systems under investigation at the IPCV-LAB; such systems are particularly useful in planetary exploration.

During the event, Dr. Martinez allowed the children to teleoperate the Seekur Jr. robot.

Dr. Geovanni Martinez gave a talk at TECCOM-2017. In the talk, Dr. Martinez explained the monocular visual odometry algorithm based on intensity differences, which has been developed at the IPCV-LAB, and compared it with the stereoscopic visual odometry algorithm based on feature correspondences, which is traditionally used in autonomous robotics and also in planetary rovers. He also described the experimental results of testing the monocular visual odometry algorithm on a real Husky A200 rover platform over flat terrain for localization in outdoor sunlit conditions.

The monocular visual odometry algorithm computes the three-dimensional (3D) position of the rover by integrating its motion over time. The motion is directly estimated by maximizing a likelihood function, namely the natural logarithm of the conditional probability of the intensity differences measured at different observation points between consecutive images. It does not require determining the optical flow or establishing correspondences as an intermediate step. The images are captured by a monocular video camera mounted on the rover, looking to one side and tilted downward toward the planet's surface. Most of the experiments were conducted under severe global illumination changes. Comparisons with ground-truth data have shown an average absolute position error of 0.9% of the distance traveled, with an average processing time per image of 0.06 seconds.
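
As a rough illustration of the estimation step described above, the following Python sketch performs one Gauss-Newton update of a 6-DOF motion vector under a Gaussian noise model, in which maximizing the log-likelihood of the intensity differences reduces to weighted least squares. The Jacobians J and the noise variance sigma2 are assumed inputs here; this is a sketch of the general technique, not the published IPCV-LAB implementation.

    # Illustrative sketch, not the published implementation: one Gauss-Newton
    # update of the 6-DOF motion vector b (3 rotations + 3 translations) that
    # maximizes a Gaussian log-likelihood of the intensity differences fd
    # observed at N observation points. J stacks the 1x6 Jacobians that
    # linearize how the motion changes the intensity at each point.
    import numpy as np

    def motion_update(J, fd, sigma2=1.0):
        """J: (N, 6) Jacobians, fd: (N,) intensity differences, sigma2: noise variance."""
        A = J.T @ J / sigma2          # 6x6 normal-equation (information) matrix
        g = J.T @ fd / sigma2         # gradient of the log-likelihood at b = 0
        return np.linalg.solve(A, g)  # maximum-likelihood motion increment

The per-frame translation estimates can then be rotated into the world frame and summed over time to obtain the rover's 3D position.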

This is the link to the presentation:

"Monocualr Visual Odometry for Navigation Systems of Autonomous Planetary Robots", Geovanni Martinez, IPCV-LAB"

The IPCV-LAB recently acquired six batteries and three chargers for its Seekur Jr. mobile robot. These will allow longer operating times and faster recharging during the outdoor experiments being carried out to validate the monocular visual odometry algorithm, which is being developed in the IPCV-LAB. The photo shows Dr. Geovanni Martinez next to the newly received equipment.

The experimental results of testing the monocular visual odometry algorithm developed by the IPCV-LAB on a real Husky A200 rover platform over flat terrain for localization in outdoor sunlit conditions were recently presented at IAPR MVA-2017. The algorithm computes the three-dimensional (3D) position of the rover by integrating its motion over time. The motion is directly estimated by maximizing a likelihood function, namely the natural logarithm of the conditional probability of the intensity differences measured at different observation points between consecutive images. It does not require determining the optical flow or establishing correspondences as an intermediate step. The images are captured by a monocular video camera mounted on the rover, looking to one side and tilted downward toward the planet's surface. Most of the experiments were conducted under severe global illumination changes. Comparisons with ground-truth data have shown an average absolute position error of 0.9% of the distance traveled, with an average processing time per image of 0.06 seconds.
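
For reference, the reported accuracy figure can be computed as in the short Python sketch below, assuming the metric is defined as the mean Euclidean position error divided by the total path length; the paper should be consulted for the exact evaluation protocol.

    # Assumed definition of the error metric, for illustration only.
    import numpy as np

    def position_error_percent(estimated, ground_truth):
        """estimated, ground_truth: time-aligned (N, 3) trajectories in meters."""
        err = np.linalg.norm(estimated - ground_truth, axis=1).mean()
        traveled = np.linalg.norm(np.diff(ground_truth, axis=0), axis=1).sum()
        return 100.0 * err / traveled   # percent of distance traveled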

See paper here (IEEE Xplore).

See the poster presented at IAPR MVA-2017 here.

In 2017, Dr. Geovanni Martinez will begin field tests on rough terrain of the monocular visual odometry algorithm developed in the IPCV-LAB. To this end, a MobileRobots Seekur Jr. all-terrain rover, which is able to operate outdoors in any weather, will be used.

A new 3D video stabilization algorithm has been developed by the IPCV-LAB to remove camera jitter from image sequences captured by planetary robots. First, the frame-to-frame 3D motion of the surface with respect to the camera coordinate system is estimated and accumulated over time. The estimation is performed by maximizing a likelihood function of the frame-to-frame intensity differences measured at key observation points. Then, the jitter is determined as the perspective projection of the difference between the accumulated 3D surface translation and a smoothed version of it. Finally, the stabilized video is synthesized by moving the entire content of each image with a displacement vector having the same magnitude as, but opposite direction to, the estimated jitter for that image. The experimental results with synthetic data revealed real-time operation with low latency and a reduction of the jitter by a factor of 20. Experimental results with real image sequences captured by a rover platform in indoor and outdoor conditions show very reliable and encouraging stabilization results.
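
The jitter-removal pipeline above can be sketched in a few lines of Python. The sketch below is illustrative only: it assumes a simple moving-average smoother, a single nominal surface depth, and a pinhole camera with focal length focal_px, none of which are specified here; see the linked paper for the actual algorithm.

    # Illustrative sketch of the stabilization idea, not the IPCV-LAB code.
    import numpy as np

    def stabilizing_shifts(accum_t, window, focal_px, depth):
        """accum_t: (N, 3) accumulated surface translation per frame (meters).
        Returns (N, 2) pixel shifts to apply to each frame (opposite of jitter)."""
        kernel = np.ones(window) / window
        smoothed = np.column_stack(
            [np.convolve(accum_t[:, k], kernel, mode="same") for k in range(3)])
        jitter_3d = accum_t - smoothed                   # high-frequency component
        jitter_px = focal_px * jitter_3d[:, :2] / depth  # pinhole projection
        return -jitter_px                                # shift opposite to jitter

Each frame would then be translated by its shift (e.g., with an affine warp) to synthesize the stabilized video.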

A detailed description of the algorithm can be found here.

On April 19th and 21st, 2016, at 2 pm, Dr. Geovanni Martinez demonstrated the RTK dGPS outdoor guidance system of the Seekur Jr. rover in the gardens of the School of Electrical Engineering for the UCR Technology Fair participants. They also had the opportunity to teleoperate the rover.
