Please use this address to cite this document: http://dspace1.univ-tlemcen.dz/handle/112/2455
Title: Visual Navigation using Adaptive Learning
Author(s): Berrabah, Sid Ahmed
Keywords: Mobile robots
Adaptive learning
Motion Segmentation
Motion Estimation
Simultaneous Localization and Mapping
Fuzzy Logic
Reinforcement Learning
Publication date: 2012
Abstract: In this thesis we present a navigation solution for a mobile robot. The proposed system uses vision input to enable the robot to build a feature-based map of its environment, localize itself efficiently without artificial markers or other modifications to the environment, detect and track moving objects, and navigate without colliding with obstacles.

The Simultaneous Localization and Mapping (SLAM) problem is tackled as a stochastic estimation problem using an Extended Kalman Filter (EKF). Our contribution consists in building a global map of the environment from several local maps. SIFT features are used in our implementation, and their descriptors serve as a matching constraint. The 3D initialization of the features is based on visual geometry theory. To avoid using outlier features, a motion segmentation and estimation (MSE) process detects the moving parts of the scene; during map building, the features detected on these moving parts are excluded. The MSE process consists of camera motion estimation and compensation, detection of scene cuts or strong camera motion, background Gaussian Mixture Model (GMM) updating, and a Maximum a Posteriori probability Markov Random Field (MAP-MRF) framework that detects the moving objects in the scene and estimates their motion. Two methods for camera motion estimation and compensation are used: one uses the 2D projection of the 3D motion estimated in the SLAM process, and the other uses the dense motion analysis proposed by Dufaux and Konrad.

We also consider the case where the robot is equipped with additional localization sensors such as an inertial navigation system (INS), wheel encoders, and a global positioning system (GPS). Two solutions are considered. The first integrates the INS and encoder data into the dynamic model of the vehicle to estimate its motion in the SLAM process, and uses the GPS data to geo-localize the robot and the built map. The second uses the INS and encoder data in a Kalman filter that corrects the GPS data; the linear and angular velocities estimated by this filter serve as the prediction in the SLAM filter, and the SLAM output is in turn used to update the dynamics estimate in the integration filter. This solution increases the accuracy and robustness of positioning during GPS outages and allows SLAM in environments with fewer features.

The estimated map is used in a path-planning process that generates a list of waypoints allowing the robot to reach user-defined goals. The robot then uses a navigation process to follow the planned path while avoiding unplanned obstacles and moving objects in the scene. For this procedure, infrared and ultrasonic sensors are used for obstacle detection in addition to vision. The navigation procedure is based on two fuzzy logic controllers: a goal-seeking controller, which finds the path to the intermediate waypoints, and an obstacle-avoidance controller, whose task is to avoid obstacles. The 3D positions of moving obstacles are estimated by applying epipolar geometry to the features detected on the moving objects. A command fusion scheme based on conditioned activation of each controller arbitrates between the two behaviors, and a reinforcement learning algorithm is used to adapt the obstacle-avoidance fuzzy controller.
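
The EKF-SLAM formulation above can be made concrete with a minimal predict/update cycle. The following Python sketch is ours, not the thesis code: it assumes a planar unicycle motion model and range-bearing observations of point landmarks, with the state vector stacking the robot pose and 2D landmark positions.

    import numpy as np

    # Minimal EKF-SLAM cycle (illustrative sketch, not the thesis implementation).
    # State x = [xr, yr, theta, l1x, l1y, ...]: robot pose plus landmark positions.

    def predict(x, P, u, Q, dt):
        """Propagate the robot pose with a unicycle model; landmarks stay fixed."""
        v, w = u                                  # linear and angular velocity
        th = x[2]
        x = x.copy()
        x[0] += v * dt * np.cos(th)
        x[1] += v * dt * np.sin(th)
        x[2] += w * dt
        F = np.eye(len(x))                        # motion Jacobian (identity on landmarks)
        F[0, 2] = -v * dt * np.sin(th)
        F[1, 2] = v * dt * np.cos(th)
        P = F @ P @ F.T
        P[:3, :3] += Q                            # 3x3 process noise on the pose only
        return x, P

    def update(x, P, z, i, R):
        """Range-bearing EKF update for landmark number i."""
        lx, ly = x[3 + 2 * i], x[4 + 2 * i]
        dx, dy = lx - x[0], ly - x[1]
        q = dx * dx + dy * dy
        sq = np.sqrt(q)
        zhat = np.array([sq, np.arctan2(dy, dx) - x[2]])
        H = np.zeros((2, len(x)))                 # measurement Jacobian
        H[0, :3] = [-dx / sq, -dy / sq, 0.0]
        H[1, :3] = [dy / q, -dx / q, -1.0]
        H[0, 3 + 2 * i: 5 + 2 * i] = [dx / sq, dy / sq]
        H[1, 3 + 2 * i: 5 + 2 * i] = [-dy / q, dx / q]
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        y = z - zhat
        y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi   # wrap the bearing innovation
        return x + K @ y, (np.eye(len(x)) - K @ H) @ P

Initializing a new landmark (the 3D feature initialization from visual geometry mentioned in the abstract) would insert new entries into x and P before update is called.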
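
The background GMM update in the MSE stage is not detailed in the abstract; a per-pixel mixture update in the spirit of Stauffer and Grimson, with all thresholds and learning rates chosen by us, could look like the following.

    import numpy as np

    # Per-pixel background GMM update (a sketch; K, ALPHA and the thresholds
    # are our assumptions, and the learning rate rho is simplified).

    K, ALPHA, T_MATCH, T_BG = 3, 0.01, 2.5, 0.7

    def update_pixel(w, mu, var, value):
        """Update one pixel's K-component mixture with a new grayscale value.
        Returns (w, mu, var, is_foreground)."""
        matched = np.abs(value - mu) < T_MATCH * np.sqrt(var)
        if matched.any():
            k = int(np.argmax(matched))       # best-matching component
            rho = ALPHA                       # simplified learning rate
            mu[k] += rho * (value - mu[k])
            var[k] += rho * ((value - mu[k]) ** 2 - var[k])
            w = (1 - ALPHA) * w
            w[k] += ALPHA
        else:
            k = int(np.argmin(w))             # replace the weakest component
            mu[k], var[k], w[k] = value, 30.0 ** 2, ALPHA
        w /= w.sum()
        # The most reliable components (large weight, small variance) form
        # the background; a pixel matching outside that set is foreground.
        order = np.argsort(-w / np.sqrt(var))
        n_bg = int(np.searchsorted(np.cumsum(w[order]), T_BG)) + 1
        is_foreground = (not matched.any()) or (k not in order[:n_bg])
        return w, mu, var, is_foreground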
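
For the second sensor-integration solution, the correction loop can be sketched as a loosely coupled Kalman filter whose estimated velocities would feed the SLAM prediction step. The state choice, noise levels, and constant-velocity model below are our assumptions.

    import numpy as np

    # Loosely coupled GPS/INS-encoder integration (sketch only).
    # State [x, y, vx, vy]: INS/encoders drive the prediction, GPS corrects it.

    dt = 0.1
    F = np.array([[1., 0., dt, 0.],
                  [0., 1., 0., dt],
                  [0., 0., 1., 0.],
                  [0., 0., 0., 1.]])
    H = np.array([[1., 0., 0., 0.],        # GPS measures position only
                  [0., 1., 0., 0.]])
    Q = np.diag([1e-4, 1e-4, 1e-2, 1e-2])  # process noise
    R = np.diag([4.0, 4.0])                # ~2 m GPS standard deviation

    def step(x, P, accel, gps_fix=None):
        """One integration step; x[2:] (the velocity estimate) is what would
        serve as the prediction input of the SLAM filter."""
        x = F @ x
        x[2:] += accel * dt                # INS/encoder-derived acceleration
        P = F @ P @ F.T + Q
        if gps_fix is not None:            # no update during a GPS outage
            y = gps_fix - H @ x
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            x = x + K @ y
            P = (np.eye(4) - K @ H) @ P
        return x, P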
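
The arbitration between the goal-seeking and obstacle-avoidance behaviors is described as a conditioned activation. A toy version with one input per behavior, triangular memberships, and a 3 m activation band (all our own choices) could be:

    import numpy as np

    # Toy command fusion between two behaviors (memberships and the 3 m
    # activation band are assumptions; the thesis controllers are richer).

    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        return max(0.0, min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)))

    def goal_seeking(heading_error):
        """Steer toward the next waypoint."""
        return float(np.clip(heading_error, -1.0, 1.0))

    def obstacle_avoidance(obstacle_bearing):
        """Steer away from the nearest obstacle."""
        return -float(np.sign(obstacle_bearing))

    def fused_steering(heading_error, obstacle_dist, obstacle_bearing):
        """Conditioned activation: avoidance takes over as obstacles close in."""
        danger = tri(obstacle_dist, -1.0, 0.0, 3.0)  # 1 at contact, 0 beyond 3 m
        return (danger * obstacle_avoidance(obstacle_bearing)
                + (1.0 - danger) * goal_seeking(heading_error))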
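
Estimating the 3D position of a moving obstacle from two views reduces, once the epipolar geometry is known, to triangulation. A standard linear (DLT) triangulation, with projection matrices that would come from the SLAM pose estimates, is sketched below; this is textbook multiple-view geometry, not code from the thesis.

    import numpy as np

    # Linear (DLT) triangulation of one feature seen in two views.
    # P1, P2 are 3x4 projection matrices; u1, u2 are pixel coordinates.

    def triangulate(P1, P2, u1, u2):
        """Return the 3D point that minimizes the algebraic error."""
        A = np.vstack([u1[0] * P1[2] - P1[0],
                       u1[1] * P1[2] - P1[1],
                       u2[0] * P2[2] - P2[0],
                       u2[1] * P2[2] - P2[1]])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]                # dehomogenize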
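
Finally, the reinforcement learning adaptation of the obstacle-avoidance controller can be hinted at with a tabular Q-learning loop over coarse (distance, bearing) states. The thesis adapts a fuzzy controller, which the discrete table below only approximates, and the reward shaping is entirely assumed.

    import numpy as np

    # Tabular Q-learning sketch (state/action discretization and reward are
    # assumptions; the actual thesis adapts fuzzy rule parameters instead).

    N_STATES, N_ACTIONS = 9, 3             # (distance, bearing) bins; turn commands
    Q = np.zeros((N_STATES, N_ACTIONS))
    ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

    def choose_action(s, rng):
        """Epsilon-greedy selection over the turn commands."""
        if rng.random() < EPS:
            return int(rng.integers(N_ACTIONS))
        return int(np.argmax(Q[s]))

    def learn(s, a, reward, s_next):
        """One Q-learning backup; the reward would penalize near-collisions
        and reward progress toward the waypoint."""
        Q[s, a] += ALPHA * (reward + GAMMA * Q[s_next].max() - Q[s, a])

A caller would create the generator once, e.g. rng = np.random.default_rng(0), and alternate choose_action and learn as the robot interacts with its environment.
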
URI/URL: http://dspace.univ-tlemcen.dz/handle/112/2455
Collection(s): Doctorat en Electronique

File(s) in this document:
File: visual-navigation-using.pdf (13.4 MB, Adobe PDF)

