Please use this address to cite this document: http://dspace1.univ-tlemcen.dz/handle/112/23064
Dublin Core metadata (element: value, language):
dc.contributor.author: Koudad, Zoulikha
dc.date.accessioned: 2024-09-23T12:33:59Z
dc.date.available: 2024-09-23T12:33:59Z
dc.date.issued: 2024-07-15
dc.identifier.uri: http://dspace1.univ-tlemcen.dz/handle/112/23064
dc.description.abstract: The hierarchical reinforcement learning framework breaks down the reinforcement learning problem into subtasks or extended actions called options in order to facilitate its resolution. Different models have been proposed in which options were manually predefined or semi-automatically discovered. However, the automatic discovery of options has become a real challenge for research in hierarchical reinforcement learning. In this thesis we propose two automatic option discovery methods for hierarchical reinforcement learning. The first method, which we call FAOD (Fast Automatic Option Discovery), takes inspiration from robot learning methods to categorize the sensorimotor flow during navigation. Thus, the FAOD agent moves along the walls to discover the rooms' contours, closed spaces, doors and bottleneck regions, in order to define termination states and initiation sets for options. In the second contribution, our learning agent uses its sense of direction to discover shortest paths and shortcuts after an exploration phase based on intrinsic motivation, without resorting to algorithms from graph theory; these discoveries subsequently serve to determine the termination conditions and the initiation states of the options. To learn the option policies, the agent uses its exploration experience together with a temporal-difference learning strategy. We tested and validated this approach on different maze problems and on the tic-tac-toe game. (en_US)
dc.language.iso: en (en_US)
dc.publisher: University of Tlemcen (en_US)
dc.relation.ispartofseries: 761 Doct Informatique
dc.subject: Hierarchical reinforcement learning; Reinforcement learning; Option discovery; Markov decision process; Actor-critic learning; Wayfinding; Intrinsic motivation (en_US)
dc.title: Methods for Automatic Option Discovery in Hierarchical Reinforcement Learning (en_US)
dc.type: Thesis (en_US)
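The abstract above builds on the standard "options" abstraction: an option is a triple of an initiation set, an intra-option policy, and a termination condition, and option policies can be learned with temporal-difference updates. As a minimal illustration of that framework (not code from the thesis — the corridor environment, the doorway state, and all numbers are hypothetical), an option that walks a toy 1-D corridor to a doorway, followed by a TD(0) value update along its trajectory, might look like:

```python
# Minimal sketch of the options framework: an option is a triple of
# (initiation set, intra-option policy, termination condition).
# Toy environment: a 1-D corridor with states 0..5 and a "doorway" at 5.
from dataclasses import dataclass
from typing import Callable, Set

@dataclass
class Option:
    initiation_set: Set[int]            # states where the option may be invoked
    policy: Callable[[int], int]        # maps a state to a primitive action
    termination: Callable[[int], bool]  # True when the option terminates

# Illustrative option "go to doorway": always step right, stop at state 5.
go_to_door = Option(
    initiation_set=set(range(5)),
    policy=lambda s: +1,
    termination=lambda s: s == 5,
)

def run_option(option: Option, state: int):
    """Execute the option from `state`; return visited states and total reward."""
    assert state in option.initiation_set
    trajectory, total_reward = [state], 0.0
    while not option.termination(state):
        state += option.policy(state)   # apply the primitive action
        total_reward += -1.0            # step cost of -1 per move
        trajectory.append(state)
    return trajectory, total_reward

def td0_update(V, trajectory, alpha=0.5, gamma=0.9, step_reward=-1.0):
    """TD(0) update of a state-value table along the option's trajectory."""
    for s, s_next in zip(trajectory, trajectory[1:]):
        V[s] += alpha * (step_reward + gamma * V[s_next] - V[s])
    return V

traj, ret = run_option(go_to_door, 0)   # traj = [0, 1, 2, 3, 4, 5], ret = -5.0
V = td0_update([0.0] * 6, traj)
```

In a maze setting like the one the thesis describes, the termination condition would be a discovered doorway or bottleneck state and the initiation set the region from which that doorway is reachable; here both are hard-coded for brevity.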
Collection(s): Doctorat Classique SIC

File(s) in this item:
Methods_for_Automatic_Option_Discovery_in.pdf (3.3 MB, Adobe PDF)


All documents in DSpace are protected by copyright, with all rights reserved.