Please use this identifier to cite or link to this item: http://hdl.handle.net/10609/122766
Title: Aprendiendo de la memoria RAM de la NES [Learning from the NES RAM]
Author: Barajas Higuera, Daniel
Director: Ventura, Carles  
Tutor: Kanaan-Izquierdo, Samir  
Abstract: In this project, reinforcement learning algorithms are explored to play the games Donkey Kong, Ice Climber, Kung Fu, Super Mario Bros, and Metroid on the NES console. The Deep Q-learning (DQN) algorithm is used to experiment with the RAM representation of the state. Extensions of the DQN algorithm such as Double DQN and Dueling Network Architectures are explored as well. Some simple strategies to reduce the state space and action space are proposed, together with reward functions, to create an easy-to-train agent with low computational resources. Two ways to reduce the RAM representation of the state were tested: the RAM map method worked well only in the training phase, while the activated bytes method also obtained better results in gameplay.
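As a rough illustration of what reducing the 2 KB NES RAM observation might look like, the sketch below shows two simple reductions in Python: masking the RAM to a hand-picked subset of addresses (one plausible reading of the "RAM map" method) and binarizing which bytes are non-zero (one plausible reading of the "activated bytes" method). The byte addresses, threshold, and function names are hypothetical and not taken from the thesis.

```python
import numpy as np

NES_RAM_SIZE = 2048  # the NES exposes 2 KB of work RAM

# Hypothetical subset of RAM addresses assumed to hold useful game state
# (player position, score, timers); a real RAM map would be game-specific.
RAM_MAP_ADDRESSES = np.array([0x0075, 0x0086, 0x00CE, 0x03AD])

def ram_map_state(ram: np.ndarray) -> np.ndarray:
    """Keep only the hand-picked bytes listed in the per-game RAM map."""
    return ram[RAM_MAP_ADDRESSES].astype(np.float32) / 255.0

def activated_bytes_state(ram: np.ndarray) -> np.ndarray:
    """Binary indicator of which RAM bytes are currently non-zero."""
    return (ram > 0).astype(np.float32)

# Example with a fake RAM snapshot
ram = np.zeros(NES_RAM_SIZE, dtype=np.uint8)
ram[0x0075] = 40                           # e.g. player x position
print(ram_map_state(ram).shape)            # (4,)
print(activated_bytes_state(ram).sum())    # 1.0
```

Either reduced vector could then be fed to a DQN-style network in place of the full 2048-byte RAM snapshot, shrinking the state space the agent has to learn from.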
Keywords: RAM
deep Q-learning
NES
Document type: info:eu-repo/semantics/masterThesis
Issue Date: 15-Sep-2020
Publication license: http://creativecommons.org/licenses/by-nc-nd/3.0/es/  
Appears in Collections: Bachelor thesis, research projects, etc.

Files in This Item:
File: dbarajasTFM0920memoria.pdf
Description: Memoria del TFM (master's thesis report)
Size: 12,3 MB
Format: Adobe PDF