Please use this identifier to cite or link to this item: http://hdl.handle.net/10609/148534
Title: Collaborative Spatial Reuse in wireless networks via selfish Multi-Armed Bandits
Authors: Wilhelmi Roca, Francesc
Cano, Cristina  
Neu, Gergely  
Bellalta, Boris  
Jonsson, Anders  
Barrachina-Muñoz, Sergio  
Citation: Wilhelmi, F. [Francesc]; Cano, C. [Cristina]; Neu, G. [Gergely]; Bellalta, B. [Boris]; Jonsson, A. [Anders]; Barrachina-Muñoz, S. [Sergio]. (2019). Collaborative Spatial Reuse in wireless networks via selfish Multi-Armed Bandits. Ad Hoc Networks, 88, 129-141. ISSN 1570-8705.
Abstract: Next-generation wireless deployments are characterized by being dense and uncoordinated, which often leads to inefficient use of resources and poor performance. To solve this, we envision the utilization of completely decentralized mechanisms to enable Spatial Reuse (SR). In particular, we focus on dynamic channel selection and Transmission Power Control (TPC). We rely on Reinforcement Learning (RL), and more specifically on Multi-Armed Bandits (MABs), to allow networks to learn their best configuration. In this work, we study the exploration-exploitation trade-off by means of the ε-greedy, EXP3, UCB and Thompson sampling action-selection strategies, and compare their performance. In addition, we study the implications of selecting actions simultaneously in an adversarial setting (i.e., concurrently), and compare it with a sequential approach. Our results show that optimal proportional fairness can be achieved even when no information about neighboring networks is available to the learners and Wireless Networks (WNs) operate selfishly. However, there is high temporal variability in the throughput experienced by the individual networks, especially for ε-greedy and EXP3. These strategies, contrary to UCB and Thompson sampling, base their operation on the absolute experienced reward rather than on its distribution. We identify the cause of this variability to be the adversarial setting of our setup, in which the most played actions provide intermittently good or poor performance depending on the neighboring decisions. We also show that learning sequentially, even with a selfish strategy, helps minimize this variability. The sequential approach is therefore shown to effectively deal with the challenges posed by the adversarial settings that are typically found in decentralized WNs.
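To make the action-selection idea concrete, below is a minimal, hypothetical sketch of ε-greedy selection over (channel, transmit power) arms, in the spirit of the MAB framing described in the abstract. The arm set, the ε value, and the reward placeholder are illustrative assumptions and do not reproduce the paper's implementation or evaluation setup.

```python
# Hypothetical sketch: epsilon-greedy bandit over (channel, tx power) arms.
# Reward model and parameters are assumptions, not taken from the paper.
import random

channels = [1, 6, 11]          # candidate channels (assumed)
tx_powers_dbm = [5, 15, 20]    # candidate transmit power levels (assumed)
arms = [(c, p) for c in channels for p in tx_powers_dbm]

epsilon = 0.1                  # exploration probability (assumed)
counts = [0] * len(arms)       # times each arm has been played
values = [0.0] * len(arms)     # running mean reward per arm

def observed_reward(arm):
    """Placeholder for the normalized throughput a WN would measure
    after using this (channel, power) configuration for one round."""
    channel, power = arm
    return random.random()     # replace with a real measurement

for t in range(1000):
    # Explore with probability epsilon, otherwise exploit the best empirical mean.
    if random.random() < epsilon:
        k = random.randrange(len(arms))
    else:
        k = max(range(len(arms)), key=lambda i: values[i])

    r = observed_reward(arms[k])

    # Incremental update of the empirical mean reward of the chosen arm.
    counts[k] += 1
    values[k] += (r - values[k]) / counts[k]

best = max(range(len(arms)), key=lambda i: values[i])
print("Configuration with highest empirical reward:", arms[best])
```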
Keywords: high-density wireless networks
interference
decentralized learning
multi-armed bandits
resource allocation
DOI: https://doi.org/10.1016/j.adhoc.2019.01.006
Document type: info:eu-repo/semantics/article
Document version: info:eu-repo/semantics/acceptedVersion
Publication date: 15-May-2018
Publication license: https://creativecommons.org/licenses/by-nc-nd/4.0/
Appears in the collections: Articles científics
Articles

Files in this item:
File: Wilhelmi_adn_coll_merged.pdf (2.46 MB, Adobe PDF)