Use this identifier to cite or link to this item: http://hdl.handle.net/10609/148534
Title: Collaborative Spatial Reuse in wireless networks via selfish Multi-Armed Bandits
Authors: Wilhelmi Roca, Francesc
Cano, Cristina  
Neu, Gergely  
Bellalta, Boris  
Jonsson, Anders  
Barrachina-Muñoz, Sergio  
Citation: Wilhelmi, F. [Francesc]. Cano, C. [Cristina]. Neu, G. [Gergely]. Bellalta, B. [Boris]. Jonsson, A. [Anders]. Barrachina-Muñoz, S. [Sergio]. (2019). Collaborative Spatial Reuse in wireless networks via selfish Multi-Armed Bandits, Ad Hoc Networks, Volume 88, Pages 129-141, ISSN 1570-8705.
Abstract: Next-generation wireless deployments are characterized by being dense and uncoordinated, which often leads to inefficient use of resources and poor performance. To solve this, we envision the utilization of completely decentralized mechanisms to enable Spatial Reuse (SR). In particular, we focus on dynamic channel selection and Transmission Power Control (TPC). We rely on Reinforcement Learning (RL), and more specifically on Multi-Armed Bandits (MABs), to allow networks to learn their best configuration. In this work, we study the exploration-exploitation trade-off by means of the ε-greedy, EXP3, UCB and Thompson sampling action-selection strategies, and compare their performance. In addition, we study the implications of selecting actions simultaneously in an adversarial setting (i.e., concurrently), and compare it with a sequential approach. Our results show that optimal proportional fairness can be achieved, even when no information about neighboring networks is available to the learners and Wireless Networks (WNs) operate selfishly. However, there is high temporal variability in the throughput experienced by the individual networks, especially for ε-greedy and EXP3. These strategies, contrary to UCB and Thompson sampling, base their operation on the absolute experienced reward, rather than on its distribution. We identify the cause of this variability to be the adversarial setting of our setup, in which the set of most played actions provides intermittently good or poor performance depending on the neighboring decisions. We also show that learning sequentially, even if using a selfish strategy, helps to minimize this variability. The sequential approach is therefore shown to effectively deal with the challenges posed by the adversarial settings that are typically found in decentralized WNs.
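To illustrate the kind of decentralized learner the abstract refers to, the following is a minimal sketch of ε-greedy action selection by a single WN over (channel, transmission power) configurations. It assumes a reward derived from normalized throughput; the channel set, power levels, epsilon value, and all names are illustrative assumptions, not the paper's implementation.

# Illustrative epsilon-greedy bandit for one Wireless Network (WN) choosing a
# (channel, transmit power) configuration. All parameters are assumptions made
# for illustration; they are not taken from the paper.
import random

channels = [1, 6, 11]            # assumed channel set
tx_powers_dbm = [5, 10, 15, 20]  # assumed TPC levels
arms = [(c, p) for c in channels for p in tx_powers_dbm]

epsilon = 0.1
counts = [0] * len(arms)          # times each arm has been played
mean_reward = [0.0] * len(arms)   # running average of observed rewards

def select_arm():
    """Explore with probability epsilon, otherwise exploit the best arm so far."""
    if random.random() < epsilon:
        return random.randrange(len(arms))
    return max(range(len(arms)), key=lambda a: mean_reward[a])

def update(arm, reward):
    """Incrementally update the empirical mean reward of the played arm."""
    counts[arm] += 1
    mean_reward[arm] += (reward - mean_reward[arm]) / counts[arm]

# Example interaction loop: in the setting described above, the reward would be
# the throughput achieved with the chosen configuration, normalized to [0, 1];
# here a random placeholder is used.
for t in range(1000):
    a = select_arm()
    observed_reward = random.random()   # placeholder for measured throughput
    update(a, observed_reward)

UCB or Thompson sampling would replace select_arm() with an index or posterior-sampling rule, which is what the abstract contrasts with the purely reward-averaging behavior of ε-greedy and EXP3.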
Keywords: high-density wireless networks
interference
decentralized learning
multi-armed bandits
resource allocation
DOI: https://doi.org/10.1016/j.adhoc.2019.01.006
Document type: info:eu-repo/semantics/article
Document version: info:eu-repo/semantics/acceptedVersion
Publication date: 15-May-2018
Publication license: https://creativecommons.org/licenses/by-nc-nd/4.0/
Appears in collections: Scientific articles
Articles

Files in this item:
File                          Description    Size     Format
Wilhelmi_adn_coll_merged.pdf                 2,46 MB  Adobe PDF

This item is subject to a Creative Commons License.