TY - GEN
T1 - Deep reinforcement learning approach to QoE-Driven resource allocation for spectrum underlay in cognitive radio networks
AU - Shah-Mohammadi, Fatemeh
AU - Kwasinski, Andres
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/7/3
Y1 - 2018/7/3
N2 - This paper presents a deep reinforcement learning-based technique for cognitive radio underlay dynamic spectrum access (DSA) that performs distributed joint multi-resource allocation to satisfy the primary link interference constraint and to maximize the secondary network performance, measured through the Mean Opinion Score (MOS) metric. The use of MOS as a performance metric enables seamless integrated resource allocation of dissimilar traffic. The resource allocation problem is solved by utilizing a Deep Q-Network (DQN) algorithm, an advanced deep reinforcement learning approach in which a neural network approximates the Q action-value function. Moreover, the learning process is improved by incorporating transfer learning into the learning procedure. Simulation results show that transfer learning reduces the number of iterations needed for convergence by approximately 25% and 72% compared to the DQN algorithm without transfer learning and to standard Q-learning, respectively.
AB - This paper presents a deep reinforcement learning-based technique for cognitive radio underlay dynamic spectrum access (DSA) that performs distributed joint multi-resource allocation to satisfy the primary link interference constraint and to maximize the secondary network performance, measured through the Mean Opinion Score (MOS) metric. The use of MOS as a performance metric enables seamless integrated resource allocation of dissimilar traffic. The resource allocation problem is solved by utilizing a Deep Q-Network (DQN) algorithm, an advanced deep reinforcement learning approach in which a neural network approximates the Q action-value function. Moreover, the learning process is improved by incorporating transfer learning into the learning procedure. Simulation results show that transfer learning reduces the number of iterations needed for convergence by approximately 25% and 72% compared to the DQN algorithm without transfer learning and to standard Q-learning, respectively.
UR - https://www.scopus.com/pages/publications/85050288966
U2 - 10.1109/ICCW.2018.8403658
DO - 10.1109/ICCW.2018.8403658
M3 - Conference contribution
AN - SCOPUS:85050288966
T3 - 2018 IEEE International Conference on Communications Workshops, ICC Workshops 2018 - Proceedings
SP - 1
EP - 6
BT - 2018 IEEE International Conference on Communications Workshops, ICC Workshops 2018 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2018 IEEE International Conference on Communications Workshops, ICC Workshops 2018
Y2 - 20 May 2018 through 24 May 2018
ER -