Adaptive Cache Policy Optimization Through Deep Reinforcement Learning in Dynamic Cellular Networks
Access rights
openAccess
publishedVersion
A1 Original article in a scientific journal
This publication is imported from Aalto University research portal.
View publication in the Research portal
View/Open full text file from the Research portal
Other link related to publication
Date
2024
Language
en
Pages
19
Series
Intelligent and Converged Networks, Volume 5, Issue 2, pp. 81-99
Abstract
We explore the use of caching both at the network edge and within User Equipment (UE) to alleviate the traffic load of wireless networks. We develop a joint cache placement and delivery policy that maximizes the Quality of Service (QoS) while simultaneously minimizing backhaul load and UE power consumption, in the presence of unknown, time-varying file popularity. Because file requests in a time slot are affected by download successes in the previous slot, the caching system becomes a non-stationary Partially Observable Markov Decision Process (POMDP). We solve the problem in a deep reinforcement learning framework based on the Advantage Actor-Critic (A2C) algorithm, comparing Feedforward Neural Networks (FFNN) with a Long Short-Term Memory (LSTM) approach designed specifically to exploit the correlation of the file popularity distribution across time slots. Simulation results show that LSTM-based A2C outperforms FFNN-based A2C in terms of sample efficiency and optimality, demonstrating superior performance on the non-stationary POMDP problem. For caching at the UEs, we provide a distributed algorithm that reaches the objectives dictated by the agent controlling the network, with minimum energy consumption at the UEs and minimum communication overhead.
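As a rough illustration of the approach the abstract describes (not the authors' implementation), the sketch below shows how an LSTM-based A2C update can be structured in PyTorch: a recurrent trunk tracks the time-correlated file popularity that makes the problem a non-stationary POMDP, and separate heads produce the caching policy and the value estimate. The catalogue size, hidden width, loss weights, and the toy single-file caching action are all assumptions introduced here for the example.

```python
# Minimal LSTM-based A2C sketch for a toy cache-placement task.
# All sizes and weights below are illustrative assumptions, not
# values from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_FILES = 8   # catalogue size (assumed)
HIDDEN = 64   # LSTM hidden width (assumed)

class LstmActorCritic(nn.Module):
    """Shared LSTM trunk with separate policy and value heads.

    The recurrent state lets the agent exploit the correlation of
    the file popularity distribution across time slots.
    """
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(N_FILES, HIDDEN, batch_first=True)
        self.policy = nn.Linear(HIDDEN, N_FILES)  # score per file to cache
        self.value = nn.Linear(HIDDEN, 1)         # state-value estimate

    def forward(self, obs_seq, hidden=None):
        out, hidden = self.lstm(obs_seq, hidden)
        return self.policy(out), self.value(out), hidden

def a2c_update(model, opt, obs_seq, actions, returns):
    """One A2C step: policy gradient weighted by the advantage,
    plus a value regression loss and an entropy bonus."""
    logits, values, _ = model(obs_seq)
    values = values.squeeze(-1)
    advantage = returns - values.detach()
    log_probs = F.log_softmax(logits, dim=-1)
    chosen = log_probs.gather(-1, actions.unsqueeze(-1)).squeeze(-1)
    policy_loss = -(chosen * advantage).mean()
    value_loss = F.mse_loss(values, returns)
    entropy = -(log_probs.exp() * log_probs).sum(-1).mean()
    loss = policy_loss + 0.5 * value_loss - 0.01 * entropy
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage: random request histories, cached-file choices, and returns.
model = LstmActorCritic()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
obs = torch.rand(4, 10, N_FILES)            # per-slot request observations
acts = torch.randint(0, N_FILES, (4, 10))   # file cached in each slot
rets = torch.rand(4, 10)                    # discounted returns (dummy)
print(a2c_update(model, opt, obs, acts, rets))
```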
Description
Publisher Copyright: © 2020 Tsinghua University Press.
Keywords
advantage actor-critic, deep reinforcement learning, long short-term memory, non-stationary Partially Observable Markov Decision Process (POMDP), wireless caching
Citation
Srinivasan, A, Amidzade, M, Zhang, J & Tirkkonen, O 2024, 'Adaptive Cache Policy Optimization Through Deep Reinforcement Learning in Dynamic Cellular Networks', Intelligent and Converged Networks, vol. 5, no. 2, pp. 81-99. https://doi.org/10.23919/ICN.2024.0007