TY - GEN
T1 - Learning-based cell selection method for femtocell networks
AU - Dhahri, Chaima
AU - Ohtsuki, Tomoaki
PY - 2012/8/20
Y1 - 2012/8/20
N2 - In open-access non-stationary femtocell networks, cellular users (also known as macro users, MUs) may join, through a handover procedure, one of the neighboring femtocells to enhance their communication and increase their channel capacities. To avoid frequent communication disruptions caused by effects such as the ping-pong effect, the cell selection method must be effective. Traditionally, such a selection is based on a measured channel/cell quality metric such as the channel capacity, the load of the candidate cell, or the received signal strength (RSS). However, one problem with these approaches is that presently measured performance does not necessarily reflect future performance; hence the need for a novel cell selection method that can predict performance over a time horizon. Accordingly, we present in this paper a reinforcement learning (RL) approach, i.e., a Q-learning algorithm, as a generic solution to the cell selection problem in a non-stationary femtocell network. After comparing our cell selection solution with different methods in the literature (least loaded (LL), random, and capacity-based), simulation results demonstrate the benefits of learning in terms of gained capacity and number of handovers.
AB - In open-access non-stationary femtocell networks, cellular users (also known as macro users, MUs) may join, through a handover procedure, one of the neighboring femtocells to enhance their communication and increase their channel capacities. To avoid frequent communication disruptions caused by effects such as the ping-pong effect, the cell selection method must be effective. Traditionally, such a selection is based on a measured channel/cell quality metric such as the channel capacity, the load of the candidate cell, or the received signal strength (RSS). However, one problem with these approaches is that presently measured performance does not necessarily reflect future performance; hence the need for a novel cell selection method that can predict performance over a time horizon. Accordingly, we present in this paper a reinforcement learning (RL) approach, i.e., a Q-learning algorithm, as a generic solution to the cell selection problem in a non-stationary femtocell network. After comparing our cell selection solution with different methods in the literature (least loaded (LL), random, and capacity-based), simulation results demonstrate the benefits of learning in terms of gained capacity and number of handovers.
UR - http://www.scopus.com/inward/record.url?scp=84864979393&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84864979393&partnerID=8YFLogxK
U2 - 10.1109/VETECS.2012.6240208
DO - 10.1109/VETECS.2012.6240208
M3 - Conference contribution
AN - SCOPUS:84864979393
SN - 9781467309905
T3 - IEEE Vehicular Technology Conference
BT - IEEE 75th Vehicular Technology Conference, VTC Spring 2012 - Proceedings
T2 - IEEE 75th Vehicular Technology Conference, VTC Spring 2012
Y2 - 6 May 2012 through 9 May 2012
ER -