Learning-based cell selection method for femtocell networks

Chaima Dhahri, Tomoaki Ohtsuki

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution

19 Citations (Scopus)

Abstract

In open-access non-stationary femtocell networks, cellular users (also known as macro users, MUs) may join, through a handover procedure, one of the neighboring femtocells to enhance their communications and increase their channel capacities. To avoid frequent communication disruptions caused by effects such as the ping-pong effect, the cell selection method must be effective. Traditionally, cell selection is based on a measured channel/cell quality metric such as the channel capacity, the load of the candidate cell, or the received signal strength (RSS). One problem with such approaches is that presently measured performance does not necessarily reflect future performance, hence the need for a novel cell selection method that can predict over a longer horizon. In this paper, we present a reinforcement learning (RL) approach, namely a Q-learning algorithm, as a generic solution to the cell selection problem in a non-stationary femtocell network. Comparing our solution with different methods from the literature (least loaded (LL), random, and capacity-based), simulation results demonstrate the benefits of learning in terms of gained capacity and number of handovers.
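A minimal illustrative sketch (not from the paper, whose abstract does not detail the state, action, and reward design): the Python snippet below shows how Q-learning could drive cell selection, treating each candidate femtocell as an action, the capacity obtained after attaching as the reward, and an epsilon-greedy policy to balance exploration and exploitation. All names and parameter values (QLearningCellSelector, n_cells, alpha, gamma, epsilon, the handover penalty) are hypothetical.

import random

class QLearningCellSelector:
    """Hypothetical sketch: Q-learning over candidate femtocells.

    Each candidate cell is an action; the reward is the capacity the
    user obtains after connecting. This is an assumption made for
    illustration, not the authors' exact formulation."""

    def __init__(self, n_cells, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = [0.0] * n_cells  # one Q-value per candidate cell
        self.alpha = alpha        # learning rate
        self.gamma = gamma        # discount factor
        self.epsilon = epsilon    # exploration probability

    def select_cell(self):
        # Epsilon-greedy: mostly exploit the best-known cell,
        # occasionally explore another candidate.
        if random.random() < self.epsilon:
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda c: self.q[c])

    def update(self, cell, reward):
        # Standard Q-learning update; with a single (stateless) state,
        # the bootstrap term is the best Q-value over all actions.
        best_next = max(self.q)
        self.q[cell] += self.alpha * (reward + self.gamma * best_next - self.q[cell])

# Usage sketch: the reward could be the measured capacity minus a
# handover penalty, which discourages ping-pong handovers.
selector = QLearningCellSelector(n_cells=4)
current_cell = None
for step in range(1000):
    cell = selector.select_cell()
    capacity = random.uniform(0.0, 10.0)            # stand-in for a real measurement
    penalty = 1.0 if cell != current_cell else 0.0  # hypothetical handover cost
    selector.update(cell, reward=capacity - penalty)
    current_cell = cell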

Original language: English
Title of host publication: IEEE Vehicular Technology Conference
DOIs: 10.1109/VETECS.2012.6240208
Publication status: Published - 2012
Event: IEEE 75th Vehicular Technology Conference, VTC Spring 2012 - Yokohama, Japan
Duration: 2012 May 6 – 2012 Jun 9

Other

Other: IEEE 75th Vehicular Technology Conference, VTC Spring 2012
Country: Japan
City: Yokohama
Period: 12/5/6 – 12/6/9


ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Computer Science Applications
  • Applied Mathematics

Cite this

Dhahri, C & Ohtsuki, T 2012, 'Learning-based cell selection method for femtocell networks', in IEEE Vehicular Technology Conference, article 6240208, IEEE 75th Vehicular Technology Conference, VTC Spring 2012, Yokohama, Japan, 2012 May 6. https://doi.org/10.1109/VETECS.2012.6240208
@inproceedings{010c50e30c8c4d35a7e0382b8b7151c8,
  title = "Learning-based cell selection method for femtocell networks",
  abstract = "In open-access non-stationary femtocell networks, cellular users (also known as macro users, MUs) may join, through a handover procedure, one of the neighboring femtocells to enhance their communications and increase their channel capacities. To avoid frequent communication disruptions caused by effects such as the ping-pong effect, the cell selection method must be effective. Traditionally, cell selection is based on a measured channel/cell quality metric such as the channel capacity, the load of the candidate cell, or the received signal strength (RSS). One problem with such approaches is that presently measured performance does not necessarily reflect future performance, hence the need for a novel cell selection method that can predict over a longer horizon. In this paper, we present a reinforcement learning (RL) approach, namely a Q-learning algorithm, as a generic solution to the cell selection problem in a non-stationary femtocell network. Comparing our solution with different methods from the literature (least loaded (LL), random, and capacity-based), simulation results demonstrate the benefits of learning in terms of gained capacity and number of handovers.",
  author = "Chaima Dhahri and Tomoaki Ohtsuki",
  year = "2012",
  doi = "10.1109/VETECS.2012.6240208",
  language = "English",
  isbn = "9781467309905",
  booktitle = "IEEE Vehicular Technology Conference",
}

TY - GEN

T1 - Learning-based cell selection method for femtocell networks

AU - Dhahri, Chaima

AU - Ohtsuki, Tomoaki

PY - 2012

Y1 - 2012

AB - In open-access non-stationary femtocell networks, cellular users (also known as macro users, MUs) may join, through a handover procedure, one of the neighboring femtocells to enhance their communications and increase their channel capacities. To avoid frequent communication disruptions caused by effects such as the ping-pong effect, the cell selection method must be effective. Traditionally, cell selection is based on a measured channel/cell quality metric such as the channel capacity, the load of the candidate cell, or the received signal strength (RSS). One problem with such approaches is that presently measured performance does not necessarily reflect future performance, hence the need for a novel cell selection method that can predict over a longer horizon. In this paper, we present a reinforcement learning (RL) approach, namely a Q-learning algorithm, as a generic solution to the cell selection problem in a non-stationary femtocell network. Comparing our solution with different methods from the literature (least loaded (LL), random, and capacity-based), simulation results demonstrate the benefits of learning in terms of gained capacity and number of handovers.

UR - http://www.scopus.com/inward/record.url?scp=84864979393&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84864979393&partnerID=8YFLogxK

U2 - 10.1109/VETECS.2012.6240208

DO - 10.1109/VETECS.2012.6240208

M3 - Conference contribution

SN - 9781467309905

BT - IEEE Vehicular Technology Conference

ER -