TY - GEN
T1 - Near-Optimal Regret Bounds for Contextual Combinatorial Semi-Bandits with Linear Payoff Functions
AU - Takemura, Kei
AU - Ito, Shinji
AU - Hatano, Daisuke
AU - Sumita, Hanna
AU - Fukunaga, Takuro
AU - Kakimura, Naonori
AU - Kawarabayashi, Ken-ichi
N1 - Funding Information:
SI was supported by JST, ACT-I, Grant Number JPMJPR18U5, Japan. TF was supported by JST, PRESTO, Grant Number JPMJPR1759, Japan. NK and KK were supported by JSPS, KAKENHI, Grant Number JP18H05291, Japan.
Publisher Copyright:
Copyright © 2021, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
PY - 2021
Y1 - 2021
N2 - The contextual combinatorial semi-bandit problem with linear payoff functions is a decision-making problem in which a learner chooses a set of arms with feature vectors in each round under given constraints so as to maximize the sum of the rewards of the arms. Several existing algorithms have regret bounds that are optimal with respect to the number of rounds T. However, there is a gap of Õ(max(√d, √k)) between the current best upper and lower bounds, where d is the dimension of the feature vectors, k is the number of arms chosen in a round, and Õ(·) ignores logarithmic factors. The dependence on k and d is of practical importance because k may be larger than T in real-world applications such as recommender systems. In this paper, we fill the gap by improving the upper and lower bounds. More precisely, we show that the C2UCB algorithm proposed by Qin, Chen, and Zhu (2014) has the optimal regret bound Õ(d√(kT) + dk) for partition matroid constraints. For general constraints, we propose an algorithm that modifies the reward estimates of arms in the C2UCB algorithm and demonstrate that it enjoys the optimal regret bound for a more general problem that can take other objectives into account simultaneously. We also show that our technique would be applicable to related problems. Numerical experiments support our theoretical results and considerations.
AB - The contextual combinatorial semi-bandit problem with linear payoff functions is a decision-making problem in which a learner chooses a set of arms with feature vectors in each round under given constraints so as to maximize the sum of the rewards of the arms. Several existing algorithms have regret bounds that are optimal with respect to the number of rounds T. However, there is a gap of Õ(max(√d, √k)) between the current best upper and lower bounds, where d is the dimension of the feature vectors, k is the number of arms chosen in a round, and Õ(·) ignores logarithmic factors. The dependence on k and d is of practical importance because k may be larger than T in real-world applications such as recommender systems. In this paper, we fill the gap by improving the upper and lower bounds. More precisely, we show that the C2UCB algorithm proposed by Qin, Chen, and Zhu (2014) has the optimal regret bound Õ(d√(kT) + dk) for partition matroid constraints. For general constraints, we propose an algorithm that modifies the reward estimates of arms in the C2UCB algorithm and demonstrate that it enjoys the optimal regret bound for a more general problem that can take other objectives into account simultaneously. We also show that our technique would be applicable to related problems. Numerical experiments support our theoretical results and considerations.
UR - http://www.scopus.com/inward/record.url?scp=85130094629&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85130094629&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85130094629
T3 - 35th AAAI Conference on Artificial Intelligence, AAAI 2021
SP - 9791
EP - 9798
BT - 35th AAAI Conference on Artificial Intelligence, AAAI 2021
PB - Association for the Advancement of Artificial Intelligence
T2 - 35th AAAI Conference on Artificial Intelligence, AAAI 2021
Y2 - 2 February 2021 through 9 February 2021
ER -