Evolving subjective utilities: Prisoner's dilemma game examples

Koichi Moriyama, Satoshi Kurihara, Masayuki Numao

Research output: Contribution to conference › Paper › peer-review

8 Citations (Scopus)

Abstract

We have previously proposed the concept of utility-based Q-learning, which supposes that an agent internally has an emotional mechanism deriving subjective utilities from objective rewards, and that the agent uses these utilities as the rewards of Q-learning. We have also proposed such an emotional mechanism that facilitates cooperative actions in Prisoner's Dilemma (PD) games. However, that mechanism was designed and implemented manually in order to force the agents to take cooperative actions in PD games. Since this seems somewhat unnatural, this work considers whether such an emotional mechanism can exist and where it comes from. We try to evolve mechanisms that facilitate cooperative actions in PD games by conducting simulation experiments with a genetic algorithm, and we investigate the evolved mechanisms from various points of view.

Categories and Subject Descriptors: I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence - intelligent agents, multiagent systems

General Terms: Experimentation
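The abstract does not specify the exact form of the emotional mechanism or the genetic algorithm, so the sketch below is only an illustration of the general idea, not the authors' implementation. The payoff matrix, the affine utility transform, the tit-for-tat opponent, and all parameter values are assumptions chosen for demonstration.

```python
# Minimal sketch (assumptions throughout): utility-based Q-learning in an
# iterated Prisoner's Dilemma, with a toy genetic algorithm evolving the
# parameters of a hypothetical affine "emotional" transform from objective
# reward to subjective utility.
import random

# Row player's objective payoffs; 0 = cooperate, 1 = defect.
PAYOFF = {(0, 0): 3, (0, 1): 0, (1, 0): 5, (1, 1): 1}

def subjective_utility(reward, params):
    # Assumed mechanism: utility = scale * reward + bias; the GA evolves params.
    scale, bias = params
    return scale * reward + bias

def lifetime_payoff(params, steps=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    # Stateless Q-learning against a tit-for-tat opponent; Q is updated with
    # subjective utilities, while fitness is the mean *objective* payoff.
    q = [0.0, 0.0]
    opp_last = 0  # tit-for-tat starts by cooperating
    total = 0.0
    for _ in range(steps):
        action = random.randrange(2) if random.random() < epsilon else q.index(max(q))
        reward = PAYOFF[(action, opp_last)]
        utility = subjective_utility(reward, params)
        q[action] += alpha * (utility + gamma * max(q) - q[action])
        total += reward
        opp_last = action  # the opponent copies our last move
    return total / steps

def evolve(pop_size=20, generations=30):
    # Toy GA: truncation selection plus Gaussian mutation on (scale, bias).
    pop = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=lifetime_payoff, reverse=True)
        parents = ranked[:pop_size // 2]
        pop = parents + [(s + random.gauss(0, 0.3), b + random.gauss(0, 0.3))
                         for s, b in random.choices(parents, k=pop_size - len(parents))]
    return max(pop, key=lifetime_payoff)

if __name__ == "__main__":
    best = evolve()
    print("evolved (scale, bias):", best, "mean payoff:", lifetime_payoff(best))
```

Under these assumptions, utility transforms that reward mutual cooperation more strongly than the raw payoff tend to yield higher lifetime payoffs against a reciprocating opponent, which is the intuition behind evolving the mechanism rather than hand-designing it.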

Original language: English
Pages: 217-224
Number of pages: 8
Publication status: Published - 2011 Jan 1
Externally published: Yes
Event: 10th International Conference on Autonomous Agents and Multiagent Systems 2011, AAMAS 2011 - Taipei, Taiwan, Province of China
Duration: 2011 May 2 - 2011 May 6

Other

Other: 10th International Conference on Autonomous Agents and Multiagent Systems 2011, AAMAS 2011
Country/Territory: Taiwan, Province of China
City: Taipei
Period: 11/5/2 - 11/5/6

Keywords

  • Adaptation
  • Evolution
  • Game theory
  • Multiagent learning
  • Reward structures for learning

ASJC Scopus subject areas

  • Artificial Intelligence
