When a user manipulates a system, an input through the interface (an operation) is converted into the user's intended action according to a mapping that links operations to actions, which we call an "operation mapping". Many operation mappings are designed under assumptions about how a typical user would operate the system, but the optimal mapping can vary from user to user, and a designer cannot prepare every possible mapping in advance. One approach to this problem is to learn the operation mapping autonomously during operation; however, existing methods require manually prepared scenes for learning the mapping. We propose advantage mapping, which enables efficient learning of operation mappings. Based on the idea that scenes in which the user's desired action is predictable are useful for learning operation mappings, advantage mapping extracts scenes according to the entropy of the output of an action value function acquired through reinforcement learning. In our experiment, the user's ideal operation mapping was obtained more accurately from the scenes selected by advantage mapping than from learning through actual play.
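The scene-selection criterion can be illustrated with a minimal sketch. Assuming the action value function's outputs for each scene are available as a vector of Q-values (the function names and the softmax-based policy here are illustrative assumptions, not the paper's exact formulation), scenes where one action clearly dominates yield low entropy and are preferred for learning the mapping:

```python
import numpy as np

def action_entropy(q_values):
    """Entropy of the softmax distribution induced by a scene's action values.

    Low entropy means the action value function strongly prefers one
    action, i.e. the desired action in that scene is predictable.
    """
    q = np.asarray(q_values, dtype=float)
    q = q - q.max()                      # shift for numerical stability
    p = np.exp(q) / np.exp(q).sum()      # softmax over actions
    return float(-(p * np.log(p + 1e-12)).sum())

def select_scenes(scene_q_values, k):
    """Return indices of the k scenes with the lowest action entropy."""
    entropies = [action_entropy(q) for q in scene_q_values]
    return sorted(range(len(entropies)), key=lambda i: entropies[i])[:k]

# Hypothetical example: three scenes, four actions each.
scenes = [
    [0.1, 0.1, 0.1, 0.1],   # no clear preference -> high entropy
    [5.0, 0.0, 0.0, 0.0],   # one dominant action -> low entropy
    [2.0, 1.9, 0.0, 0.0],   # two close actions   -> medium entropy
]
print(select_scenes(scenes, 1))   # the dominant-action scene is chosen first
```

This sketch selects a fixed number of lowest-entropy scenes; an entropy threshold would serve the same purpose.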