In human–robot interaction, a robot must move to a position from which it can obtain precise information about people, such as their positions, postures, and voices. This is because the accuracy of human recognition depends on the positional relation between the person and the robot. In addition, the robot should choose which sensor data to focus on during a task that involves interaction. The robot should therefore adapt its approach path toward people to improve human-recognition accuracy and make the task easier to perform. Accordingly, we need to design a path-planning method that considers sensor characteristics, human-recognition accuracy, and task contents simultaneously. Although some previous studies proposed path-planning methods that consider sensor characteristics, they did not consider the task or the human-recognition accuracy, both of which are important for practical application. Consequently, we present a path-planning method based on multimodal information that fuses the task contents and the human-recognition accuracy simultaneously.
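To illustrate the general idea of fusing task progress and recognition quality in one planner, the following is a minimal sketch, not the paper's actual method: a Dijkstra search on a grid whose edge cost combines travel distance (a stand-in for task contents) with a hypothetical recognition penalty that prefers a distance band around the person. All function names, weights, and the recognition model are illustrative assumptions.

```python
import heapq
import math

def recognition_cost(cell, person, best_range=(1.0, 2.5)):
    """Hypothetical penalty: recognition is assumed best when the robot
    stays within a preferred distance band of the person (assumption,
    not the paper's sensor model)."""
    d = math.dist(cell, person)
    lo, hi = best_range
    if lo <= d <= hi:
        return 0.0
    # Penalty grows with distance outside the preferred band.
    return min(abs(d - lo), abs(d - hi))

def plan_path(grid_size, start, goal, person, w_task=1.0, w_recog=2.0):
    """Dijkstra search whose step cost fuses travel distance (task term)
    with the recognition penalty of the cell being entered."""
    n = grid_size
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        c, cur = heapq.heappop(pq)
        if cur == goal:
            break
        if c > dist.get(cur, math.inf):
            continue  # stale queue entry
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < n and 0 <= nxt[1] < n:
                step = w_task * 1.0 + w_recog * recognition_cost(nxt, person)
                nc = c + step
                if nc < dist.get(nxt, math.inf):
                    dist[nxt] = nc
                    prev[nxt] = cur
                    heapq.heappush(pq, (nc, nxt))
    # Reconstruct the path by walking predecessors back to the start.
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

path = plan_path(6, start=(0, 0), goal=(5, 5), person=(3, 3))
print(path)
```

Raising `w_recog` relative to `w_task` biases the planned path toward cells where recognition is assumed accurate, at the cost of a longer route; this trade-off between task efficiency and recognition quality is the kind of balance the proposed method is meant to resolve.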