TY - GEN
T1 - Future Image Prediction for Mobile Robot Navigation
T2 - 16th International Conference on Intelligent Autonomous Systems, IAS-16 2020
AU - Ishihara, Yu
AU - Takahashi, Masaki
N1 - Funding Information:
Acknowledgment. This study was supported by “A Framework PRINTEPS to Develop Practical Artificial Intelligence” of the Core Research for Evolutional Science and Technology (CREST) of the Japan Science and Technology Agency (JST) under Grant Number JPMJCR14E3.
Publisher Copyright:
© 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2022
Y1 - 2022
N2 - When we perform a task, we select actions by imagining their future consequences. Hence, the ability to predict future states would also be an essential feature for robotic agents because it would allow them to plan effective actions to accomplish given tasks. In this research, we explore an action-conditioned future image prediction model considering its application to navigation tasks for a mobile robot. We investigate the image prediction performance of deep neural network architectures and training strategies with two different camera systems. One camera system is a conventional front-facing camera that has a narrow field of view with high definition, and the other is an omni-directional camera that has a wide field of view with low definition. We compare the performances of prediction models for these two camera systems, and propose using an image prediction model with the omni-directional camera for the navigation tasks of a robot. We evaluate the prediction performance of each camera system through experiments conducted in a complex living room-like environment. We demonstrate that models with an omni-directional camera system outperform models with a conventional front-facing camera. In particular, the model comprising a combination of action-conditioned long short-term memory successfully predicts future images for states more than 100 steps ahead in both simulation and real-world scenarios. Further, by integrating the proposed system into an image-prediction-based navigation algorithm, we demonstrate that a model with an omni-directional camera can successfully navigate the robot in cases where one with a conventional front-facing camera fails.
AB - When we perform a task, we select actions by imagining their future consequences. Hence, the ability to predict future states would also be an essential feature for robotic agents because it would allow them to plan effective actions to accomplish given tasks. In this research, we explore an action-conditioned future image prediction model considering its application to navigation tasks for a mobile robot. We investigate the image prediction performance of deep neural network architectures and training strategies with two different camera systems. One camera system is a conventional front-facing camera that has a narrow field of view with high definition, and the other is an omni-directional camera that has a wide field of view with low definition. We compare the performances of prediction models for these two camera systems, and propose using an image prediction model with the omni-directional camera for the navigation tasks of a robot. We evaluate the prediction performance of each camera system through experiments conducted in a complex living room-like environment. We demonstrate that models with an omni-directional camera system outperform models with a conventional front-facing camera. In particular, the model comprising a combination of action-conditioned long short-term memory successfully predicts future images for states more than 100 steps ahead in both simulation and real-world scenarios. Further, by integrating the proposed system into an image-prediction-based navigation algorithm, we demonstrate that a model with an omni-directional camera can successfully navigate the robot in cases where one with a conventional front-facing camera fails.
KW - Image prediction
KW - Mobile robot
KW - Omni-directional camera
KW - Visual navigation
UR - http://www.scopus.com/inward/record.url?scp=85128742663&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85128742663&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-95892-3_49
DO - 10.1007/978-3-030-95892-3_49
M3 - Conference contribution
AN - SCOPUS:85128742663
SN - 9783030958916
T3 - Lecture Notes in Networks and Systems
SP - 654
EP - 669
BT - Intelligent Autonomous Systems 16 - Proceedings of the 16th International Conference IAS-16
A2 - Ang Jr, Marcelo H.
A2 - Asama, Hajime
A2 - Lin, Wei
A2 - Foong, Shaohui
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 22 June 2021 through 25 June 2021
ER -