When we perform a task, we select actions by imagining their future consequences. The ability to predict future states is therefore an essential capability for robotic agents, allowing them to plan effective actions to accomplish given tasks. In this research, we explore an action-conditioned future image prediction model with a view to its application in navigation tasks for a mobile robot. We investigate the image prediction performance of deep neural network architectures and training strategies with two different camera systems: a conventional front-facing camera with a narrow field of view and high resolution, and an omni-directional camera with a wide field of view and low resolution. We compare the prediction performance of models for these two camera systems, and propose using an image prediction model with the omni-directional camera for robot navigation tasks. We evaluate the prediction performance of each camera system through experiments conducted in a complex living room-like environment, and demonstrate that models with the omni-directional camera outperform models with the conventional front-facing camera. In particular, the model incorporating an action-conditioned long short-term memory successfully predicts future images more than 100 steps ahead in both simulation and real-world scenarios. Furthermore, by integrating the proposed system into an image-prediction-based navigation algorithm, we demonstrate that navigation based on the model with the omni-directional camera succeeds in cases where the model with the conventional front-facing camera fails.
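The core mechanism described above can be illustrated with a minimal sketch. This is a hypothetical NumPy implementation, not the authors' architecture: a single LSTM cell whose input concatenates an image-feature vector with an action vector, rolled out in closed loop so each predicted feature is fed back as the next input. All names (`ActionConditionedLSTM`, `rollout`) and dimensions are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ActionConditionedLSTM:
    """Hypothetical sketch of an action-conditioned LSTM predictor:
    the cell input is [image features; action], and a linear decoder
    maps the hidden state back to the image-feature space."""

    def __init__(self, feat_dim, action_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = feat_dim + action_dim
        # One stacked weight matrix for the four gates
        # (input, forget, cell candidate, output).
        self.W = rng.standard_normal((4 * hidden_dim, in_dim + hidden_dim)) * 0.1
        self.b = np.zeros(4 * hidden_dim)
        self.hidden_dim = hidden_dim
        # Linear decoder from hidden state to predicted image features.
        self.W_dec = rng.standard_normal((feat_dim, hidden_dim)) * 0.1

    def step(self, feat, action, h, c):
        # Standard LSTM update, conditioned on the commanded action.
        z = self.W @ np.concatenate([feat, action, h]) + self.b
        i, f, g, o = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        return self.W_dec @ h, h, c

    def rollout(self, feat0, actions):
        """Closed-loop multi-step prediction: each predicted feature
        is fed back as the input for the next step."""
        h = np.zeros(self.hidden_dim)
        c = np.zeros(self.hidden_dim)
        feat = feat0
        preds = []
        for a in actions:
            feat, h, c = self.step(feat, a, h, c)
            preds.append(feat)
        return np.stack(preds)

# Predict 100 steps ahead from an initial feature given an action sequence.
model = ActionConditionedLSTM(feat_dim=16, action_dim=2, hidden_dim=32)
preds = model.rollout(np.zeros(16), np.zeros((100, 2)))
```

In this closed-loop setting, prediction errors can compound over the rollout, which is why long-horizon prediction (e.g., 100+ steps) is a meaningful benchmark for such models.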