In this paper, we present a framework for 3D visual odometry applied to a hospital transfer robot equipped with an omnidirectional drive system. The approach is based on features extracted from and matched across monocular image sequences. We propose a new feature detection and tracking scheme that is robust to motion blur and well suited to environments, such as hospitals, where local features are sparse and poorly distinctive. Moreover, this approach is particularly well matched to the omnidirectional-drive robot for two reasons: the poor reliability of the odometry of this drive system, and the brisk jerks such robots experience when crossing an uneven floor. Experiments performed on a relatively long path in an indoor environment with repetitive patterns and sparse local features show the effectiveness of the proposed technique in reliably extracting and matching features and in generating correct visual odometry. Results are obtained without the aid of any external sensors other than the robot's low-cost camera. We also present a performance evaluation of the proposed feature detection-descriptor scheme.