Audio-Visual Self-Supervised Terrain Type Recognition for Ground Mobile Platforms

Akiyoshi Kurobe, Yoshikatsu Nakajima, Kris Kitani, Hideo Saito

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

The ability to recognize and identify terrain characteristics is an essential function required for many autonomous ground robots such as social robots, assistive robots, autonomous vehicles, and ground exploration robots. Recognizing and identifying terrain characteristics is challenging because similar terrains may have very different appearances (e.g., carpet comes in many colors), while terrains with very similar appearance may have very different physical properties (e.g., mulch versus dirt). In order to address the inherent ambiguity in vision-based terrain recognition and identification, we propose a multi-modal self-supervised learning technique that switches between audio features extracted from a microphone attached to the underside of a mobile platform and image features extracted by a camera on the platform to cluster terrain types. The terrain cluster labels are then used to train an image-based real-time CNN (Convolutional Neural Network) to predict terrain type changes. Through experiments, we demonstrate that the proposed self-supervised terrain type recognition method achieves over 80% accuracy, which greatly outperforms several baselines and suggests strong potential for assistive applications.
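The core idea of the abstract — cluster one modality (audio) to obtain pseudo-labels, then train a classifier on the other modality (images) against those labels — can be sketched with stand-in components. This is a minimal illustration, not the authors' implementation: the synthetic feature vectors, the plain k-means clusterer, and the nearest-centroid classifier (standing in for the paper's real-time CNN) are all assumptions made for the sketch.

```python
import random

random.seed(0)

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def init_centroids(points, k):
    """Farthest-point initialization keeps the starting centroids spread out."""
    centroids = [list(points[0])]
    while len(centroids) < k:
        far = max(points, key=lambda p: min(dist2(p, c) for c in centroids))
        centroids.append(list(far))
    return centroids

def kmeans(points, k, iters=20):
    """Minimal Lloyd's k-means: returns a cluster label for each point."""
    centroids = init_centroids(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: dist2(p, centroids[c]))
                  for p in points]
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# Synthetic stand-ins (hypothetical data): paired audio and image feature
# vectors recorded while driving over three distinct terrain types.
audio, image = [], []
for terrain in range(3):
    for _ in range(40):
        audio.append([random.gauss(4.0 * terrain, 0.3) for _ in range(4)])
        image.append([random.gauss(2.0 * terrain, 0.3) for _ in range(6)])

# Step 1 (self-supervision): cluster the audio stream to get pseudo-labels,
# with no human annotation involved.
pseudo = kmeans(audio, k=3)

# Step 2: train an image-based classifier on the audio-derived pseudo-labels;
# a nearest-centroid model stands in for the paper's real-time CNN.
centroids = {
    c: [sum(col) / pseudo.count(c)
        for col in zip(*[im for im, l in zip(image, pseudo) if l == c])]
    for c in set(pseudo)
}

def predict(im):
    return min(centroids, key=lambda c: dist2(im, centroids[c]))

# Agreement of the image classifier with its own pseudo-labels.
agreement = sum(predict(im) == l for im, l in zip(image, pseudo)) / len(image)
```

At inference time only `predict` on image features is needed, which mirrors why the paper's final model can run from the camera alone: the microphone is used only to generate training labels.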

Original language: English
Article number: 9354792
Pages (from-to): 29970-29979
Number of pages: 10
Journal: IEEE Access
Volume: 9
DOIs
Publication status: Published - 2021

Keywords

  • CNN
  • Ground robots
  • assistive application
  • self-supervised learning

ASJC Scopus subject areas

  • Computer Science (all)
  • Materials Science (all)
  • Engineering (all)
