Modulation of early auditory processing by visually based sound prediction

Atsushi Aoyama, Hiroshi Endo, Satoshi Honda, Tsunehiro Takeda

Research output: Contribution to journal › Article

10 Citations (Scopus)

Abstract

Brain activity was measured by magnetoencephalography (MEG) to investigate whether the early auditory system can detect changes in audio-visual patterns when the visual part is presented earlier. We hypothesized that a template underlying the mismatch field (MMF) phenomenon, which is usually formed by past sound regularities, is also used in visually based sound prediction. Activity similar to the MMF may be elicited by comparing an incoming sound with the template. The stimulus was modeled after a keyboard: an animation in which one of two keys was depressed was accompanied by either a lower or higher tone. Congruent audio-visual pairs were designed to be frequent and incongruent pairs to be infrequent. Subjects were instructed to predict an incoming sound based on key movement in two sets of trials (prediction condition), whereas they were instructed not to do so in the other two sets (non-prediction condition). For each condition, the movement took 50 ms in one set (Δ = 50 ms) and 300 ms in the other (Δ = 300 ms) to reach the bottom, at which time a tone was delivered. As a result, only under the prediction condition with Δ = 300 ms was additional activity for incongruent pairs observed bilaterally in the supratemporal area within 100-200 ms of the auditory stimulus onset; this activity had spatio-temporal properties similar to those of MMF. We concluded that a template is created by the visually based sound prediction only after the visual discriminative and sound prediction processes have already been performed.

Original language: English
Pages (from-to): 194-204
Number of pages: 11
Journal: Brain Research
Volume: 1068
Issue number: 1
DOIs: 10.1016/j.brainres.2005.11.017
Publication status: Published - 2006 Jan 12

Keywords

  • Magnetoencephalography
  • Mismatch field
  • Visually based sound prediction

ASJC Scopus subject areas

  • Neuroscience(all)
  • Clinical Neurology
  • Developmental Biology
  • Molecular Biology

Cite this

Modulation of early auditory processing by visually based sound prediction. / Aoyama, Atsushi; Endo, Hiroshi; Honda, Satoshi; Takeda, Tsunehiro.

In: Brain Research, Vol. 1068, No. 1, 12.01.2006, p. 194-204.

@article{5bf03d5970cb41ae98e54b8bd705c6a6,
title = "Modulation of early auditory processing by visually based sound prediction",
abstract = "Brain activity was measured by magnetoencephalography (MEG) to investigate whether the early auditory system can detect changes in audio-visual patterns when the visual part is presented earlier. We hypothesized that a template underlying the mismatch field (MMF) phenomenon, which is usually formed by past sound regularities, is also used in visually based sound prediction. Activity similar to the MMF may be elicited by comparing an incoming sound with the template. The stimulus was modeled after a keyboard: an animation in which one of two keys was depressed was accompanied by either a lower or higher tone. Congruent audio-visual pairs were designed to be frequent and incongruent pairs to be infrequent. Subjects were instructed to predict an incoming sound based on key movement in two sets of trials (prediction condition), whereas they were instructed not to do so in the other two sets (non-prediction condition). For each condition, the movement took 50 ms in one set (Δ = 50 ms) and 300 ms in the other (Δ = 300 ms) to reach the bottom, at which time a tone was delivered. As a result, only under the prediction condition with Δ = 300 ms was additional activity for incongruent pairs observed bilaterally in the supratemporal area within 100-200 ms of the auditory stimulus onset; this activity had spatio-temporal properties similar to those of MMF. We concluded that a template is created by the visually based sound prediction only after the visual discriminative and sound prediction processes have already been performed.",
keywords = "Magnetoencephalography, Mismatch field, Visually based sound prediction",
author = "Atsushi Aoyama and Hiroshi Endo and Satoshi Honda and Tsunehiro Takeda",
year = "2006",
month = "1",
day = "12",
doi = "10.1016/j.brainres.2005.11.017",
language = "English",
volume = "1068",
pages = "194--204",
journal = "Brain Research",
issn = "0006-8993",
publisher = "Elsevier",
number = "1",
}

TY - JOUR

T1 - Modulation of early auditory processing by visually based sound prediction

AU - Aoyama, Atsushi

AU - Endo, Hiroshi

AU - Honda, Satoshi

AU - Takeda, Tsunehiro

PY - 2006/1/12

Y1 - 2006/1/12

N2 - Brain activity was measured by magnetoencephalography (MEG) to investigate whether the early auditory system can detect changes in audio-visual patterns when the visual part is presented earlier. We hypothesized that a template underlying the mismatch field (MMF) phenomenon, which is usually formed by past sound regularities, is also used in visually based sound prediction. Activity similar to the MMF may be elicited by comparing an incoming sound with the template. The stimulus was modeled after a keyboard: an animation in which one of two keys was depressed was accompanied by either a lower or higher tone. Congruent audio-visual pairs were designed to be frequent and incongruent pairs to be infrequent. Subjects were instructed to predict an incoming sound based on key movement in two sets of trials (prediction condition), whereas they were instructed not to do so in the other two sets (non-prediction condition). For each condition, the movement took 50 ms in one set (Δ = 50 ms) and 300 ms in the other (Δ = 300 ms) to reach the bottom, at which time a tone was delivered. As a result, only under the prediction condition with Δ = 300 ms was additional activity for incongruent pairs observed bilaterally in the supratemporal area within 100-200 ms of the auditory stimulus onset; this activity had spatio-temporal properties similar to those of MMF. We concluded that a template is created by the visually based sound prediction only after the visual discriminative and sound prediction processes have already been performed.

AB - Brain activity was measured by magnetoencephalography (MEG) to investigate whether the early auditory system can detect changes in audio-visual patterns when the visual part is presented earlier. We hypothesized that a template underlying the mismatch field (MMF) phenomenon, which is usually formed by past sound regularities, is also used in visually based sound prediction. Activity similar to the MMF may be elicited by comparing an incoming sound with the template. The stimulus was modeled after a keyboard: an animation in which one of two keys was depressed was accompanied by either a lower or higher tone. Congruent audio-visual pairs were designed to be frequent and incongruent pairs to be infrequent. Subjects were instructed to predict an incoming sound based on key movement in two sets of trials (prediction condition), whereas they were instructed not to do so in the other two sets (non-prediction condition). For each condition, the movement took 50 ms in one set (Δ = 50 ms) and 300 ms in the other (Δ = 300 ms) to reach the bottom, at which time a tone was delivered. As a result, only under the prediction condition with Δ = 300 ms was additional activity for incongruent pairs observed bilaterally in the supratemporal area within 100-200 ms of the auditory stimulus onset; this activity had spatio-temporal properties similar to those of MMF. We concluded that a template is created by the visually based sound prediction only after the visual discriminative and sound prediction processes have already been performed.

KW - Magnetoencephalography

KW - Mismatch field

KW - Visually based sound prediction

UR - http://www.scopus.com/inward/record.url?scp=31344444434&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=31344444434&partnerID=8YFLogxK

U2 - 10.1016/j.brainres.2005.11.017

DO - 10.1016/j.brainres.2005.11.017

M3 - Article

C2 - 16368082

AN - SCOPUS:31344444434

VL - 1068

SP - 194

EP - 204

JO - Brain Research

JF - Brain Research

SN - 0006-8993

IS - 1

ER -