Comparison of congruence judgment and auditory localization tasks for assessing the spatial limits of visual capture

Adam K Bosen, Justin T. Fleming, Sarah E. Brown, Paul D. Allen, William E. O’Neill, Gary D. Paige

Research output: Contribution to journal › Article

2 Citations (Scopus)

Abstract

Vision typically has better spatial accuracy and precision than audition and as a result often captures auditory spatial perception when visual and auditory cues are presented together. One determinant of visual capture is the amount of spatial disparity between auditory and visual cues: when disparity is small, visual capture is likely to occur, and when disparity is large, visual capture is unlikely. Previous experiments have used two methods to probe how visual capture varies with spatial disparity. First, congruence judgment assesses perceived unity between cues by having subjects report whether or not auditory and visual targets came from the same location. Second, auditory localization assesses the graded influence of vision on auditory spatial perception by having subjects point to the remembered location of an auditory target presented with a visual target. Previous research has shown that when both tasks are performed concurrently they produce similar measures of visual capture, but this may not hold when tasks are performed independently. Here, subjects alternated between tasks independently across three sessions. A Bayesian inference model of visual capture was used to estimate perceptual parameters for each session, which were compared across tasks. Results demonstrated that the range of audiovisual disparities over which visual capture was likely to occur was narrower in auditory localization than in congruence judgment, which the model indicates was caused by subjects adjusting their prior expectation that targets originated from the same location in a task-dependent manner.
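The abstract describes a Bayesian inference model of visual capture without reproducing its equations. As an illustration only, the sketch below implements the standard causal-inference model of audiovisual localization that this class of work builds on: an observer weighs the likelihood that both cues arose from one source against the likelihood of two independent sources, using a prior probability of a common cause (`p_common`), and the auditory estimate blends the fused and auditory-alone estimates by that posterior. The function name and all parameter values are hypothetical, not taken from the paper.

```python
import math

def _norm_pdf(x, mu, var):
    """Gaussian density with mean mu and variance var."""
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def causal_inference(x_a, x_v, sigma_a=8.0, sigma_v=2.0,
                     sigma_p=30.0, mu_p=0.0, p_common=0.5):
    """Sketch of a causal-inference model of audiovisual localization.

    x_a, x_v  : noisy auditory and visual measurements (deg azimuth)
    sigma_a/v : sensory noise SDs (audition assumed less precise)
    sigma_p/mu_p : Gaussian spatial prior over source locations
    p_common  : prior probability that both cues share one source

    Returns (posterior probability of a common source, auditory estimate).
    """
    va, vv, vp = sigma_a ** 2, sigma_v ** 2, sigma_p ** 2

    # Likelihood of the measurements under one common source (C = 1),
    # with the latent source location integrated out analytically.
    denom1 = va * vv + va * vp + vv * vp
    like1 = math.exp(-0.5 * ((x_a - x_v) ** 2 * vp
                             + (x_a - mu_p) ** 2 * vv
                             + (x_v - mu_p) ** 2 * va) / denom1) \
            / (2 * math.pi * math.sqrt(denom1))

    # Likelihood under two independent sources (C = 2).
    like2 = _norm_pdf(x_a, mu_p, va + vp) * _norm_pdf(x_v, mu_p, vv + vp)

    # Posterior probability that both targets came from the same location
    # (the quantity probed by the congruence judgment task).
    post_c1 = like1 * p_common / (like1 * p_common + like2 * (1 - p_common))

    # Optimal location estimates under each causal structure.
    s_fused = (x_a / va + x_v / vv + mu_p / vp) / (1 / va + 1 / vv + 1 / vp)
    s_alone = (x_a / va + mu_p / vp) / (1 / va + 1 / vp)

    # Model averaging: the auditory report blends the two estimates,
    # so visual capture is graded with disparity.
    s_hat_a = post_c1 * s_fused + (1 - post_c1) * s_alone
    return post_c1, s_hat_a
```

In this framing, the paper's conclusion corresponds to subjects using a smaller `p_common` during auditory localization than during congruence judgment, which narrows the range of disparities over which the common-source posterior (and hence visual capture) stays high.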

Original language: English (US)
Pages (from-to): 455-471
Number of pages: 17
Journal: Biological Cybernetics
Volume: 110
Issue number: 6
DOI: 10.1007/s00422-016-0706-6
State: Published - Dec 1 2016
Externally published: Yes

Keywords

  • Audiovisual integration
  • Auditory localization
  • Bayesian inference
  • Visual capture

ASJC Scopus subject areas

  • Biotechnology
  • Computer Science (all)

Cite this

Comparison of congruence judgment and auditory localization tasks for assessing the spatial limits of visual capture. / Bosen, Adam K; Fleming, Justin T.; Brown, Sarah E.; Allen, Paul D.; O’Neill, William E.; Paige, Gary D.

In: Biological Cybernetics, Vol. 110, No. 6, 01.12.2016, p. 455-471.

Bosen, Adam K; Fleming, Justin T.; Brown, Sarah E.; Allen, Paul D.; O’Neill, William E.; Paige, Gary D. / Comparison of congruence judgment and auditory localization tasks for assessing the spatial limits of visual capture. In: Biological Cybernetics. 2016; Vol. 110, No. 6, pp. 455-471.
@article{bcbba81815ea495b87c82ca8f9174ae4,
title = "Comparison of congruence judgment and auditory localization tasks for assessing the spatial limits of visual capture",
abstract = "Vision typically has better spatial accuracy and precision than audition and as a result often captures auditory spatial perception when visual and auditory cues are presented together. One determinant of visual capture is the amount of spatial disparity between auditory and visual cues: when disparity is small, visual capture is likely to occur, and when disparity is large, visual capture is unlikely. Previous experiments have used two methods to probe how visual capture varies with spatial disparity. First, congruence judgment assesses perceived unity between cues by having subjects report whether or not auditory and visual targets came from the same location. Second, auditory localization assesses the graded influence of vision on auditory spatial perception by having subjects point to the remembered location of an auditory target presented with a visual target. Previous research has shown that when both tasks are performed concurrently they produce similar measures of visual capture, but this may not hold when tasks are performed independently. Here, subjects alternated between tasks independently across three sessions. A Bayesian inference model of visual capture was used to estimate perceptual parameters for each session, which were compared across tasks. Results demonstrated that the range of audiovisual disparities over which visual capture was likely to occur was narrower in auditory localization than in congruence judgment, which the model indicates was caused by subjects adjusting their prior expectation that targets originated from the same location in a task-dependent manner.",
keywords = "Audiovisual integration, Auditory localization, Bayesian inference, Visual capture",
author = "Bosen, {Adam K} and Fleming, {Justin T.} and Brown, {Sarah E.} and Allen, {Paul D.} and O’Neill, {William E.} and Paige, {Gary D.}",
year = "2016",
month = "12",
day = "1",
doi = "10.1007/s00422-016-0706-6",
language = "English (US)",
volume = "110",
pages = "455--471",
journal = "Biological Cybernetics",
issn = "0340-1200",
publisher = "Springer Verlag",
number = "6",
}
