Using visual cues to perceptually extract sonified data in collaborative, immersive big-data display systems

Recently, multi-modal presentation systems have gained considerable interest for studying big data with interactive user groups. One challenge for these systems is providing a venue for both personalized and shared information. In particular, sound fields containing parallel audio streams can distract users from extracting the information they need. The way the brain processes spatial information allows humans to take in complicated visuals and focus on either details or the whole. Temporal information, however, which is often better presented through audio, is processed differently, making dense sound environments difficult to segregate. In Rensselaer's CRAIVE-Lab, sounds are presented spatially using an array of 134 loudspeakers to address individual participants who are analyzing data together. In this talk, we present and discuss different methods for improving participants' ability to focus on their designated audio streams using co-modulated visual cues. In this scheme, the virtual-reality space is combined with see-through augmented-reality glasses to optimize the boundaries between personalized and global information. [Work supported by NSF #1229391 and the Cognitive and Immersive Systems Laboratory (CISL).]
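
The abstract does not specify how the co-modulation between audio and visual cues is implemented; the sketch below is only a minimal illustration of one possible realization, assuming the brightness of a participant's visual cue tracks the amplitude envelope of that participant's designated audio stream. All names and parameters here (amplitude_envelope, comodulated_brightness, frame_ms, base, depth) are hypothetical and not taken from the paper.

import numpy as np

def amplitude_envelope(signal, sample_rate, frame_ms=20.0):
    # Frame-wise RMS envelope of a mono audio stream (one value per frame).
    frame_len = int(sample_rate * frame_ms / 1000.0)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    return np.sqrt(np.mean(frames ** 2, axis=1))

def comodulated_brightness(envelope, base=0.4, depth=0.6):
    # Map the normalized envelope to a brightness value in [0, 1] that a
    # display or AR client could apply to the participant's visual cue.
    env = envelope / (np.max(envelope) + 1e-12)
    return np.clip(base + depth * env, 0.0, 1.0)

if __name__ == "__main__":
    sr = 48000
    t = np.linspace(0.0, 2.0, 2 * sr, endpoint=False)
    # Stand-in for a sonified data stream: a tone whose level varies over time.
    stream = np.sin(2 * np.pi * 440.0 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 1.5 * t))
    brightness = comodulated_brightness(amplitude_envelope(stream, sr))
    print(len(brightness), "brightness frames, range",
          round(float(brightness.min()), 2), "to", round(float(brightness.max()), 2))

The idea behind such a mapping is that a shared temporal envelope may help a participant perceptually bind the visual cue to their designated stream within a dense sound field; the specific frame size and brightness mapping above are arbitrary choices for illustration.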

Reference

Wendy Lee, Samuel Chabot, and Jonas Braasch, "Using visual cues to perceptually extract sonified data in collaborative, immersive big-data display systems," The Journal of the Acoustical Society of America 141(5), 3896 (2017).

BibTeX

@article{lee2017using,
  title={Using visual cues to perceptually extract sonified data in collaborative, immersive big-data display systems},
  author={Lee, Wendy and Chabot, Samuel and Braasch, Jonas},
  journal={The Journal of the Acoustical Society of America},
  volume={141},
  number={5},
  pages={3896--3896},
  year={2017},
  publisher={ASA}
}