CISL Research

Multimodal Narrative Generation and Presentation

Multimodal narrative generation and presentation technologies enable the Situations Room to communicate discoveries and insights from the cognitive computers to groups of people. The generated narratives automatically thread information from a graph data store (e.g., DBpedia) into a meaningful and engaging story, and draw on creative computation to make analogies between information points that would otherwise remain buried in the data. These narratives are then presented to the audience in the immersive Situations Room through text, pictures, visualizations, and data sonification techniques.
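As a minimal illustration of the threading idea (not the CISL implementation), the sketch below stands in a small in-memory triple store for a graph data store such as DBpedia, and threads a chain of facts between two entities into an ordered sequence of narrative sentences. All entity and relation names here are hypothetical examples.

```python
from collections import deque

# Hypothetical facts standing in for a graph data store such as DBpedia.
TRIPLES = [
    ("Marie Curie", "was born in", "Warsaw"),
    ("Marie Curie", "worked in", "Physics"),
    ("Warsaw", "is the capital of", "Poland"),
    ("Poland", "is a member of", "the European Union"),
]

def thread_story(start, goal, triples):
    """Breadth-first search over subject->object edges; each hop along the
    discovered path becomes one sentence of the threaded story."""
    edges = {}
    for s, p, o in triples:
        edges.setdefault(s, []).append((p, o))
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return [f"{s} {p} {o}." for s, p, o in path]
        for p, o in edges.get(node, []):
            if o not in seen:
                seen.add(o)
                queue.append((o, path + [(node, p, o)]))
    return []

story = thread_story("Marie Curie", "the European Union", TRIPLES)
print(" ".join(story))
# Marie Curie was born in Warsaw. Warsaw is the capital of Poland.
# Poland is a member of the European Union.
```

A real system would of course query the graph store (e.g., via SPARQL), rank candidate paths for narrative interest, and render the hops with natural-language generation rather than raw triples.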

Active Researchers: 
  • Hui Su — Project Director | Director, CISL
  • David Allen — User Experience & Visualization Design | Research Associate, CISL
  • Robert Rouhani — Developer | Media Integration Specialist, CISL
  • Ben Chang — Principal / Co-Principal Investigator, Hidden Object / Restaurant Game | Associate Professor, Arts
  • Mei Si — Principal / Co-Principal Investigator, Teaching As Learning Prototype | Assistant Professor, Cognitive Science
  • Matt Peveler — Reasoning & Planning | PhD Candidate, Computer Science
  • Zev Battad — Narrative Agent Development | PhD Candidate, Cognitive Science
  • Craig Carlson — Analogy Engine Development | MS Candidate, Cognitive Science
  • Samuel Chabot — Sonification | PhD Candidate, Architectural Acoustics