Multimodal Narrative Generation and Presentation

CISL Generative Narrative Agent

Project Description

Multimodal narrative generation and presentation technologies enable the Situations Room to communicate discoveries and insights from the cognitive computers to groups of people. The generated narratives both automatically thread information from a graph data store (e.g., DBpedia) into a meaningful and engaging story and draw on creative computation to make analogies between data points that would otherwise remain buried in the data. These narratives are then presented to the audience in the immersive Situations Room through text, pictures, visualizations, and data sonification techniques.


Hui Su
Project Director | Founding Director, CISL
Human Computer Interaction, Cognitive User Experience, Visual Analytics, Cloud Computing, Neural Networks
Mei Si
Principal / Co-Principal Investigator, Teaching As Learning Prototype | Associate Professor, Cognitive Science & Graduate Program Director for Critical Game Design
Embodied Conversational Agent, Interactive Narrative, Emotion Modeling, Emotion Detection, Virtual/Augmented Reality, Multi-agent System

Affiliated Faculty

Benjamin Chang
Principal / Co-Principal Investigator, Hidden Object / Restaurant Game | Professor, Director of GSAS
Virtual Reality, Experimental Games, Interactive Installation, Open Source Software

Research Staff

David Allen
User Experience & Visualization Design | Research Associate
Computational Design, Visualization, Natural Interfaces, Live Performance


Samuel Chabot
Sonification | Ph.D. Candidate, Architectural Acoustics
Spatial Data Sonification
Matthew Peveler
Reasoning & Planning | Ph.D. Candidate, Computer Science
Theory-of-mind reasoning and planning in cognitive and immersive systems