Embodied Conversational AI Agents in a Multi-modal Multi-agent Competitive Dialogue

In a setting where two AI agents, embodied as animated humanoid avatars, converse with one human and with each other, we identify two challenges: first, each AI agent must determine which of them the user is addressing; second, each agent must determine whether it may, could, or should speak at the end of a turn. In this work we bring these two challenges together and explore the participation of AI agents in multi-party conversations. Specifically, we present two embodied AI shopkeeper agents that sell similar items and compete on price to win the user's business. In this scenario, we address the first challenge by using head pose (estimated with deep learning techniques) to determine whom the user is talking to. For the second challenge, we use deontic logic to model the rules of a negotiation conversation.
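The sketch below is not the authors' implementation; it is a minimal illustration of the two decisions described above, under assumptions of our own. It assumes a head-pose estimator (e.g., a deep-learning model) that returns the user's yaw angle in degrees, uses hypothetical thresholds and agent names ("left_shopkeeper", "right_shopkeeper"), and reduces the deontic notions of permission ("may speak") and obligation ("should speak") to simple boolean rules over the conversation state.

```python
from dataclasses import dataclass

# Hypothetical yaw thresholds in degrees; real values would be calibrated
# to the avatars' positions on screen.
LEFT_THRESHOLD = -10.0
RIGHT_THRESHOLD = 10.0


def addressee_from_yaw(yaw_degrees: float) -> str:
    """Map the user's head yaw (from a head-pose estimator) to the addressed agent."""
    if yaw_degrees <= LEFT_THRESHOLD:
        return "left_shopkeeper"
    if yaw_degrees >= RIGHT_THRESHOLD:
        return "right_shopkeeper"
    return "ambiguous"  # user is looking roughly between the two avatars


@dataclass
class TurnState:
    addressee: str              # output of addressee_from_yaw
    last_speaker: str           # "user", "left_shopkeeper", or "right_shopkeeper"
    user_asked_for_price: bool  # did the user's last utterance request a price?
    competitor_quoted_price: bool  # did the other shopkeeper just quote a price?


def may_speak(agent: str, state: TurnState) -> bool:
    """Permission: an agent may speak if it was addressed, or if no one was
    clearly addressed and it was not the last speaker."""
    if state.addressee == agent:
        return True
    return state.addressee == "ambiguous" and state.last_speaker != agent


def should_speak(agent: str, state: TurnState) -> bool:
    """Obligation: an agent should speak if it was asked for a price, or if
    the competitor just quoted a price it could undercut."""
    if state.addressee == agent and state.user_asked_for_price:
        return True
    return state.competitor_quoted_price and state.last_speaker != agent


if __name__ == "__main__":
    # Example: the user turns toward the left avatar and asks for a price.
    state = TurnState(
        addressee=addressee_from_yaw(-23.0),
        last_speaker="user",
        user_asked_for_price=True,
        competitor_quoted_price=False,
    )
    for agent in ("left_shopkeeper", "right_shopkeeper"):
        print(agent, "may:", may_speak(agent, state), "should:", should_speak(agent, state))
```

In this toy version, the left shopkeeper is both permitted and obliged to answer, while the right shopkeeper stays silent until, for instance, the competitor's quoted price gives it grounds to interject.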

Reference

"Embodied Conversational AI Agents in a Multi-modal Multi-agent Competitive Dialogue,"

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence Demos. Pages 6512-6514

Bibtex

@inproceedings{divekar2019embodied,
  title={Embodied Conversational AI Agents in a Multi-modal Multi-agent Competitive Dialogue},
  author={Divekar, Rahul R. and Mou, Xiangyang and Chen, Lisha and de Bayser, Maira Gatti and Guerra, Melina Alberio and Su, Hui},
  booktitle={Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19)},
  pages={6512--6514},
  year={2019},
  organization={IJCAI}
}