The Mandarin Project

Project Description

There are many reasons to learn a foreign language. The ability to communicate in multiple languages can lead both to a deeper understanding of cultural differences throughout our world and to a greater ability to thrive in the global economy.

The Mandarin Project is a cognitive learning environment at the forefront of research in cyber-enabled classrooms for language learning. It leverages cognitive and mixed-reality technology to immerse students in Chinese culture, letting them practice daily tasks and get help from intelligent agents. The project combines established language pedagogy with advanced teaching methods to create intelligent agents that learners can listen to and mimic as part of their learning process, within a simulated cultural context. Teaching and learning research come together in an innovative curriculum for the most widely spoken language in the world, and the unprecedented nature of the Mandarin Project allows researchers to explore the possibilities of using artificial intelligence in the classroom.

In the spring of 2013, professors Lee Sheldon, Ben Chang, and Mei Si collaborated with Jianling Yue and Yalun Zhou to create the Mandarin Project: a hybrid of immersive classroom experiences and virtual reality adventures designed to engage students as they master Chinese. Lee Sheldon, a professional game writer, designer, and current professor at WPI, is the author of the bestselling book The Multiplayer Classroom: Designing Coursework as a Game (2011); his book Character Development and Storytelling for Games (Second Edition, 2013) is a standard text in the gaming industry. Ben Chang is an electronic artist and director of the Games and Simulation Arts and Sciences program at Rensselaer; his work explores the intersections of virtual environments and experimental gaming, bringing out the chaotic, human qualities in technological systems. Mei Si's expertise is in artificial intelligence and its application in virtual and mixed realities; her research concentrates on computer-aided interactive narratives, embodied conversational agents, and pervasive user interfaces, elements that make virtual environments more engaging and effective. During their collaboration, the team adopted the philosophy that language is always situated in a culture and a physical location, connected to a place and to people rather than existing in isolation. Experiencing language in that cultural and spatial context is valuable in itself, and the added elements of gameplay and narrative further heighten motivation, interest, and engagement.

The Mandarin Project uses speech-based dialogue for communication and information exchange in Mandarin between students and agents in an immersive environment. This environment, housed within the Situations Room, allows students to engage in task-oriented group activities focused on communication and other social practices. The project leverages technical breakthroughs from the Cognitive and Immersive Systems Lab to support the pedagogical innovations used in a language teaching course beginning in the Summer Arch of 2019. To realize this idea of immersive language learning, Sheldon, Chang, Zhou, Yue, and Si developed several prototypes, The Lost Manuscript and The Tea House Experience, as virtual reality games. In these games, Chinese language students begin the session imagining themselves in a regular classroom, but soon find they must take a surprise study trip to China. Students become immersed in Chinese locations and culture, where they must interact with various characters to complete tasks. The situations they find themselves in are interesting and mysterious, but above all designed to help students develop fluency approaching that of a native speaker.

The cognitive aspects of the Mandarin Project were then further developed by the Cognitive and Immersive Systems Lab (CISL) into its current iteration as a virtual cultural immersion trip to China. CISL is a collaboration between IBM Research and RPI, and the Mandarin Project is one of the four use cases in the lab. Its goals include combining cognitive and immersive technology to let students engage in a cultural environment, practice daily tasks, and obtain help from intelligent agents. The Mandarin Project leverages cognitive technology to act as a personalized language tutor: students can listen to and repeat correct pronunciation and intonation while learning the language.

Mission

The Mandarin Project is working to integrate three key components to create a new type of learning environment: games, immersion, and interaction with cognitive agents. At a time when most language learning researchers are working with virtual or augmented reality, the Mandarin Project, through the Cognitive and Immersive Systems Lab, is investigating human-scale environments where students can physically walk around without wearing specialized equipment. These affordances present a unique approach with the potential to greatly benefit the language and cultural learning users gain from the experience.

By using an engaging medium to teach language, students are more motivated to absorb it. An educational game is more engaging than sitting through a lecture and trying to learn a language by rote memorization. In a similar vein, the immersive aspects of games help users feel as though they are practicing their language skills in a real-life situation, but without the stressors that come from speaking with a native speaker. And since the cognitive agents use artificial intelligence for their language processing, students can converse with partners who have infinite patience to help them through their learning experiences.

Cognitive computing systems are increasingly prevalent in our society, and the ways in which we engage information in our daily lives are becoming ever more immersive. The paradigm of human-computer interaction will soon shift toward partnerships like these between human beings and intelligent machines, through human-scale, immersive experiences.

Vision

The Mandarin Project will establish RPI as a leader in cyber-enabled language learning research through in-classroom user studies in Mandarin instruction by Summer Arch 2019. While the project currently focuses on multimodal input, speech, and language, future plans involve a broader, physically immersive experience for a larger number of participants. Students will one day be able to walk through different environments and engage with a society of agents that provide richer, more intimate interactions to refine language skills and nuances. Students will be able to perform daily tasks that solidify the information acquired in class and allow it to be used in a realistic setting. Researchers are currently working to provide a variety of multifaceted storylines that change based on the students' choices.

The future of language practice lies in multimodal immersive systems. Being able to listen to others speak, as well as to receive critique on pronunciation, is imperative to mastering a language. For a tone-based language like Mandarin Chinese, where proper intonation dramatically affects the meaning of words, immersion is critical for improved pronunciation. The syllable ma, for example, can mean "mother" (mā), "hemp" (má), "horse" (mǎ), or "to scold" (mà) depending on its tone.

Current solutions for immersive language practice, however, are often inaccessible. Studying abroad is expensive, and most educational programs do not have the resources to provide one-on-one conversational partners, the kind of immersive, multimodal speech practice that travel offers. While most language learning programs are eager to incorporate new technology, the options currently available make it challenging to find viable solutions. Many language learning services are conveniently offered online, yet they focus on memorizing vocabulary rather than on speech practice. As a result, they underperform compared to immersive, multimodal speech practice.

Approach

Pedagogy

The Mandarin Project leverages speech-based dialogue for communication and information exchange in Mandarin between students and agents in an immersive environment. It utilizes task-based language learning to give students meaningful language practice and develop a sense of confidence and fluency. Students engage in task-oriented group activities through communication and interpersonal social practices. The project aims to expose students to the most effective language learning environment possible: regular practice with the Mandarin Project may help students speak more naturally and acquire the language faster, since they can interact in real-life situations without needing to travel abroad.

Without an immersive learning environment, students struggle to learn to speak on the level of a native speaker. Rather than teaching basic vocabulary and having students rely on outside media to learn natural speech structure, the Mandarin Project lets students learn the language as they would while living in a Chinese-speaking community. The narrative storytelling aspect of the project keeps the linguistics and vocabulary students learn current with modern speech. The project takes a holistic view of Mandarin linguistics, vocabulary, and culture, aiming to produce students who can speak with a proficient level of fluency and understanding.

The current lesson focuses on ordering from a restaurant entirely in Mandarin. Students must navigate through different restaurant-oriented tasks, such as asking for a table, ordering from a waiter, and paying for their meal. The students taking part in this simulation are not alone, however. They have the option to ask for help from the eternally patient waiter, a friendly panda avatar powered by IBM’s Watson. Along with providing vocabulary and pronunciation support, Watson gives cultural information about the menu items to enrich the user’s experience. 
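
Conceptually, such a task-oriented lesson can be pictured as a small dialogue state machine that advances when the student produces an expected phrase and otherwise offers help. The sketch below is purely illustrative: the states, keywords, and replies are invented for the example and do not reflect the project's actual Watson-based dialogue system.

```python
# Toy finite-state dialogue for a restaurant lesson (illustrative only).
# States progress greet -> order -> pay -> done; each state lists keywords
# the student is expected to use and the waiter's scripted reply.
SCRIPT = {
    "greet": {"keywords": ["两位", "三位"],   "reply": "请跟我来。",       "next": "order"},
    "order": {"keywords": ["我要", "来一份"], "reply": "好的，马上就来。", "next": "pay"},
    "pay":   {"keywords": ["买单", "结账"],   "reply": "谢谢，欢迎再来！", "next": "done"},
}
HELP = "没关系，慢慢说。"  # "It's okay, take your time." -- the patient waiter

def waiter_turn(state, utterance):
    """Advance the dialogue if the student used an expected phrase;
    otherwise stay in the same state and offer help."""
    step = SCRIPT[state]
    if any(k in utterance for k in step["keywords"]):
        return step["reply"], step["next"]
    return HELP, state

state = "greet"
reply, state = waiter_turn(state, "你好，两位。")  # advances to "order"
reply, state = waiter_turn(state, "嗯……")          # hesitation: help, still "order"
reply, state = waiter_turn(state, "我要一碗面。")  # advances to "pay"
```

A real system layers speech recognition, gesture input, and Watson-backed natural language understanding on top of a flow like this, but the help-on-miss loop captures why the agent never loses patience.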

Technology—How the Situations Room Is Used for Language Learning

To fully integrate the power of cognitive computing and immersive computing for language teaching, the Cognitive and Immersive Systems Lab set up an immersive classroom environment in EMPAC Studio 2. Chinese and English speech recognition, conversation with natural language processing, multimodal interaction, story, and more were enabled using cognitive computing technologies.

The Mandarin Project is proud to be a research-based Chinese learning environment. Researchers across multiple disciplines are working together to create a modern, multimodal learning environment that uses artificial intelligence to its advantage. This allows the project to be flexible and focus on several areas of development to find what is technically useful. The researchers are working to use artificial intelligence to integrate speech and gesture recognition within the classroom to monitor student interaction and dialogue. 

Currently, the research team is focused on ways to help students visually analyze tone differences to perfect their speech.
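
One way to support such visual tone analysis can be sketched with a simple frame-by-frame pitch (f0) estimator. The code below is a minimal illustration, not the project's actual tone-analysis pipeline: it synthesizes a rising frequency sweep as a stand-in for Mandarin's second (rising) tone and recovers its pitch contour with an autocorrelation estimator.

```python
import numpy as np

def estimate_f0(frame, sr, fmin=80.0, fmax=400.0):
    """Estimate the fundamental frequency of one frame via autocorrelation."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(sr / fmax)  # shortest period considered
    lag_max = int(sr / fmin)  # longest period considered
    lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sr / lag

def pitch_contour(signal, sr, frame_len=2048, hop=512):
    """Frame-wise f0 estimates, e.g. for plotting a tone contour."""
    return [estimate_f0(signal[i:i + frame_len], sr)
            for i in range(0, len(signal) - frame_len, hop)]

# Synthetic stand-in for a rising second tone: a 150 Hz -> 250 Hz sweep.
sr = 16000
t = np.linspace(0, 0.5, int(sr * 0.5), endpoint=False)
f = np.linspace(150, 250, t.size)            # instantaneous frequency
signal = np.sin(2 * np.pi * np.cumsum(f) / sr)

contour = pitch_contour(signal, sr)          # rises from ~160 Hz to ~230 Hz
```

Plotting the contour against frame times yields the rising curve a student could compare with a native speaker's; production systems typically use more robust estimators and, as in the tone-classification work listed under Publications, deep neural networks.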

Findings

Beginning learners

For those just beginning to learn Mandarin Chinese, games proved an engaging form of language instruction. In the most recent test of the Chinese Restaurant simulation, students appreciated being able to gesture at an item on the menu when they were unsure of how to order. They also found the interactive nature of the restaurant game helpful for their acquisition because it mirrors real-life situations.

Additionally, the students had the option to ask the waiter for help in their conversations. Each of the groups tested used this feature of the simulation to facilitate meaningful conversation that broadened their knowledge of the intricacies of the Mandarin language.

Intermediate/Advanced learners

The more advanced learners engaged well with the more experimental equipment. Dr. Helen Zhou, who used the massively multiplayer online role-playing game ZON to teach Mandarin at Michigan State, could use the interactions between the higher-level students and the Mandarin Project to improve her own methods of language instruction. This group has yet to test the Mandarin Project's immersive classroom.

Faculty

Hui Su
Project Director | Director, CISL
Human Computer Interaction, Cognitive User Experience, Visual Analytics, Cloud Computing, Neural Networks
Jonas Braasch
Co-Principal Investigator, Tone & Pitch Analysis Technology | Professor/Director for Academic and Administrative Operations
Spatial Hearing, Intelligent Music Systems
Mei Si
Principal / Co-Principal Investigator, Teaching As Learning Prototype | Associate Professor
Embodied Conversational Agent, Interactive Narrative, Emotion Modeling, Emotion Detection, Virtual/Augmented Reality, Multi-agent System
Yalun (Helen) Zhou
Content, Curriculum, and Evaluation Development | Assistant Professor
Applied linguistics and educational linguistics, Narrative-based, game-enhanced Chinese learning, Learning English/Chinese as a second/foreign language

Affiliated Faculty

Benjamin Chia-Ming Chang
PI / Co-Principal Investigator, Hidden Object / Restaurant Game | Director of GSAS and Professor
Virtual reality, Experimental games, Interactive installation, Open source software
Kelvin Qin
Speech to Text Transcription Expert | Senior Manager, IBM Research, China
Lee Sheldon
Professor of Practice
Game design and development, Multi-player games, Simulation

Research Staff

david allen
Project Manager, User Experience & Visualization Design | Research Associate
Computational Arts, Visualization, Natural Interfaces, Live Performance
Jaimie Drozdal
User Experience Research | Research Specialist
User studies, Human-computer interactions, Second language learning, Group dynamics

Former Research Staff

Robert Rouhani
Developer | Media Integration Specialist, CISL

Students

Lilit Balagyozyan
User Experience Design | UX/UI Designer
User experience and interface design
Zev Battad
Narrative Agent Development | Ph.D. Candidate, Cognitive Science
Interactive narrative generation
Yueqing Dai
User Experience Design | B.S. Communications & Media, Games & Simulation Arts & Sciences
Design
Rahul R. Divekar
Group Dynamics | PhD Candidate, Computer Science
Data analytics
Guangyong Li
Research Developer | B.S. Computer Systems Engineering
Hongyang Lin
Dialog Design & Development | B.S. Computer Science, Games & Simulation Arts & Sciences
Matthew Peveler
Reasoning & Planning | Ph.D. Candidate, Computer Science
The usage of theory of mind reasoning and planning in cognitive and immersive systems
Kang Wang
Facial Recognition | Ph.D. Candidate, Electrical Engineering
Head pose, face recognition
Huang Zou
Signal Processing | B.S. Computer Science

Alumni

Linnea Cajuste
B.S. Computer and Systems Engineering
Craig Carlson
Analogy Engine Development | M.S., Cognitive Science
Computational creativity
Boning Dong
Research Developer | B.S. Computer Systems Engineering, Cognitive Science
William Kim
B.S. Design, Innovation, & Society
Ziyi Lu
B.S. Computer Science
Rose Clare Pisacano
Dialog Design | B.E. Environmental Engineering
Ziyi Song
B.S. Computer Software Engineering and Computer Science
Rui Zhao
Gesture Recognition | Ph.D. Electrical, Computers and System Engineering
Gesture recognition

Publications

2019

  1. david allen, Rahul R. Divekar, Jaimie Drozdal, Lilit Balagyozyan, Shuyue Zheng, Ziyi Song, Huang Zou, Jeramey Tyler, Xiangyang Mou, Rui Zhao, Helen Zhou, Jianling Yue, Jeffrey O. Kephart, Hui Su. "The Rensselaer Mandarin Project — A Cognitive and Immersive Language Learning Environment," Proceedings of the AAAI Conference on Artificial Intelligence - Demonstration Track, vol. 33, pp. 9845-9846. 2019. [Publication, Abstract]
  2. Jeramey Tyler, Huang Zou, Helen Zhou, Hui Su, and Jonas Braasch. "Automated Mandarin tone classification using deep neural networks trained on a large speech dataset," The Journal of the Acoustical Society of America 145, no. 3 (2019): 1814-1814. [Publication, Abstract]
  3. Rahul R. Divekar, Xiangyang Mou, Lisha Chen, Maíra Gatti de Bayser, Melina Alberio Guerra, Hui Su. "Embodied Conversational AI Agents in a Multi-modal Multi-agent Competitive Dialogue," Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence Demos. Pages 6512-6514 [Publication, Abstract]
  4. Samuel Chabot, Jaimie Drozdal, Yalun Zhou, Hui Su, and Jonas Braasch. "Language Learning in a Cognitive and Immersive Environment Using Contextualized Panoramic Imagery," In: Stephanidis C. (eds) HCI International 2019 - Posters. HCII 2019. Communications in Computer and Information Science, vol 1034. Springer, Cham [Publication, Abstract]

2018

  1. Divekar, Rahul R., Jaimie Drozdal, Yalun Zhou, Ziyi Song, David Allen, Robert Rouhani, Rui Zhao, Shuyue Zheng, Lilit Balagyozyan, and Hui Su. "Interaction Challenges in AI Equipped Environments Built to Teach Foreign Languages Through Dialogue and Task-Completion," In Proceedings of the 2018 on Designing Interactive Systems Conference 2018, pp. 597-609. ACM, 2018. [Publication, Abstract]
  2. Divekar, Rahul R., Yalun Zhou, David Allen, Jaimie Drozdal, and Hui Su. "Building Human-Scale Intelligent Immersive Spaces for Foreign Language Learning," [Publication, Abstract]