NACCL-38 Plenary Speeches
Dr. Chu-Ren Huang
Keynote speech: Living Language and Embodied Cognition: Thoughts on the Sciences of Language Inspired by the LLM Challenge.
Dr. Rebecca Hwa
Keynote speech: Symbolism and Meaning in Large Language Models.
Dr. Hongyin Tao
Keynote speech: Leveraging Technology for Multimodality-Oriented Research and L2 Teaching: The Case of Compliments in Chinese.
Dr. Marjorie K. M. Chan
Special guest speech: Basic Emotions, Facial Expressions, and Affective Speech Prosody: Harnessing GenAI to Study Young Nezha.
Keynote speech
Living Language and Embodied Cognition: Thoughts on the Sciences of Language Inspired by the LLM Challenge
Chu-Ren Huang, The Hong Kong Polytechnic University
A linguistic act, in its simplest and most essential form, is the exchange of information between two people (i.e., two intelligent agents), with two design characteristics: first, the shared access to the expressed linguistic content and second, the lack of access to each other’s internal cognitive processes. I argue in this talk that current linguistic theories are not designed to account for these two design characteristics of language. I further demonstrate that, by accounting for these two design features, language sciences can leverage large language models (LLMs) to advance our understanding of language, cognition, and how humans interact with the environment.
In order to set the stage for this argument, I first introduce the Russian doll (or nested self-hypernym) metaphor for the lexical conceptualization of ‘language’ shared by most (if not all) languages in the world. We use the lexical concept of ‘language’ to refer to a full range of conceptually related entities, e.g. Chinese as both the modern language we speak and the inherited language that defines our culture, and language as everything from the universal ability to acquire the system (the capitalized Language) to the choice of specific words (e.g., ‘watch your language’). Second, I observe that sensory impairments do not preclude the full development of linguistic and cognitive competence. These two observations, along with evidence that LLMs can learn shared sensorimotor understandings from collective data, suggest that we need to take a closer look at the role of language data, and how it is structured, in cognition.
I then argue that, contrary to commonly held assumptions, the dynamic collection of language data is a constant in human cognition, anchoring the diversity of individual experiences and the variations of human brains. The power of language lies in how it enables the effective learning of knowledge grounded on shared language data. LLMs function by applying massive computing power to an unprecedented collection of language big data. Humans thrive by leveraging our embodied experiences and the embodied architecture of our cognition to learn from a smaller set of shared data. This data-centered view provides a straightforward account of language as a self-adaptive complex system, resulting from the dynamic growth of shared data/experience. In addition, the nested self-hypernym conceptualization of ‘language’ captures the essence of language as a complex system while allowing the foregrounding of different levels of its sub-systems in daily human life.
This new perspective points to a different approach to the multi-brain alignment dilemma. That is, we understand each other not by accessing or guessing at each other’s brain activities but by deriving shared information from our shared linguistic data. Thus, I suggest that the most unique characteristic of human language is its embodied architecture and its instantiation as an ever-evolving collection of shared language data. The information from this shared data architecture has paved a path from LLMs to AI, but it is its embodied architecture that remains a key distinction between human intelligence and machine intelligence.
About the speaker
Chu-Ren Huang, Ph.D. (Cornell), Dr.h.c. (Aix-Marseille), built his academic career at Academia Sinica and The Hong Kong Polytechnic University. He is fascinated by what language can tell us about human cognition and our collective reactions to natural and social environments. His recent books include Cambridge Handbook of Chinese Linguistics, Reference Grammar of Chinese, and Student Grammar of Chinese, all published by Cambridge University Press. He publishes in top journals across different disciplines, including Behavior Research Methods; Computational Linguistics; Cognitive Linguistics; Corpus Linguistics and Linguistic Theory; Humanities and Social Sciences Communications; IEEE Transactions on Affective Computing; Knowledge-Based Systems; Language, Cognition and Neuroscience; Language Resources and Evaluation; Language Sciences; Lingua; Linguistics; Linguistics Vanguard; Natural Language Processing; PLoS One; Sage Open; and Scientific Reports, among others. He has been ranked a Top 2% (career-long) scientist in Artificial Intelligence and Image Processing since 2024.
Chu-Ren Huang is Chair Professor in the Department of Language Science and Technology at The Hong Kong Polytechnic University. His research interests include Chinese linguistics, corpus linguistics, digital humanities, and natural language processing. He was named a 2024–2025 Stanford/Elsevier career-long top scientist (top 2% in AI). He holds a Ph.D. from Cornell University and an honorary doctorate from Aix-Marseille University. He has led the construction of language resources including a balanced corpus of Chinese, historical corpora, treebanks, wordnets, and bilingual ontologies. His publications comprise twenty-six books and journal special issues, including Chinese Language Resources, Cambridge Handbook of Chinese Linguistics, and Language and Ontology, as well as 160 journal articles, 150 book chapters, and 600 conference papers, appearing in journals such as Behavior Research Methods; Computational Linguistics; Cognitive Linguistics; Corpus Linguistics and Linguistic Theory; Digital Scholarship in the Humanities; Humanities and Social Sciences Communications; IEEE Transactions on Affective Computing; Intercultural Pragmatics; Journal of Chinese Linguistics; Knowledge-Based Systems; Language, Cognition and Neuroscience; Language Sciences; Language Resources and Evaluation; Linguistics Vanguard; Natural Language Processing; Perspectives; PLoS One; Sage Open; and Scientific Reports.
Keynote speech
Symbolism and Meaning in Large Language Models
Rebecca Hwa, The George Washington University
Large language models (LLMs) have demonstrated impressive fluency across many languages, raising questions about what kinds of meaning such models can represent. In this talk, I approach these questions through the lens of symbolism, in which meaning is indirect, culturally mediated, and dependent on convention and context rather than explicit statement.
Drawing on recent computational work on symbolic interpretation, I discuss systematic patterns in how LLMs handle symbolic language. In particular, I focus on distinctions between highly conventionalized symbolic associations and interpretations that are situated, novel, or pragmatically complex. These patterns highlight the kinds of cultural regularities that large-scale models readily encode, as well as the interpretive judgments that remain more delicate or unstable.
Rather than treating these behaviors as simple successes or failures, I argue that they are linguistically informative. LLMs can be viewed as empirical artifacts that embody a particular, distributional view of meaning, whose strengths and blind spots bring into focus long-standing questions in linguistics concerning the nature of convention, the role of context and shared cultural knowledge, and the limits of such a view. The broader goal of this talk is to invite dialogue: how might scholars of Chinese linguistics engage with LLMs not only as tools, but also as objects of analysis in the contemporary study of language and meaning?
About the speaker
Dr. Rebecca Hwa is Professor and Chair of Computer Science at The George Washington University. Her research in computational linguistics focuses on understanding persuasion through AI-driven analysis. Recent projects explore student revision behaviors in argumentative writing, the use of symbolism in visual rhetoric, and the detection of group biases in social media. Prior to joining GW, Dr. Hwa was a faculty member at the University of Pittsburgh. She also served as a Program Officer at the National Science Foundation’s Computer and Information Science and Engineering Directorate, where she co-led the National AI Research Institute program. Dr. Hwa holds a Ph.D. in Computer Science from Harvard University (2001) and a B.S. in Computer Science and Engineering from UCLA (1993).
Keynote speech
Leveraging Technology for Multimodality-Oriented Research and L2 Teaching: The Case of Compliments in Chinese
Hongyin Tao, University of California, Los Angeles
Multimodality-oriented research conceptualizes human communication as the integration of multiple semiotic resources, including grammar, conversational structure, prosody, and embodied actions. This perspective offers a novel way of understanding the relationship between language and related fields. As a result, multimodality has become a central concern across diverse disciplines such as discourse linguistics, language learning and teaching, and artificial intelligence.
In this talk, I demonstrate how AI-based tools can be leveraged for both Chinese linguistic research and the teaching of Chinese as a second language (CSL). Specifically, I show how technologies such as video data processing, speech-to-text conversion, machine translation, and visualization software can support fine-grained qualitative analyses of Chinese across multiple domains (e.g., lexico-grammar, pragmatics, multimodal interaction, and translation), while also enabling large-scale empirical investigation. This integrated approach has the potential to generate insights that would otherwise be difficult to obtain.
From an instructional technology perspective, incorporating AI-assisted multimodal analysis into Chinese L2 instruction can provide learners with innovative ways to engage with authentic Chinese materials, thereby enhancing pedagogical effectiveness. As an illustrative case, I focus on the pragmatic function of compliments in video-recorded naturally occurring conversation, demonstrating how findings from multimodal interactional analysis, assisted by AI tools, can be translated into CSL classroom practice.
About the speaker
Hongyin Tao is a professor of Chinese language and linguistics at UCLA. He also holds an honorary Distinguished Chair Professor position at the National Taiwan Normal University and was the 2014 president of the Chinese Language Teachers Association, USA. His research and teaching focus on the social, cultural, and interactional aspects of Chinese language use in context. Among his over 170 publications are Units in Mandarin Conversation: Prosody, Discourse, and Grammar (John Benjamins 1996), Global Chinese Variation - USA (Commercial Press 2022), and Learner Corpora Construction and Explorations in Chinese and Related Languages (Springer 2023).
Special guest speech
Basic Emotions, Facial Expressions, and Affective Speech Prosody: Harnessing GenAI to Study Young Nezha
Marjorie K. M. Chan (陳潔雯), The Ohio State University
Animated films with voice actors provide us with an interesting alternative means to study one important facet of multimodal communication, namely, emotions conveyed through the interaction between facial expressions and affective speech prosody. The subject of this study is young Nezha in the animated film, Ne Zha (2019). Voice actor Lü Yanting (吕艳婷) made tremendous efforts to be authentic in portraying Nezha, a young, impulsive child who, like other children, has not yet learned to dissemble and hide his emotions, whether in his speech or in his facial expressions. The current study uses Paul Ekman’s set of six basic emotions—anger, fear, disgust, happiness, sadness, and surprise (Ekman & Friesen 1971, Ekman 1977)—plus a seventh that he added later, namely, contempt (e.g., Ekman & Cordaro 2011). Proposed as universal, this set of emotions has been widely adopted for studying emotions and facial expressions.
The research here proceeds in two main stages: The first stage consists of an exploration of Ekman’s set of basic emotions as exhibited by Nezha in his interactions with others, and how these emotions are realized in speech and facial expressions. All seven emotions can be found in Nezha, and one example of each emotion is studied here. The contexts, the words spoken, and various prosodic cues (intonation, amplitude, speech tempo, voice quality, etc.) are studied in conjunction with facial expressions and other nonverbal cues to determine how those emotions are conveyed by Nezha.
The second stage harnesses Gemini 3, the free version of Google’s advanced multimodal AI model, which can process text, audio, images, and video, to determine how accurately it can identify the emotion conveyed by Nezha. Input to Gemini 3 is carried out in four separate trials: (1) text only (the transcript of the spoken lines); (2) audio, in two separate steps (audio only, and audio with the text as part of the prompt); (3) image only (the facial expressions); and (4) video (combined audio and image from the film). Gemini 3 had no difficulty correctly identifying three of the emotions—Anger, Happiness and Sadness—across all four modes. It found Contempt the most difficult, misidentifying it in three of the modes and correctly identifying it only in the image mode (i.e., facial expression). In the video mode, Gemini 3 selected Disgust as the emotion and outright rejected Contempt as a possible answer. It offered a fairly detailed account of its choice: “Based on the visual and auditory data in the third video clip [that is, contempt was the third emotion in the series of prompts and hence the third video clip—mc], the subject is exhibiting Disgust. While some elements overlap with contempt, the physical reaction is more visceral and physiological, fitting the criteria for disgust in the Ekman framework.”
A fuller account of this study will be presented at NACCL-38.
Select References
- Ekman, Paul and Wallace V. Friesen. 1971. Constants Across Cultures in the Face and Emotion. Journal of Personality and Social Psychology 17.2: 124-129.
- Ekman, Paul. 1977. Facial Expression. In: A. Siegman and S. Feldstein (eds.), Nonverbal Behavior and Communication, 97-116. New Jersey: Lawrence Erlbaum Associates.
- Ekman, Paul and Daniel Cordaro. 2011. What is Meant by Calling Emotions Basic. Emotion Review 3.4: 364-370.
- Gemini 3. https://gemini.google.com/app.
- Luo, Sitian and Wanyang Xu. 2025. Exclusive Interview with Lyu Yanting: Voicing Nezha, the Legend Who Defied Fate. 21st Century Media. Updated: 2025-02-21. https://www.chinadaily.com.cn/a/202502/21/WS67beb415a310c240449d762d.html (Accessed 02.20.2026)
- Ne Zha. 2019. Full Chinese title: 哪吒之魔童降世 (Nezha zhi Mo Tong Jiang Shi, ‘Nezha, The Demon Boy Descends to Earth’). Screenplay written and directed by Jiaozi. Produced by Yunyun Wei and Wenzhang Liu. Distributed by Beijing Enlight Pictures.
About the speaker
Marjorie K. M. Chan (陳潔雯) is Associate Professor of Chinese Linguistics in the Department of East Asian Languages and Literatures at The Ohio State University, which offers M.A. and Ph.D. degrees in Chinese Linguistics; she also holds a courtesy appointment in Linguistics. She teaches phonetics/phonology, historical linguistics, dialectology, the writing system, and Chinese opera. She has published on Chinese dialects, humor, sentence-final particles, language and gender, and multimodality involving prosody, especially affective prosody.
She has served as an editorial board member for the Journal of the Chinese Language Teachers Association, Journal of Chinese Language Teaching, Korea Journal of Chinese Language and Literature, Frontiers in Chinese Linguistics, and 南方语言学. She has also served as Vice President, President, and Immediate Past President, as well as inaugural webmaster, of the Chinese Language Teachers Association; as Executive Secretary of the International Association of Chinese Linguistics; and as web editor for CHINese Oral and PERforming Literature (CHINOPERL).