
Invited Speakers

Martha Palmer
Speaker Martha Palmer
Affiliation University of Colorado, USA
Title Deep Semantics
Bio
Martha Palmer is a Professor of Linguistics and Computer Science at the University of Colorado, with 300+ peer-reviewed publications. She is co-Director of CLEAR, Director of CU's Professional Masters in Computational Linguistics, an ACL Fellow, an AAAI Fellow, and co-editor of LiLT: Linguistic Issues in Language Technology. She has served as co-editor of the Journal of Natural Language Engineering, on the editorial boards of Computational Linguistics and TACL, as President of the ACL, Chair of SIGLEX, and Founding Chair of SIGHAN.
Contents
This talk will discuss symbolic representations of sentences in context, focusing on Abstract Meaning Representations (AMRs) and examining their capacity for capturing certain aspects of meaning. A particular focus will be how AMRs can be expanded to encompass figurative language, the recovery of implicit arguments, and relations between events. These examples will be in English, and indeed some features of AMR are English-centric. Uniform Meaning Representations (UMRs), a multi-sentence annotation scheme that revises AMRs to make them more suitable for other languages, especially low-resource languages, will then be introduced. UMRs add more formal treatment of logical scope, number, tense, aspect and modality, as well as temporal relations. The talk will conclude with a discussion of ways in which these meaning representations can be enriched even further, by mapping to Wikidata Qnodes or by accessing the rich logical representations present in VerbNet, thereby providing additional benefit to challenging applications.
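For readers who have not seen the formalism, the following is a minimal sketch of what an AMR looks like: a simplified graph for the sentence "The boy wants to go", written in Penman notation and loaded with the third-party penman Python library. The example sentence and the choice of library are illustrative assumptions, not part of the talk.

    # Minimal, illustrative sketch: parsing a simplified AMR with the
    # third-party `penman` package (pip install penman); not part of the talk.
    import penman

    # AMR for "The boy wants to go", in Penman notation. `want-01` and
    # `go-01` are PropBank framesets; the reused variable `b` marks the
    # boy as both the wanter and the (implicit) goer.
    amr_string = """
    (w / want-01
       :ARG0 (b / boy)
       :ARG1 (g / go-01
                :ARG0 b))
    """

    graph = penman.decode(amr_string)   # parse the notation into a graph
    for triple in graph.triples:        # (source, role, target) tuples
        print(triple)                   # e.g. ('w', ':instance', 'want-01')

UMR extends this kind of graph with information such as scope, number, tense, aspect and modality, as described above.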
Marco Baroni
Speaker Marco Baroni
Affiliation ICREA and Pompeu Fabra University, Spain
Title Machine-to-machine communication: Do we need it? What should it be like? Is it "language"?
Bio
Marco Baroni obtained a PhD in Linguistics from UCLA in 2000. Since 2019, he has been an ICREA research professor at Pompeu Fabra University in Barcelona. Marco's work in the areas of multimodal and compositional distributed semantics has received widespread recognition, including a Google Research Award, an ERC Grant, the ICAI-JAIR best paper prize, and the ACL test-of-time award. Marco was recently awarded another ERC grant to study how to improve machine-to-machine communication, taking inspiration from human and animal communication systems.
Contents
Deep nets are great, and they would be even greater if they could collaborate with each other through a flexible communication protocol. Since many networks come equipped with language model interfaces (that is, they can process and produce tokens in English or other languages), it makes sense to look into natural language as a universal neural network communication protocol. The questions I would like to discuss in my talk, then, are the following:
  • What are the challenges we face in developing a universal language interface among deep networks?
  • What do we currently know about the way in which deep nets are using language to interact in machine-to-machine communication scenarios?
  • Which characteristics of human linguistic communication do we expect or wish machine-to-machine communication to possess? Should it be essentially the same as human-to-human (and human-to-machine) communication?
Kentaro Inui
Speaker Kentaro Inui
Affiliation Tohoku University / RIKEN
Title Explainability in Automated Writing Evaluation
Bio
Kentaro Inui is a Professor at the Graduate School of Information Sciences at Tohoku University, leading the Tohoku NLP Group and the NLU Team at the RIKEN AIP Center. His career has spanned many NLP topics, and his work has received awards including an EACL Outstanding Paper award, an AMT Best Paper award, and a Google AI Focused Research award. He has served on the editorial board of Computational Linguistics, as Editor-in-Chief of the Journal of Information Processing, and as General Chair of EMNLP-IJCNLP 2019, and is currently Chairperson of ANLP, Japan.
Contents
Explainability is a crucial component of natural language processing (NLP) systems. Explanation is communication, and research on explainability is expected to address notions such as communicative goals and common grounding. Automated Writing Evaluation (AWE) is an ideal field in which to contribute to such research. AWE refers to NLP tasks designed to support human learners by evaluating the quality of texts produced in educational contexts, such as written answers to questions and argumentative essays, and by providing constructive feedback. Since explanations are vital in educational assessment, the pedagogical literature is rich with insights that can facilitate research on explainability in AWE systems. This talk will overview recent research trends, explore what styles of explanation are pedagogically suitable and technologically feasible at each layer of language production, ranging from content planning to surface realization, and identify open research issues in order to encourage researchers to enter this emerging field.

COLING 2022 The 29th International Conference on Computational Linguistics