Webinar series
Pascale Fung (The Hong Kong University of Science and Technology) Safer Generative ConvAI (Thursday, June 1, 2023 - 15:00 CET) Summary: Generative models for Conversational AI are less than a decade old, but they hold great promise for human-machine interactions. Machine responses based on generative models can seem quite fluent and human-like, empathetic and funny, knowledgeable and professional. However, behind their confident voice, generative ConvAI systems can also hallucinate misinformation, express biased and harmful views, and remain not “safe” enough for many real-life applications. The expressive power of generative ConvAI models and their undesirable behavior are two sides of the same coin. How can we harness the fluency, diversity and engagingness of generative ConvAI models while mitigating the downside? In this talk, I will present some of our team’s recent work in making generative ConvAI safer by mitigating hallucinations, misinformation, and toxicity. Bio: Pascale Fung is a Chair Professor at the Department of Electronic & Computer Engineering at The Hong Kong University of Science & Technology (HKUST), and a visiting professor at the Central Academy of Fine Arts in Beijing. She is an elected Fellow of the Association for the Advancement of Artificial Intelligence (AAAI) for her “significant contributions to the field of conversational AI and to the development of ethical AI principles and algorithms”, and an elected Fellow of the Association for Computational Linguistics (ACL) for her “significant contributions towards statistical NLP, comparable corpora, and building intelligent systems that can understand and empathize with humans”.
She is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) for her “contributions to human-machine interactions” and an elected Fellow of the International Speech Communication Association for “fundamental contributions to the interdisciplinary area of spoken language human-machine interactions”. She is the Director of the HKUST Centre for AI Research (CAiRE) and was the founding chair of the Women Faculty Association at HKUST. She is an expert on the Global Future Council, a think tank for the World Economic Forum, and represents HKUST on the Partnership on AI to Benefit People and Society. She is on the Board of Governors of the IEEE Signal Processing Society and a member of the IEEE working group developing a standard, the Recommended Practice for Organizational Governance of Artificial Intelligence. Her research team has won several best and outstanding paper awards at ACL and NeurIPS workshops.
Martin Cooke (Ikerbasque – Basque Foundation for Science) Who needs big data? Listeners' adaptation to extreme forms of variability in speech (Thursday, May 4, 2023 - 15:00 CET) Summary: No theory of speech perception can be considered complete without an explanation of how listeners are able to extract meaning from severely degraded forms of speech. After a brief overview of a century of research that has produced many types of distorted speech, and some anecdotal evidence that automatic speech recognisers still have some way to go to match listeners' performance in this area, I will describe the outcome of one recent study [1] and several ongoing studies into the detailed time course of a listener's response to distorted speech. These studies variously consider the rapidity of adaptation, whether adaptation can only proceed if words are recognised, the degree to which the response to one form of distortion is conditioned on prior experience with other forms, and the nature of adaptation in a language other than one's own native tongue. Taken together, findings from these experiments suggest that listeners are capable of continuous and extremely rapid adaptation to novel forms of speech that differ greatly from the type of input that makes up the vast bulk of their listening experience. It is an open question whether big-data-based automatic speech recognition can offer a similar degree of flexibility. [1] Cooke, M., Scharenborg, O. and Meyer, B. (2022). The time course of adaptation to distorted speech. J. Acoust. Soc. Am. 151, 2636-2646. doi:10.1121/10.0010235 Bio: Martin Cooke is Ikerbasque Research Professor. After starting his career in the UK National Physical Laboratory, he worked at the University of Sheffield for 26 years before taking up his current position.
His research has focused on computational auditory scene analysis, algorithms for robust automatic speech recognition, and human speech perception. His interests also include the effects of noise on talkers as well as listeners, and second-language listening in noise.
Isabelle Augenstein (University of Copenhagen) Beyond Fact Checking — Modelling Information Change in Scientific Communication (Thursday, March 2, 2023 - 15:00 CET) Summary: Most work on scholarly document processing assumes that the information processed is trustworthy and factually correct. However, this is not always the case. There are two core challenges to address: 1) ensuring that scientific publications are credible -- e.g. that claims are not made without supporting evidence, and that all relevant supporting evidence is provided; and 2) ensuring that scientific findings are not misrepresented, distorted or outright misreported when communicated by journalists or the general public. In this talk, I will present some first steps towards addressing these problems, discussing our research on exaggeration detection, scientific fact checking, and modelling information change in scientific communication more broadly. Bio: Isabelle Augenstein is a Professor at the University of Copenhagen, Department of Computer Science, where she heads the Copenhagen Natural Language Understanding research group as well as the Natural Language Processing section. Her main research interests are fact checking, low-resource learning, and explainability. Prior to starting a faculty position, she was a postdoctoral researcher at University College London, and before that a PhD student at the University of Sheffield. In October 2022, Isabelle Augenstein became Denmark’s youngest ever female full professor. She currently holds a prestigious ERC Starting Grant on “Explainable and Robust Automatic Fact Checking”, as well as its Danish equivalent, a DFF Sapere Aude Research Leader fellowship on “Learning to Explain Attitudes on Social Media”. She is a member of the Young Royal Danish Academy of Sciences and Letters, and Vice President-Elect of SIGDAT, which organises the EMNLP conference series.
Thomas Hueber (CNRS/GIPSA-lab) Computational model of speech learning, a focus on the acoustic-articulatory mapping (Thursday, February 2, 2023 - 15:00 CET) Summary: Speech production is a complex motor process involving several physiological phenomena, such as the neural, nervous and muscular activities that drive our respiratory, laryngeal and articulatory movements. Modeling speech production, in particular the relationship between articulatory gestures (tongue, lips, jaw, velum) and acoustic realizations of speech, is a challenging, and still evolving, research question. From an applied perspective, such models could be embedded in assistive devices able to restore oral communication when part of the speech production chain is damaged (articulatory synthesis, silent speech interfaces). They could also help rehabilitate speech sound disorders through biofeedback-based therapy relying on articulatory inversion. From a more fundamental research perspective, such models can also be used to question the cognitive mechanisms underlying speech learning, perception and motor control. In this talk, I will present three recent studies conducted in our group to address some of these fundamental questions. In the first one, we quantified the benefit of relying on lip movement when learning speech representations in a self-supervised manner using predictive coding techniques. In the second one, we integrated articulatory priors into the latent space of a variational auto-encoder, with potential application to speech enhancement. In the third one, I will describe a first attempt toward a computational model of speech learning, based on deep learning, which can be used to understand how a child learns the acoustic-to-articulatory inverse mapping in a self-supervised manner. Bio: Thomas Hueber is a senior research scientist at CNRS (« Directeur de recherche ») working at GIPSA-lab in Grenoble, France.
He is head of the CRISSP research team (cognitive robotics, interactive systems and speech processing). He received his Ph.D. in Computer Science from Pierre and Marie Curie University (Paris) in 2009. His research activities focus on automatic speech processing, with a particular interest in (1) the capture, analysis and modeling of the articulatory gestures and electrophysiological signals involved in speech production, (2) the development of speech technologies that exploit these different signals, for speech recognition and synthesis, for people with spoken communication disorders, and (3) the study, through modeling and simulation, of the cognitive mechanisms underlying speech perception and production. He received the 6th Christian Benoit Award (ISCA/AFCP/ACB) in 2011 and the ISCA Award for the best paper published in Speech Communication in 2015. In 2017, he co-edited a special issue on biosignal-based speech processing in IEEE/ACM Transactions on Audio, Speech, and Language Processing. He is also an associate editor of the EURASIP Journal on Audio, Speech, and Music Processing.
Maarit Koponen (University of Eastern Finland) Machine translation as a tool for multilingual information: different users and use scenarios (Thursday, December 1, 2022 - 15:00 CET) Summary: Recent advances in machine translation quality have improved its usefulness as a tool to satisfy the demand for multilingual information and communication. Machine translation is nowadays a common part of professional translation workflows, but it is not a tool exclusive to translators. Users of machine translation can be found, for example, in public service institutions and newsrooms looking to produce and disseminate information in multiple languages. At the same time, machine translation can also offer a way for people to access information that may not otherwise be available in their language. Effective and responsible use of machine translation, however, requires a clear understanding of the potential risks as well as potential benefits. In this talk, I discuss how machine translation is used for producing and accessing information and how various situational factors affect its use in different scenarios. Bio: Dr Maarit Koponen currently works as Professor of Translation Studies at the University of Eastern Finland. She has previously worked as a post-doctoral researcher at the University of Helsinki and as a lecturer at the University of Turku after receiving her PhD in Language Technology at the University of Helsinki in 2016. Her research focuses on translation technology, particularly machine translation, and the effect of technology on translation both in professional and non-professional settings. Starting in October 2022, Koponen leads a work package focusing on linguistic barriers to information accessibility and technological solutions as part of the research project DECA (Democratic epistemic capacities in the age of algorithms), funded by the Academy of Finland Strategic Research Council. 
She chairs Working Group 7 “Language work, language professionals” of the EU COST Action “Language in the Human-Machine Era” (LITHME). She has also worked as a professional translator for several years.
Vered Shwartz (The University of British Columbia-Vancouver) Incorporating Commonsense Reasoning into NLP Models (Thursday, November 3, 2022 - 15:30 CET) Summary: NLP models are primarily supervised and are, by design, trained on a sample of the situations they may encounter in practice. Their ability to generalize to unknown situations and address them reasonably is limited, but may be improved by endowing models with commonsense knowledge and reasoning skills. In this talk, I will present several lines of work in which commonsense is used to improve the performance of NLP tasks: completing missing knowledge in underspecified language, interpreting figurative language, and resolving context-sensitive event coreference. Finally, I will discuss open problems and future directions in building NLP models with commonsense reasoning abilities. Bio: Vered Shwartz is an Assistant Professor of Computer Science at the University of British Columbia and a faculty member at the Vector Institute for Artificial Intelligence. Her research interests include commonsense reasoning, computational semantics and pragmatics, and multiword expressions. Previously, Vered was a postdoctoral researcher at the Allen Institute for AI (AI2) and the University of Washington, and received her PhD in Computer Science from Bar-Ilan University.
Xiang Ren (University of Southern California - USC) Commonsense Reasoning in the Wild (Thursday, October 6, 2022 - 17:00 CET) Summary: Current NLP systems impress us by achieving close-to-human performance on benchmarks of answering commonsense questions or writing interesting stories. However, most of this progress is evaluated using static, closed-ended datasets created for individual tasks. To deploy commonsense reasoning services in the wild, we look to develop and evaluate systems that can generate answers in an open-ended way, perform robust logical reasoning, and generalize across diverse task formats, domains, and datasets. In this talk I will share our effort on introducing new formulations of commonsense reasoning challenges and novel evaluation protocols, towards broadening the scope in approaching machine common sense. We hope that such a shift of evaluation paradigm will encourage more research on externalizing the model reasoning process and improving model robustness and cross-task generalization. Bio: Xiang Ren is an assistant professor and Viterbi Early Career Chair at the USC Computer Science Department, a Research Team Leader at USC ISI, and the director of the Intelligence and Knowledge Discovery (INK) Lab at USC. Before that, he was a research scholar at Stanford University and received his Ph.D. in Computer Science from the University of Illinois Urbana-Champaign. Ren's research seeks to build generalizable natural language processing (NLP) systems which can handle a wide variety of language tasks and situations. He works on new algorithms and datasets to make NLP systems cheaper to develop and maintain, arm machine models with common sense, and improve models’ transparency and reliability to build user trust. His research work has received several best paper awards in top NLP and AI conference venues.
Ren has been awarded an NSF CAREER Award, multiple faculty research awards from Google, Facebook, Amazon, JP Morgan and Sony, and the 2018 ACM SIGKDD Doctoral Dissertation Award. He was named to the Forbes 30 Under 30 Asia list in 2019.