Giuseppe Carenini
- Professor
- University of British Columbia
Unlimited discourse structures in the era of distant supervision, pre-trained language models and autoencoders
Abstract: Historically, discourse processing has relied on human-annotated corpora that are very small and lack diversity, often leading to overfitting, poor performance in domain transfer, and minimal success for modern deep-learning solutions. So, wouldn't it be great if we could generate an unlimited amount of discourse structures for both monologues and dialogues, across genres, without involving human annotation? In this talk, I will present some preliminary results on possible strategies to achieve this goal: by leveraging natural text annotations (like sentiment and summaries), by extracting discourse information from pre-trained and fine-tuned language models, or by inducing discourse trees from task-agnostic autoencoding learning objectives. Beyond the many remaining challenges and open issues, I will discuss the potential of these novel approaches not only to boost the performance of discourse parsers (NLU) and text planners (NLG), but also to lead to more explanatory and useful data-driven theories of discourse.
Sessions
- SIGDIAL Day 3: Friday 9th September
- SIGDIAL Day 3 Programme for Friday 9th September