Angeliki Lazaridou

  • Staff Research Scientist
  • DeepMind

On opportunities and challenges of communicating using Large Language Models


Abstract: From science fiction to Turing's seminal work on AI, language and communication have been among the central components of intelligent agents. Towards that dream, the new generation of large language models (LLMs) has recently given rise to a set of impressive capabilities, from generating human-like text to engaging in simple, few-turn conversations. So, how close do LLMs bring us to being able to interact with such intelligent agents during our lifetime? In this talk, I will review key recent developments in LLMs from the research community and discuss them in the context of advancing communication research. At the same time, I will highlight the challenges current models face in producing goal-driven, safe and factual dialogues. Capitalizing on their strengths and addressing their weaknesses might allow us to unlock LLMs' full potential in responsibly interacting with us, humans, about different aspects of our lives.

Bio: Angeliki Lazaridou is a Staff Research Scientist at DeepMind. She received a PhD in Brain and Cognitive Sciences from the University of Trento. Her PhD initially focused on developing neural network models and techniques for teaching agents language in grounded environments. However, one day in late 2015, while walking towards the lab, she realized that interaction and communication should play a key role in this learning 💡. This was the beginning of her work in deep learning and multi-agent communication. In the following years, she looked at this fascinating problem from many different angles: how to make this learning more realistic, how to extend findings from cooperative to self-interested agents, and even how to make this communication more closely resemble natural language. Currently, she spends most of her time thinking and working on how to best keep language models in sync with the complex and ever-evolving world.
