Main Conference Keynotes

We are delighted to announce that the esteemed speakers listed below have graciously accepted our invitation to deliver keynote speeches at the main conference of EMNLP 2024:

Tuesday, November 12, James Knight Center, Time: 09:30 - 10:30: Percy Liang

Open-Source and Science in the Era of Foundation Models

As the capabilities of foundation models skyrocket, openness plummets. In this talk, I argue that open-source models are essential for the long-term goal of building a rigorous foundation for AI. Greater access—from API to open-weight to open-source—enables deeper forms of research. API access allows us to push the frontier of agents, and I will present our recent work on simulation and problem-solving agents. Open weights enable reproducible research on safety, interpretability, and, more generally, “model forensics”. Open-source unlocks fundamental innovations in architectures, training procedures, and data curation methods. Of course, the key obstacle to building open-source models is the resources required (data, compute, and research/engineering). I will conclude with some promising directions that leverage the community and bring us closer to the vision of open-source foundation models.

Bio

Percy Liang is an Associate Professor of Computer Science at Stanford University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011) and the director of the Center for Research on Foundation Models (CRFM). He is currently focused on making foundation models (in particular, language models) more accessible through open-source and understandable through rigorous benchmarking. In the past, he has worked on many topics centered on machine learning and natural language processing, including robustness, interpretability, human interaction, learning theory, grounding, semantics, and reasoning. He is also a strong proponent of reproducibility through the creation of CodaLab Worksheets. His awards include the Presidential Early Career Award for Scientists and Engineers (2019), IJCAI Computers and Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research Fellowship (2015), a Microsoft Research Faculty Fellowship (2014), and paper awards at ACL, EMNLP, ICML, COLT, ISMIR, CHI, UIST, and RSS.

Wednesday, November 13, James Knight Center, Time: 09:00 - 10:00: Anca Dragan

My Journey in AI Safety and Alignment

For nearly a decade now, the problem that has been top of mind for me is how we might enable AI systems to robustly optimize for what people want, and to avoid causing harm – from robots and self-driving cars, to assistive devices and deep brain stimulation, to theory and toy models, to large language models and now Gemini. In this talk, I’ll take the opportunity to share a bit about my journey in this space, what lessons I’ve learned, and how we’re approaching the safety and alignment of frontier models at Google DeepMind.

Bio

Anca Dragan is an Associate Professor in the EECS Department at UC Berkeley, currently on leave to head AI Safety and Alignment at Google DeepMind. The goal of her research at UC Berkeley has been to enable AI agents (from robots to cars to LLMs to recommender systems) to work with, around, and in support of people. Anca runs the InterACT Lab, which focuses on algorithms for human-AI and human-robot interaction. One of the core problems the lab has worked on since its inception is AI alignment: getting AI agents to do what people actually want. This has meant learning reward functions interactively, from diverse forms of human feedback, across different modalities, while maintaining uncertainty. The lab has also contributed to algorithms for human-AI collaboration and coordination, such as agents fluently working together with human-driven avatars in games, assistance and adaptation in brain-machine interfaces, and autonomous cars sharing the road with human drivers. At Google DeepMind, Anca currently leads a collection of teams responsible both for the safety of the current Gemini models and for preparing for Gemini’s capabilities to keep advancing while ensuring that safety advances hand in hand. This means ensuring Gemini models are and will be aligned with human goals and values, including avoiding present-day harms and catastrophic risks, enabling models to better and more robustly understand human preferences, enabling informed oversight, increasing robustness to adversarial attacks, and accounting for the plurality of human values and viewpoints. Previously, she helped found the Berkeley AI Research (BAIR) Lab and served on its steering committee. Anca has been (and still is) a co-PI of the Center for Human-Compatible AI. She has consulted for Waymo for the past six years, helping with the roadmap for deploying an increasingly learning-based safety-critical system. She has been honored with a Sloan Fellowship, the MIT TR35, the Okawa Award, an NSF CAREER Award, and the PECASE Award. Anca takes the most pride in her former students, who have gone on to faculty positions at MIT, Stanford, CMU, and Princeton, and to industry positions at Google DeepMind, Waymo, and Meta.

Thursday, November 14, James Knight Center, Time: 09:00 - 10:00: Tom Griffiths

Bayes in the age of intelligent machines

Recent rapid progress in the creation of artificial intelligence (AI) systems has been driven in large part by innovations in architectures and algorithms for developing large-scale artificial neural networks. As a consequence, it’s natural to ask what role abstract principles of intelligence — such as Bayes’ rule — might play in developing intelligent machines. In this talk, I will argue that there is a new way in which Bayes can be used in the context of AI, more akin to how it is used in cognitive science: providing an abstract description of how agents should solve certain problems, and hence a tool for understanding their behavior. This new role is motivated in large part by the fact that we have succeeded in creating intelligent systems that we do not fully understand, making the problem facing the machine learning researcher more closely parallel that facing the cognitive scientist. I will talk about how this perspective can help us think about making machines with better-informed priors about the world, and how it can give us insight into their behavior by directly creating cognitive models of neural networks.
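
For reference, the abstract principle at the center of this talk can be written in its standard textbook form: Bayes’ rule gives the posterior probability of a hypothesis h after observing data d in terms of the prior and the likelihood (the notation below is generic and not taken from the talk itself):

\[
P(h \mid d) = \frac{P(d \mid h)\, P(h)}{\sum_{h'} P(d \mid h')\, P(h')}
\]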

Bio

Tom Griffiths is the Henry R. Luce Professor of Information Technology, Consciousness and Culture in the Departments of Psychology and Computer Science at Princeton University, where he is also the Director of the Princeton Laboratory for Artificial Intelligence. His research explores connections between human and machine learning, using ideas from statistics and artificial intelligence to understand how people solve the challenging computational problems they encounter in everyday life. Tom completed his PhD in Psychology at Stanford University in 2005, and taught at Brown University and the University of California, Berkeley before moving to Princeton. He has received awards for his research from organizations ranging from the American Psychological Association to the National Academy of Sciences and is a co-author of the book Algorithms to Live By, introducing ideas from computer science and cognitive science to a general audience.