Free

NLP With Friends, featured friend: Naomi Saphra

Event Information

Date and Time

Location

Online Event

Event description
Learning Dynamics of LSTM Language Models

About this Event

NLP with Friends seminar series (Website/Twitter)

Speaker: Naomi Saphra, University of Edinburgh

Title: Learning Dynamics of LSTM Language Models

Abstract

When people ask why a neural network is so effective at solving some task, some researchers mean, "How does training impose bias towards effective representations?" This approach can lead to inspecting loss landscapes, analyzing convergence, or identifying critical learning periods. Other researchers mean, "How does this model represent linguistic structure?" This approach can lead to model probing, testing on challenge sets, or inspecting attention distributions. The work in this talk instead considers the question, "How does training impose bias towards linguistic structure?" I will propose a new method for analyzing hierarchical behavior in LSTMs, and apply it in synthetic experiments to illustrate how LSTMs learn like classical parsers.

Bio

Naomi Saphra is a PhD student at the University of Edinburgh, studying the emergence of linguistic structure during neural network training: work at the intersection of linguistics, interpretability, formal grammars, and optimization. They have a surprising number of awards and honors, but all are related to an accident of nature that has limited their ability to type and write. They have been called "the Bill Hicks of radical AI-critical comedy" exactly once.

One line summary

If you want to understand how machines work, ask how they learn.
