
Event Segmentation In Story Listening Using Deep Language Models

Creative Commons Attribution (CC BY) 4.0 license
Abstract

Event segmentation theory posits that we segment continuous experience into discrete events and that event boundaries occur at large transient increases in prediction error, often related to context changes. Identifying event boundaries a priori has been difficult in naturalistic settings. To overcome this challenge for story listening, we used a deep language model (GPT-2) to compute the predicted probability distribution of the next word at each point in the story. For three stories, we computed the surprise, the entropy, and the Kullback-Leibler divergence (KLD) of these distributions. We then asked participants to listen to these stories while marking event boundaries. We used regression models to compare the GPT-2 measures with the human segmentation data. Preliminary results indicate that event boundaries are associated with transient increases in KLD. This supports the hypothesis that prediction error serves as a control mechanism governing event segmentation, and points to important differences between operational definitions of prediction error.
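For readers who want to see how such measures can be derived, the sketch below shows one way to compute per-word surprise, entropy, and KLD from GPT-2's next-word distributions. This is an illustrative reconstruction, not the authors' code: the checkpoint ("gpt2"), the Hugging Face transformers API, and the direction of the KL divergence (current versus preceding prediction) are all assumptions, since the abstract does not specify them.

```python
# Minimal sketch (assumptions: Hugging Face `transformers`, `torch`,
# and the base "gpt2" checkpoint) of per-token surprise, entropy,
# and KL divergence from a language model's next-word distributions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "It was a dark and stormy night, and the travelers lost their way."
inputs = tokenizer(text, return_tensors="pt")
input_ids = inputs["input_ids"][0]

with torch.no_grad():
    logits = model(**inputs).logits[0]        # (seq_len, vocab_size)
probs = torch.softmax(logits, dim=-1)         # p(next token | context)
log_probs = torch.log_softmax(logits, dim=-1)

# Surprise: negative log-probability of the token that actually occurred,
# i.e. -log p(w_t | w_<t) for each token after the first.
surprise = -log_probs[:-1].gather(1, input_ids[1:].unsqueeze(1)).squeeze(1)

# Entropy of each predicted next-word distribution: -sum_w p(w) log p(w).
entropy = -(probs * log_probs).sum(dim=-1)

# KLD between consecutive predictions; the direction KL(p_t || p_{t-1})
# is one plausible choice, capturing how much the prediction shifts
# after each new word is heard.
kld = (probs[1:] * (log_probs[1:] - log_probs[:-1])).sum(dim=-1)
```

Under the theory described in the abstract, human boundary annotations should coincide with transient peaks in time series like these, which is what the regression analysis tests.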
